Security Stop Press : LLM Malicious “Prompt Injection” Attack Warning
The UK’s National Cyber Security Centre (NCSC) has warned of the susceptibility of existing Large Language Models (LLMs) to malicious “prompt injection” attacks. These are where a user creates inputs intended to cause an AI model to behave in an unintended way e.g., generating offensive content or disclosing confidential information. This means that businesses integrating LLMs like ChatGPT into…