Recent research shows that large language models (LLMs) can be quietly poisoned during training with hidden backdoors, creating a serious and hard-to-detect supply chain security risk for organisations deploying them.

Sleeper Agent Backdoors

Researchers say sleeper agent backdoors in LLMs pose a security risk to organisations […]