Tagged as LLM

Featured Article : Security Risk From Hidden Backdoors In AI Models

Published 12 February 2026

Recent research shows that AI large language models (LLMs) can be quietly poisoned during training with hidden backdoors that create a serious and hard to detect supply chain security risk for organisations deploying them. Sleeper Agent Backdoors Researchers say sleeper agent backdoors in LLMs pose a security risk to organisations […]

Posted in News | Also tagged , , ,

Tech Insight : Why Teaching AI Bad Behaviour Can Spread Beyond Its Original Task

Published 21 January 2026

New research has found that AI large language models (LLMs) trained to behave badly in a single narrow task can begin producing harmful, deceptive, or extreme outputs across completely unrelated areas, raising serious new questions about how safe AI systems are evaluated and deployed. A Surprising Safety Failure in Modern […]

Posted in News | Also tagged ,