Security Stop-Press: AI Chatbots Are Linking Users to Scam Sites
Chatbots powered by large language models (LLMs) are handing out fake or incorrect login URLs, exposing users to phishing risk, according to research from cybersecurity firm Netcraft. In tests of GPT-4.1 family models, the technology behind chatbots such as Perplexity and Microsoft Copilot, only 66 per cent of the login links provided were correct. The rest pointed to inactive, unrelated, or unclaimed domains that scammers could register and exploit.
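The practical takeaway is that chatbot-suggested login links should be verified rather than followed blindly. As a minimal sketch (not from the Netcraft report; the allowlist contents and the `is_trusted_login_url` helper are illustrative assumptions), an application surfacing AI-generated links could gate them against a curated list of official login domains:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of official login domains. A real deployment would
# draw this from a maintained registry rather than a hard-coded set.
OFFICIAL_LOGIN_DOMAINS = {
    "login.microsoftonline.com",
    "accounts.google.com",
    "www.wellsfargo.com",
}

def is_trusted_login_url(url: str) -> bool:
    """Return True only if the URL uses HTTPS and its host is on the allowlist."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return False
    host = (parsed.hostname or "").lower()
    # Exact-match the host: suffix matching would let look-alike subdomains
    # such as accounts.google.com.evil.example slip through.
    return host in OFFICIAL_LOGIN_DOMAINS

# Example: only surface a chatbot-suggested link if it passes the check.
print(is_trusted_login_url("https://accounts.google.com/signin"))        # True
print(is_trusted_login_url("http://login-micros0ft.example/signin"))     # False
```

An exact-match allowlist is deliberately conservative: it rejects the unclaimed or unrelated domains the research describes, at the cost of needing the list kept up to date.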