Security Stop Press: GhostGPT AI Chatbot Threat

Cybercriminals are using an AI chatbot called GhostGPT to generate malware, craft phishing emails, and develop exploit code, according to a recent blog post by security firm Abnormal Security.

Unlike mainstream AI chatbots, GhostGPT has no ethical safeguards, making it a powerful tool for cybercrime.

Available as a Telegram bot, GhostGPT provides instant, uncensored responses and has a strict no-logs policy, making it easy for attackers to use while remaining anonymous. Despite being advertised for “cybersecurity”, it is openly sold on cybercrime forums, with subscriptions starting at $50 per week.

GhostGPT follows a growing trend of AI-powered cybercrime tools, including WormGPT and WolfGPT, which have made attacks more sophisticated and accessible. Security experts warn that by removing ethical restrictions, these chatbots allow criminals to create highly convincing phishing scams, develop malware that evades detection, and exploit software vulnerabilities with minimal effort.

With AI now being used to bypass traditional defences, businesses must adapt their security strategies. Implementing AI-driven threat detection, strengthening email security, and training employees to recognise phishing attempts are essential to mitigating the risks posed by tools like GhostGPT.
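None of these mitigations requires exotic tooling. As one small, concrete example of “strengthening email security”, the sketch below checks whether a domain publishes SPF and DMARC records, the DNS-based policies that make spoofed phishing mail easier to detect and reject. This is a minimal illustration only, assuming the third-party dnspython package is installed; example.com is a placeholder domain.

```python
# Minimal sketch: check whether a sending domain publishes SPF and DMARC
# records, one basic building block of email security.
# Assumes the third-party dnspython package (pip install dnspython).
import dns.resolver


def get_txt_records(name: str) -> list[str]:
    """Return all TXT record strings for a DNS name (empty list if none)."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []
    # A TXT record may be split into several character strings; join them.
    return [b"".join(r.strings).decode("utf-8", "replace") for r in answers]


def check_email_auth(domain: str) -> dict[str, bool]:
    """Report whether the domain publishes SPF and DMARC policies."""
    spf = any(txt.startswith("v=spf1") for txt in get_txt_records(domain))
    dmarc = any(
        txt.startswith("v=DMARC1")
        for txt in get_txt_records(f"_dmarc.{domain}")
    )
    return {"spf": spf, "dmarc": dmarc}


if __name__ == "__main__":
    # Placeholder domain; substitute your own sending domains.
    print(check_email_auth("example.com"))
```

A domain that publishes both records, ideally with a DMARC policy of quarantine or reject, gives receiving mail servers a reliable way to discard messages forged in its name, blunting one of the main attack paths that tools like GhostGPT automate.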