
New research shows that one in four European organisations have banned Elon Musk’s Grok AI chatbot due to concerns over misinformation, data privacy and reputational risk, making it far more widely rejected than rival tools like ChatGPT or Gemini.
A Trust Gap Is Emerging in the AI Race
The findings from cybersecurity firm Netskope point to a growing shift in how European businesses are evaluating generative AI tools. While platforms like ChatGPT and Gemini continue to gain traction, Grok’s higher rate of rejection suggests that organisations are becoming more selective and are prioritising transparency, reliability and alignment with company values over novelty or brand recognition.
What Is Grok?
Grok is a generative AI chatbot developed by Elon Musk’s company xAI and built into X, the social media platform formerly known as Twitter. Marketed as a bold, “truth-seeking” alternative to mainstream AI tools, Grok is designed to answer user prompts in real time with internet-connected responses. However, a series of controversial and misleading outputs (along with a lack of transparency about how it handles user data and trains its model) have made many organisations wary of its use.
Grok’s Risk Profile Raises Red Flags
While most generative AI tools are being rapidly adopted in European workplaces, Grok appears to be the exception. For example, Netskope’s latest threat report reveals that 25 per cent of European organisations have now blocked the app at network level. In contrast, only 9.8 per cent have blocked OpenAI’s ChatGPT, and just 9.2 per cent have done the same with Google Gemini.
Content Moderation Issue
Part of the issue appears to lie in Grok’s content moderation, or lack thereof. For example, the chatbot has made headlines for spreading inflammatory and false claims, including the promotion of a “white genocide” conspiracy theory in South Africa and casting doubt on key facts about the Holocaust. These incidents appear to have deeply shaken confidence in the platform’s ethical safeguards and prompted scrutiny around how the model handles prompts, training data and user inputs.
Companies More Selective About AI Tools
Gianpietro Cutolo, a cloud threat researcher at Netskope, said the bans on Grok highlight a growing awareness of the risks linked to generative AI. As he explained, organisations are starting to draw clearer lines between different platforms based on how they handle security and compliance. “They’re becoming more savvy that not all AI is equal when it comes to data security,” he said, noting that concerns around reputation, regulation and data protection are now shaping AI adoption decisions.
Privacy and Transparency
Neil Thacker, Netskope’s Global Privacy and Data Protection Officer, believes the trend is indicative of a broader shift in how European firms assess digital tools. “Businesses are becoming aware that not all apps are the same in the way they handle data privacy, ownership of data that is shared with the app, or in how much detail they reveal about the way they train the model with any data that is shared within prompts,” he said.
This appears to be particularly relevant in Europe, where GDPR sets strict requirements on how personal and sensitive data can be used. Grok’s relative lack of clarity over what it does with user input, especially in enterprise contexts, appears to have tipped the scales for many firms.
It also doesn’t help that Grok is closely tied to X, a platform currently under EU investigation for failing to tackle disinformation under the Digital Services Act. The crossover has raised uncomfortable questions about how data might be shared or leveraged across Musk’s various companies.
Not The Only One Blocked
Despite its controversial reputation, it seems that Grok is far from alone in being blocked. The most blacklisted generative AI app in Europe is Stable Diffusion, an image generator from UK-based Stability AI, which is blocked by 41 per cent of organisations due to privacy and licensing concerns.
However, Grok’s fall from grace stands out because of how stark the contrast is with its peers. ChatGPT, for instance, remains by far the most widely used generative AI chatbot in Europe. Netskope’s report found that 91 per cent of European firms now use some form of cloud-based GenAI tool in their operations, suggesting that the appetite for AI is strong, but users are choosing carefully.
The relative trust in OpenAI and Google reflects the degree to which those platforms have invested in transparency, compliance documentation, and enterprise safeguards. Features such as business-specific data privacy settings, clearer disclosures on training practices, and regulated API access have helped cement their position as ‘safe bets’ in regulated industries.
Musk’s Reputation
There’s also a reputational issue at play: Elon Musk has become a polarising figure in both tech and politics, particularly in Europe. For example, Tesla’s EU sales dropped by more than 50 per cent year-on-year last month, with some industry analysts attributing the decline to Musk’s increasingly vocal support of far-right politicians and his role in the Trump administration.
It seems that the backlash may now be spilling over into his other ventures. Grok’s public branding as an unfiltered “truth-seeking” AI has been praised by some users, but in a European context, it risks triggering compliance concerns around hate speech, misinformation, and AI safety.
‘DOGE’ Link
Also, a recent Reuters investigation found that Grok is being quietly promoted within the US federal government through Musk’s (somewhat unpopular) Department of Government Efficiency (DOGE), thereby raising concerns over potential conflicts of interest and handling of sensitive data.
What Are Businesses Doing Instead?
With Grok now off-limits in one in four European organisations, it appears that most companies are leaning into AI platforms with clearer data control options and dedicated enterprise tools. For example, ChatGPT Enterprise and Microsoft’s Copilot (powered by OpenAI’s models) are increasingly popular among large firms for their security features, audit trails, and compatibility with existing workplace platforms like Microsoft 365.
Meanwhile, companies with highly sensitive data are now exploring private GenAI solutions, such as running open-source models like Llama or Mistral on internal infrastructure, or through secured cloud environments provided by AWS, Azure or Google Cloud.
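To illustrate what such a private deployment can look like in practice, here is a minimal sketch assuming an open-source model such as Llama is served on internal infrastructure through an OpenAI-compatible endpoint (the kind servers like vLLM expose); the URL and model name below are placeholders, not references to any real deployment.

```python
import requests

# Hypothetical internal endpoint: servers such as vLLM expose an
# OpenAI-compatible API for open-source models hosted on-premises.
ENDPOINT = "http://llm.internal.example:8000/v1/chat/completions"

payload = {
    "model": "meta-llama/Llama-3-8B-Instruct",  # placeholder model name
    "messages": [
        {"role": "user", "content": "Summarise the attached policy document."}
    ],
    "temperature": 0.2,
}

# The request never leaves the company network, so prompts containing
# sensitive data stay inside the organisation's compliance boundary.
response = requests.post(ENDPOINT, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

The appeal of this pattern is that the chat interface looks identical to a public GenAI service, but the data path is entirely under the company’s control.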
Others are looking at AI governance platforms to sit between employees and GenAI tools, offering monitoring, usage tracking and guardrails. Tools like DataRobot, Writer, or even Salesforce’s Einstein Copilot are positioning themselves not just as generative AI providers, but as risk-managed AI partners.
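As a rough sketch of the guardrail layer such platforms provide, the snippet below screens outbound prompts for obviously sensitive patterns before they can reach any external GenAI tool. The patterns, names and logic are purely illustrative assumptions, not taken from any particular product; real governance platforms use far more sophisticated detection.

```python
import re
from dataclasses import dataclass

# Illustrative patterns only; a real governance platform would combine
# classifiers, DLP integrations and policy rules rather than two regexes.
BLOCKED_PATTERNS = {
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

@dataclass
class GatewayDecision:
    allowed: bool
    reason: str = "ok"

def screen_prompt(prompt: str) -> GatewayDecision:
    """Decide whether a prompt may be forwarded to an external GenAI tool."""
    for label, pattern in BLOCKED_PATTERNS.items():
        if pattern.search(prompt):
            # Block and record why: this data must stay internal.
            return GatewayDecision(False, f"prompt contains {label}")
    return GatewayDecision(True)

print(screen_prompt("Draft a polite reply to jane.doe@example.com"))
# -> GatewayDecision(allowed=False, reason='prompt contains email address')
```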
At the same time, the backlash against Grok shows how quickly sentiment can shift. Musk’s original pitch for Grok as an edgy, tell-it-like-it-is alternative to Silicon Valley’s AI offerings found some traction among individual users. But in a business setting, particularly in Europe, compliance, reliability, and reputational alignment seem to matter more than iconoclasm.
Regulation Reshaping the Playing Field
The surge in bans against Grok also reflects a change in how generative AI is being governed and evaluated at the institutional level. Across Europe, regulators are moving to tighten rules on artificial intelligence, with the EU’s landmark AI Act expected to set a global precedent. This new framework categorises AI systems by risk level and could impose strict obligations on tools used in high-stakes environments like recruitment, finance, and public services.
That means tools like Grok, which are perceived to lack sufficient transparency or safety mechanisms, could face even greater scrutiny in the future. European firms are clearly starting to anticipate these regulatory pressures and adjust their AI strategies accordingly.
Grok’s Market Position May Be Out of Step
At the same time, the pattern of bans has implications for the competitive dynamics of the GenAI sector. For example, while OpenAI, Google and Microsoft have invested heavily in enterprise-ready versions of their chatbots, with controls for data retention, content filtering and auditability, Grok appears less geared towards business use. Its integration into a consumer social media platform and emphasis on uncensored responses make it an outlier in an increasingly risk-aware market.
Security and Deployment Strategies Are Evolving
There’s also a growing role for cloud providers and IT security teams in shaping how AI tools are deployed across organisations. Many companies are now turning to secure gateways, policy enforcement tools, or in some cases, completely air-gapped deployments of open-source models to ensure data stays within strict compliance boundaries. These developments suggest the AI market is maturing quickly, with an emphasis not only on innovation, but on operational control.
What Does This Mean For Your Business?
For UK businesses, the growing rejection of Grok highlights the importance of due diligence when selecting generative AI tools. With data privacy laws such as the UK GDPR still closely aligned with EU regulations, similar concerns around transparency, content reliability and compliance are just as relevant domestically. Organisations operating across borders, particularly those in regulated sectors like finance, healthcare or legal services, are likely to favour tools that not only perform well but also come with clear safeguards, documentation and support for enterprise-grade governance.
More broadly, the story of Grok is a reminder that in today’s AI landscape, branding and ambition are no longer enough. The success of generative AI tools increasingly depends on trust: trust in how data is handled, how outputs are generated, and how tools behave under pressure. For developers and vendors, that means security, transparency and adaptability must be built into the product from day one. For businesses, it means asking tougher questions before deploying any new tool into day-to-day operations.
While Elon Musk’s approach may continue to resonate with individual users who value unfiltered output or alignment with particular ideologies, enterprise buyers are clearly playing by a different rulebook. They’re looking for stability, accountability and risk management, not provocation. As regulation tightens, that divide is likely to widen.