Microsoft Sounds Alarm on Shadow AI Use in UK Workplaces
21 October 2025

New research from Microsoft reveals that a majority of UK employees are turning to unauthorised AI tools during the workweek, raising urgent questions about data security, privacy, and the need for enterprise oversight of artificial intelligence in the workplace.
The Findings at a Glance
Microsoft’s UK-based study, conducted in October 2025 by Censuswide, found that 71% of surveyed employees have used unapproved AI platforms for work-related tasks, with 51% doing so on a weekly basis. The survey covered over 2,000 workers across industries including healthcare, finance, education, and retail—spanning both private and public sector employers.
How Shadow AI Is Being Used
The research shows employees are using AI tools to write emails (49%), generate presentations or reports (40%), and even perform finance-related tasks (22%). For many, the tools they’re using at work mirror those they already rely on in their personal lives. A significant 28% said their workplace simply doesn’t offer an authorised alternative.
Low Awareness, High Risk
Despite growing use, understanding of the risks appears to be lacking. Only about one-third of respondents expressed concern about entering company or customer data into these tools. Slightly fewer were worried about the impact on their employer’s overall cybersecurity.
Microsoft UK & Ireland CEO Darren Hardman cautioned, “AI is unlocking huge productivity gains, but those gains come with responsibilities. Businesses need to ensure the tools their teams use are fit for purpose and built for business, not casual consumer use.”
The Productivity Driver
Generative AI tools are saving UK employees an average of 7.75 hours per week, according to analysis by Dr Chris Brauer of Goldsmiths, University of London. That translates to roughly £208 billion worth of time annually—explaining why many are eager to adopt these tools, even if it means working around corporate restrictions.
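The study's headline figures imply a back-of-envelope value per saved hour. The sketch below checks that arithmetic; the 7.75 hours and £208 billion come from the article, while the workforce size is an assumed round figure of roughly 34 million, not from the study.

```python
# Back-of-envelope check of the article's figures (illustrative only).
HOURS_SAVED_PER_WEEK = 7.75        # average per employee (from the study)
WEEKS_PER_YEAR = 52
WORKFORCE = 34_000_000             # assumed UK workforce size, not from the study
TOTAL_VALUE_GBP = 208_000_000_000  # £208 billion (from the study)

hours_per_year = HOURS_SAVED_PER_WEEK * WEEKS_PER_YEAR * WORKFORCE
implied_hourly_value = TOTAL_VALUE_GBP / hours_per_year
print(f"Implied value per saved hour: £{implied_hourly_value:.2f}")
```

Under that assumed headcount, the implied value works out to roughly £15 per saved hour, which is in the vicinity of UK median hourly pay, so the figures are at least internally plausible.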
What Is Shadow AI?
Analogous to the older concept of "shadow IT", shadow AI refers to employees using AI platforms not sanctioned by their employer. These tools, such as consumer-grade chatbots and writing assistants, often store and learn from user input, posing real concerns around data privacy, intellectual property, and regulatory compliance.
The Business Risks
When employees feed company information into these tools, organisations may lose control over that data. This includes financial documents, client details, or sensitive internal files. Once shared with an AI model, the data could be stored indefinitely, outside the company’s governance framework.
Additionally, using third-party platforms widens the attack surface. More external systems handling internal data increases the chance of phishing, prompt injection, and other cyber threats. Since these tools typically don’t leave an audit trail, compliance becomes difficult, if not impossible, to verify.
Why Employees Still Do It
Despite these risks, employees say the efficiency and output gains outweigh the potential downsides—particularly when company-provided tools fall short. Microsoft’s study shows employee sentiment around AI has improved significantly in 2025, with 57% now feeling confident or optimistic about its use.
A Closer Look at Workplace Behaviour
Earlier studies, like one from Ivanti, show nearly half of office workers use AI tools their employer hasn’t approved—and nearly a third intentionally keep it quiet. The appeal? Getting more done in less time, or gaining a perceived competitive edge, even if it means breaking policy.
Turning Shadow AI Into Strategy
While the instinct may be to ban unauthorised tools, some experts argue this trend is a signpost. If staff are using AI to streamline their work, it may be time to formalise those tools or offer better alternatives. Gartner analysts suggest that monitoring which platforms employees use could help organisations identify useful innovations worth adopting officially.
Next Steps for Employers
Experts recommend three key actions:
- Audit existing use: Use surveys or monitoring tools to understand which AI apps are already being used across teams.
- Educate employees: Develop clear, jargon-free policies that explain acceptable AI use and outline the risks of unauthorised platforms.
- Deploy secure tools: Invest in enterprise-grade AI assistants, like those integrated into Microsoft 365, which include encryption, logging, and access controls.
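The first step, auditing existing use, can start from data most organisations already have, such as web proxy or DNS logs. The following is a minimal sketch of that idea, assuming a simple whitespace-delimited log format (`user timestamp domain path`); the log layout, the `audit_ai_usage` helper, and the domain list are all illustrative assumptions, not part of Microsoft's recommendations.

```python
# Minimal sketch: count requests to well-known consumer AI services
# in a proxy log. Log format and domain list are illustrative assumptions.
from collections import Counter

AI_DOMAINS = {"chatgpt.com", "chat.openai.com", "gemini.google.com",
              "claude.ai", "perplexity.ai"}  # illustrative, not exhaustive

def audit_ai_usage(log_lines):
    """Count hits per AI domain in lines like 'user timestamp domain path'."""
    hits = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 3 and parts[2] in AI_DOMAINS:
            hits[parts[2]] += 1
    return hits

sample = [
    "alice 2025-10-21T09:02 chatgpt.com /chat",
    "bob   2025-10-21T09:05 intranet.local /home",
    "carol 2025-10-21T09:07 claude.ai /new",
]
print(audit_ai_usage(sample))
```

Aggregated counts like these can show which unapproved tools are most popular across teams, which in turn helps prioritise which capabilities an approved, enterprise-grade alternative needs to cover.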
Industries Most Affected
Shadow AI is most prevalent in fast-moving sectors such as telecoms, sales, finance, media, and engineering—where workloads are high and automation offers a clear advantage. As tools become more capable, usage is expected to spread even further across the UK workforce.
Changing the Culture Around AI
Microsoft’s findings suggest a shift in how employees view AI. Nearly 40% now see it as vital to their company’s success—more than double the number from earlier in the year. Globally, 82% of executives believe 2025 marks a defining moment for AI adoption, with many already embedding AI into routine operations.
What This Means for Your Business
The rise of shadow AI reflects a new frontier in workplace technology. While it brings clear gains in speed and creativity, it also reveals critical gaps in policy and security. If organisations fail to offer approved AI options, employees will continue to look elsewhere—often without realising the risks.
Now is the time for leadership to act. By embracing the productivity potential of AI while putting guardrails in place, businesses can support innovation without compromising control. The key lies in trust, transparency, and giving employees the tools they need to succeed—securely.
As AI becomes an embedded part of everyday work, those organisations that strike this balance will be best positioned to lead in the digital workplace of tomorrow.