
Most UK employees are now using unapproved AI tools at work every week, according to new Microsoft research, raising fresh questions about security, privacy, and corporate control over artificial intelligence.
What Microsoft Found
Microsoft’s latest UK study reports that 71 per cent of employees have used unapproved consumer AI tools at work, and that 51 per cent continue to do so weekly. The research, conducted by Censuswide in October 2025, highlights a growing trend known as “Shadow AI”, i.e., the use of artificial intelligence tools not sanctioned by employers. The survey gathered the views of 2,003 UK employees aged 18 and over, drawn from financial services, retail, education, healthcare, and other sectors, with at least 500 respondents each from large businesses and public sector organisations.
Typical Uses of Shadow AI
According to Microsoft’s study, typical uses of Shadow AI include drafting or replying to workplace communications (49 per cent), preparing reports and presentations (40 per cent), and even carrying out finance-related tasks (22 per cent). Many employees say they turn to these tools because they are familiar or easy to access, with 41 per cent admitting they use the same tools they rely on in their personal lives. Another 28 per cent said their employer simply doesn’t provide an approved alternative.
Limited Awareness of the Risks
According to the study, awareness of the risks remains limited, and that is a key part of the problem. For example, only 32 per cent of respondents said they were concerned about the privacy of customer or company data they enter into AI tools, while 29 per cent expressed concern about the potential impact on their organisation’s IT security.
As Darren Hardman, CEO of Microsoft UK & Ireland, says: “UK workers are embracing AI like never before, unlocking new levels of productivity and creativity. But enthusiasm alone isn’t enough.” He adds: “Businesses must ensure the AI tools in use are built for the workplace, not just the living room.”
Why It Matters So Much Now
The research reflects a wider cultural change in how employees are using artificial intelligence (AI) to handle everyday tasks. For example, Microsoft estimates that generative AI tools and assistants are now saving workers an average of 7.75 hours per week. Extrapolated across the UK economy, that equates to around 12.1 billion hours a year, or approximately £208 billion worth of time saved (according to analysis by Dr Chris Brauer of Goldsmiths, University of London).
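As a rough check on that arithmetic (the underlying methodology belongs to Microsoft and Dr Brauer; the workforce and hourly-value figures here are illustrative assumptions rather than details published in the study): 7.75 hours a week over roughly 52 working weeks is about 403 hours per worker per year; applied to a workforce of around 30 million people, that comes to roughly 12.1 billion hours, and valuing each hour at a little over £17 lands close to the £208 billion figure.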
That potential productivity boost most likely explains much of the enthusiasm around generative AI. However, it also highlights why workers are bypassing official channels. For example, when the tools provided by employers feel restrictive, employees often reach for whatever gets the job done fastest, even if that means using consumer platforms that fall outside company governance and data protection frameworks.
What Is ‘Shadow AI’?
The term “Shadow AI” is borrowed from “shadow IT”, a long-standing issue where employees use hardware or software without authorisation. In this case, it refers to staff using consumer AI tools such as public chatbots or online assistants to support work tasks. The problem is that these platforms often store or learn from user input, which may include company or customer data, creating security and compliance risks.
Organisations that allow this kind of behaviour to go unchecked, therefore, risk breaching UK data protection laws, regulatory obligations, or intellectual property rights (not to mention giving away company secrets). The British Computer Society (BCS) and other professional bodies have previously warned that shadow AI could expose firms to data leaks, non-compliance, and reputational harm if sensitive material is entered into consumer models.
The Real Risks for Businesses
The main security concern is data leakage, i.e., where employees enter sensitive company information into AI tools that may store or process data outside of approved systems. This could include confidential documents, client details, or financial data. Once that information leaves the organisation’s control, it may be impossible to delete or track, potentially breaching data protection law or confidentiality agreements.
Another issue that’s often overlooked by businesses is attack surface expansion. For example, the more third-party AI tools are used, the greater the number of external systems handling company information. This increases the likelihood of phishing, prompt injection attacks, and other forms of misuse. Also, there is the problem of auditability. When AI tools operate outside an organisation’s infrastructure, they leave no record of what data was used or how it was processed, making compliance monitoring almost impossible.
Earlier this year, a report by Ivanti found that nearly half of office workers were using AI tools not provided by their employer, and almost one-third admitted keeping that use secret. Some employees even said they used unapproved AI to gain an “edge” at work, while others feared their company might ban it altogether. The study echoed Microsoft’s findings that even sensitive data, such as customer financial information, is being fed into public models.
Why Employees Still Do It
Despite the risks, many employees say they rely on consumer AI because it helps them manage workloads and meet rising productivity expectations. Microsoft’s study also found that attitudes towards AI have become far more positive over the course of 2025. For example, 57 per cent of employees now describe themselves as optimistic, excited or confident about AI (up from 34 per cent in January). Also of note, the proportion of workers saying they “don’t know where to start with AI” has dropped from 44 per cent to 36 per cent, while more employees say they understand how their company uses the technology.
For many, the motivation is practical rather than rebellious. For example, AI chatbots help draft content, summarise notes, create reports and presentations, or even analyse spreadsheets. When deadlines are tight and workloads are high, these capabilities can make a tangible difference, especially if the employer’s own tools are limited or slow to adopt new technology.
A Balanced View
While much of the discussion has focused on the dangers of shadow AI, some experts suggest it can also be a useful indicator of where innovation is happening inside a business. For example, at the Gartner Security and Risk Management Summit in London, analysts Christine Lee and Leigh McMullen argued that rather than trying to eliminate shadow AI entirely, companies could benefit by identifying which tools employees are already finding valuable. With the right governance and security controls, those tools could be formally adopted or integrated into approved workflows.
In this sense, shadow AI can act as an early warning system for unmet needs. If, for example, marketing teams are using public generative AI tools to create campaign content, that may reveal a gap in internal creative resources or digital support. Security teams could then review those external tools, assess the risks, and replace them with enterprise-grade equivalents that meet the same needs safely.
Gartner’s approach reflects a growing recognition that employees are often ahead of policy when it comes to technology adoption. Turning shadow AI into an opportunity for collaboration, rather than conflict, could help businesses strike a balance between innovation and security.
What Organisations Can Do Next
Analysts and security experts are urging employers to start by improving visibility. That means identifying which AI tools are already being used across the organisation, and for what purposes. With this in mind, many companies are now running staff surveys or using software discovery tools to build a clearer picture of how generative AI is being adopted.
Once the extent of use is known, companies can then focus on education. Clear, accessible policies are essential, i.e., explaining in plain English what kinds of data can be entered into AI tools, what cannot, and why. Training should emphasise the risks of using consumer AI platforms, particularly when handling client, financial, or personal information.
Enterprise-Grade Is Safer
The final step is to offer secure alternatives. Enterprise-grade AI assistants, such as those integrated into Microsoft 365 or other workplace systems, are designed to protect sensitive data and maintain compliance. These tools include encryption, access controls, audit logs, and data-loss prevention measures that consumer apps typically lack. As Microsoft’s Darren Hardman put it: “Only enterprise-grade AI delivers the functionality employees want, wrapped in the privacy and security every organisation demands.”
Where Shadow AI Is Most Common
Microsoft’s data shows that shadow AI use is most prevalent among employees in IT and telecoms, sales, media and marketing, architecture and engineering, and finance and insurance. This is likely to be because these are industries where high workloads, creative output, or data handling make AI assistants especially appealing. As confidence grows and tools become more sophisticated, use across sectors is expected to increase further.
Shaping Culture
The Microsoft research suggests this trend is already reshaping workplace culture. For example, the proportion of employees who see AI as an essential part of their organisation’s success strategy has more than doubled, from 18 per cent in January to 39 per cent in October. Globally, Microsoft’s Work Trend Index reports that 82 per cent of business leaders view 2025 as a turning point for AI strategy, with nearly half already using AI agents to automate workflows.
What Does This Mean For Your Business?
The rise of shadow AI appears to present UK businesses with a clear crossroads between risk and reward. Employees are demonstrating that AI can deliver genuine productivity gains, but their widespread use of unapproved tools exposes gaps in governance and digital readiness. For many organisations, this is not simply a security issue but a sign that workplace innovation is moving faster than policy.
In practical terms, the Microsoft findings suggest that companies which fail to provide secure, accessible AI tools will continue to see staff seek out consumer alternatives. That makes the issue as much about culture and leadership as it is about technology. Building trust through transparency, and ensuring employees understand how and why AI is being managed, will be critical to balancing productivity with protection.
For IT leaders, the challenge now lies in developing frameworks that enable safe experimentation without undermining compliance. That means investing in enterprise-grade AI infrastructure, tightening oversight of data use, and introducing training that connects security policy with real-world tasks. Businesses that achieve this balance will be able to harness AI’s benefits while maintaining control over how it is deployed.
The implications extend beyond individual firms. For example, regulators, industry bodies, and even customers have a stake in how securely AI is used in the workplace. As more sensitive data flows through AI systems, the pressure will grow for clear accountability and transparent governance. The Microsoft findings make it clear that AI adoption in the UK is no longer confined to innovation teams or pilot projects; it is now embedded in everyday work. How organisations respond will determine whether this new era of AI-driven productivity strengthens trust and competitiveness, or exposes deeper vulnerabilities in the digital workplace.