Company Check: Claude In Copilot & Google Data Commons

Microsoft has confirmed it is adding Anthropic’s Claude models to its Copilot AI assistant, giving enterprise users a new option alongside OpenAI for handling complex tasks in Microsoft 365.

Microsoft Expands Model Choice In Copilot

Microsoft has begun rolling out support for Claude Sonnet 4 and Claude Opus 4.1, two of Anthropic’s large language models, within Copilot features in Word, Excel, Outlook and other Microsoft 365 apps. The update applies to both the Copilot “Researcher” agent, used for generating reports and conducting deep analysis, and Copilot Studio, the tool businesses use to build their own AI assistants.

The move significantly expands Microsoft’s model options. Until now, Copilot has been powered primarily by OpenAI’s models, such as GPT‑4 and GPT‑4 Turbo, which run on Microsoft’s Azure cloud. With the addition of Claude, businesses can choose which AI model powers specific tasks, with the aim of offering more flexibility and better performance in different enterprise contexts.

Once an administrator has enabled the option, Researcher users can toggle between OpenAI and Anthropic models. Claude Opus 4.1 is geared towards deep reasoning, coding and multi‑step problem solving, while Claude Sonnet 4 is optimised for content generation, large‑scale data tasks and routine enterprise queries.

Why Microsoft Is Doing This Now

Microsoft has said the goal is to give customers access to “the best AI innovation from across the industry” and to tailor Copilot more closely to different work needs. However, the timing also reflects a broader shift in Microsoft’s AI strategy.

While Microsoft remains OpenAI’s largest financial backer and primary cloud host, the company is actively reducing its dependence on a single partner. It is building an in‑house model, MAI‑1, and has recently confirmed plans to integrate AI models from other firms such as Meta, xAI and DeepSeek. Anthropic’s Claude is the first of these to be made available within Microsoft 365 Copilot.

This change also follows a wave of high‑value partnerships between OpenAI and other tech companies. For example, in recent weeks, OpenAI has secured billions in new infrastructure support from Nvidia, Oracle and Broadcom, suggesting a broader distribution of influence across the AI landscape. Microsoft’s latest move helps hedge against any future change in the balance of that relationship.

Microsoft And Its Customers

The introduction of Claude into Copilot is being made available first to commercial users enrolled in Microsoft’s Frontier programme, the early‑access rollout for experimental Copilot features. Admins must opt in and approve access through the Microsoft 365 admin centre before staff can begin using Anthropic’s models.

Importantly, the Claude models will not run on Microsoft infrastructure. Anthropic’s AI systems are currently hosted on Amazon Web Services (AWS), meaning that any data processed by Claude will be handled outside Microsoft’s own cloud. Microsoft has made clear that this data flow is subject to Anthropic’s terms and conditions.

This external hosting has raised concerns in some quarters, particularly for organisations operating under strict compliance or data residency requirements. Microsoft has responded by emphasising the opt‑in nature of the integration and the ability for administrators to fully control which models are available to users.

For Microsoft, the move appears to strengthen its claim to be a platform‑agnostic AI provider. By integrating Anthropic alongside OpenAI and offering seamless switching between models in both Researcher and Copilot Studio, Microsoft positions itself as a central point of access for enterprise AI, regardless of where the models originate.

Business Relevance And Industry Impact

The change is likely to be welcomed by business users seeking more powerful or specialised models for specific workflows. It may also create new pressure on OpenAI to continue improving performance and pricing for enterprise use.

From a competitive standpoint, Microsoft’s ability to offer Claude inside its productivity suite puts further distance between Copilot and rival offerings such as Google Workspace’s AI features and Apple’s AI integrations. It also allows Microsoft to keep pace with fast‑moving developments in multi‑model orchestration, i.e. the ability to route different tasks through different models depending on context or output goals.
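For readers less familiar with the term, a minimal Python sketch of the idea follows: an orchestration layer looks up which model is configured for a given task type and falls back to a default. The task categories and model identifiers are illustrative placeholders, not Microsoft’s actual Copilot routing logic.

```python
# Illustrative sketch of multi-model orchestration: route each task type to the
# model configured for it. The task categories and model identifiers are
# hypothetical examples, not Microsoft's actual Copilot configuration.
ROUTES = {
    "deep_reasoning": "claude-opus-4-1",      # multi-step analysis, coding
    "content_generation": "claude-sonnet-4",  # drafting, summarisation, bulk data work
}
DEFAULT_MODEL = "gpt-4-turbo"                 # routine queries

def select_model(task_type: str) -> str:
    """Return the model identifier an orchestrator would call for this task."""
    return ROUTES.get(task_type, DEFAULT_MODEL)

if __name__ == "__main__":
    for task in ("deep_reasoning", "content_generation", "email_triage"):
        print(f"{task} -> {select_model(task)}")
```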

For Microsoft’s competitors in the cloud and productivity space, the integration also highlights a growing interoperability challenge. Anthropic is mainly backed by Amazon, and its models run on both AWS and Google Cloud. Microsoft’s decision to incorporate those models into 365 tools represents a break from traditional cloud loyalty and suggests that, in the era of generative AI, usability and capability may matter more than where the models are hosted.

The Google Data Commons Update

While Microsoft is focusing on model integration, Google has taken a different step by making structured real‑world data easier for AI developers to use. This month, it launched the Data Commons Model Context Protocol (MCP) Server, a new tool that allows developers and AI agents to access public datasets using plain natural language.

The MCP Server acts as a bridge between AI systems and the vast Data Commons database, which includes datasets from governments, international organisations, and local authorities. This means that developers can now build agents that access census data, climate statistics or economic indicators simply by asking for them in natural language, without needing to write complex code or API queries.
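As a rough illustration of that pattern (and not Google’s published sample code), the sketch below uses the open‑source MCP Python SDK to connect to an MCP server, list the data tools it exposes and pass one of them a plain‑language question. The server launch command, tool name and arguments are assumptions and should be checked against the Data Commons MCP documentation.

```python
# Minimal sketch: querying a public-data MCP server from Python.
# Uses the open-source MCP Python SDK ("pip install mcp"); the server
# command, tool name and arguments below are illustrative placeholders,
# not the confirmed Data Commons interface.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical command that launches the Data Commons MCP server locally.
server = StdioServerParameters(command="datacommons-mcp", args=["serve", "stdio"])

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover the tools the server exposes (these map to dataset queries).
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Ask a plain-language question via an assumed query tool.
            result = await session.call_tool(
                "ask",  # placeholder tool name
                arguments={"query": "What was the population of London in 2021?"},
            )
            print(result.content)

asyncio.run(main())
```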

The launch aims to address two long‑standing challenges in AI: hallucination and poor data quality. Many generative models, for example, are trained on unverified web data, which makes them prone to guessing when they lack information. Google’s approach should, therefore, help ground AI responses in verifiable, structured public datasets, improving both reliability and relevance.

ONE Data Agent

One of the first use cases is the ONE Data Agent, created in partnership with the ONE Campaign to support development goals in Africa. The agent uses the MCP Server to surface health and economic data for use in policy and advocacy work. However, Google has confirmed that the server is open to all developers, and has released tools and sample code to help others build similar agents using any large language model.

For Google, this expands its role in the AI ecosystem beyond model development and into data infrastructure. For developers, it lowers the technical barrier to creating trustworthy data‑driven AI agents and opens up new opportunities in sectors such as education, healthcare, environmental analysis and finance.

What Does This Mean For Your Business?

The addition of Claude to Microsoft 365 Copilot marks a clear move towards greater AI optionality, but it also introduces new complexities for both Microsoft and its enterprise customers. While the ability to switch between models gives businesses more control and the potential for improved task performance, it also means IT teams must assess where and how their data is being processed, especially when it leaves the Microsoft cloud. For some UK businesses operating in regulated sectors, this could raise concerns around data governance, third-party hosting, and contractual clarity. Admin-level opt-in gives organisations some control, but the responsibility for managing risk now falls more squarely on IT decision-makers.

For Microsoft, this is both a technical and strategic milestone. The company is reinforcing its Copilot brand as a neutral gateway to the best models available, regardless of origin. It sends a signal that AI delivery will be less about vendor exclusivity and more about task-specific effectiveness. For competitors, the integration of Anthropic models into Microsoft 365 may accelerate demand for open, composable AI stacks that can handle model switching, multi-agent coordination, and fine-grained prompt routing, especially in workplace applications.

Google’s decision to open up real-world data through the MCP Server supports a different but equally important part of the AI ecosystem. For example, many UK developers struggle to ground their AI agents in reliable facts without investing heavily in custom pipelines. The MCP Server simplifies this process, making structured public data directly accessible in plain language. If adopted widely, it could help reduce hallucinations and increase the usefulness of AI across sectors such as policy, healthcare, sustainability, and finance.

Together, these announcements suggest that the next phase of AI will be shaped not only by which models are most powerful, but also by who can offer the most useful data, the clearest integration paths, and the most practical tools for real-world business use. For UK organisations already exploring generative AI, both moves offer new possibilities, but also demand closer scrutiny of how choices around models and data infrastructure will affect operational control, user trust, and long-term value.