
It’s been reported that international law firm Hill Dickinson has introduced new restrictions on the use of artificial intelligence (AI) tools following a sharp increase in staff engagement with the technology.
What Happened?
The development was first reported by the BBC, which reportedly obtained an internal email from Hill Dickinson’s senior management. The email revealed that the firm had identified a “significant increase in usage” of AI tools by employees, prompting a review of its policies and subsequent restrictions on access. The move appears to reflect growing industry concerns over data security, compliance, and the ethical implications of AI in legal work.
The Email
According to the data reportedly cited in the email, in just one week between January and February 2025, Hill Dickinson staff recorded over 32,000 interactions with the AI chatbot ChatGPT, 3,000 with the Chinese AI service DeepSeek, and nearly 50,000 with the writing assistance tool Grammarly. While these figures illustrate widespread engagement, they don’t reveal how many individuals were actually using the tools or how often they returned, since a single session could generate multiple interactions.
Limited General Access To The Tools
In response, the firm has reportedly limited general access to such tools, introducing a request-based approval system to monitor and regulate AI usage more closely. The internal communication reportedly highlighted that much of the AI use was not in line with the firm’s AI policy, necessitating stricter oversight.
Why Impose These Restrictions?
The firm’s AI policy, implemented in September 2024, prohibits employees from uploading client information to AI platforms and requires them to verify the accuracy of AI-generated content. The recent spike in AI engagement may therefore have raised concerns that these guidelines were not being strictly followed, potentially exposing the firm to regulatory and security risks.
A spokesperson for Hill Dickinson clarified the firm’s stance, stating: “Like many law firms, we are aiming to positively embrace the use of AI tools to enhance our capabilities while always ensuring safe and proper use by our people and for our clients.”
Not An Outright Ban
The firm maintains that it is not banning AI outright but ensuring its application is controlled and compliant. It has already received and approved some individual AI usage requests under the new system.
Broader Industry Implications
The legal profession appears to be facing a growing dilemma over AI adoption. While AI has the potential to streamline tasks such as legal research, contract analysis, and document drafting, it also presents risks related to data security, accuracy, and ethical considerations.
Enter The ICO
Now the UK’s Information Commissioner’s Office (ICO) has weighed in on the debate, warning against excessive restrictions. A spokesperson for the ICO stated: “With AI offering people countless ways to work more efficiently and effectively, the answer cannot be for organisations to outlaw the use of AI and drive staff to use it under the radar. Instead, companies need to offer their staff AI tools that meet their organisational policies and data protection obligations.”
AI Can Help, But Needs Oversight
The Law Society of England and Wales has emphasised its view that AI has potential benefits, with its chief executive, Ian Jeffery, saying: “AI could improve the way we do things a great deal.” However, he also stressed that AI tools require human oversight and that legal professionals must adapt to their responsible use.
Concerns About A Lack of Expertise
Meanwhile, the Solicitors Regulation Authority (SRA) has expressed concerns about an apparent general lack of digital expertise in the legal sector. For example, a spokesperson was recently quoted as warning that “despite this increased interest in new technology, there remains a lack of digital skills across all sectors in the UK. This could present a risk for firms and consumers if legal practitioners do not fully understand the new technology that is implemented.”
This highlights a broader challenge for the legal industry: embracing AI innovation while ensuring legal professionals are adequately trained and aware of the risks.
Mixed Reactions
Reports of Hill Dickinson’s approach have drawn mixed reactions. Some industry figures argue that overly strict AI regulations could stifle innovation and slow the adoption of technologies that could make legal work more efficient.
Others point out that firms must proceed with caution, particularly regarding data privacy and regulatory compliance. High-profile cases of data breaches linked to AI use have reinforced concerns about inadvertently exposing confidential client information to external platforms.
Not An Isolated Case
The reported move by Hill Dickinson is not an isolated case. Other major corporations, including Samsung, Accenture, and Amazon, have implemented restrictions on AI tools over concerns about data security and the potential for AI-generated content to be unreliable or misleading.
The Legal Sector Needs To Find A Balance
AI’s increasing presence in the legal world is undeniable, and firms now face the task of finding the right balance between harnessing its benefits and mitigating its risks. Hill Dickinson’s decision reflects a broader industry trend towards cautious AI integration, with firms seeking to ensure that AI use remains secure, ethical, and compliant with professional standards.
What Does This Mean For Your Business?
The reported move by Hill Dickinson to restrict general AI access highlights a growing tension within the legal sector between technological advancement and regulatory caution. AI undoubtedly holds transformative potential, offering efficiencies in legal research, contract analysis, and document drafting. However, its use comes with inherent risks, particularly in an industry where confidentiality, accuracy, and compliance are paramount.
The firm’s reported decision to implement a request-based approval system reflects an industry-wide concern about data security, regulatory obligations, and ethical considerations. While this is not an outright ban, it does indicate that unregulated AI usage in professional settings remains a real concern. The spike in AI interactions appears to have signalled that existing policies were not being strictly adhered to, prompting a need for greater oversight. Such caution is understandable, given the risks associated with AI-generated inaccuracies or inadvertent data leaks.
At the same time, broader industry voices, including the ICO and the Law Society, have warned against overly restrictive measures that could stifle innovation. Their position suggests that rather than banning AI, firms should focus on implementing clear, structured policies that allow for responsible usage while maintaining compliance with legal and data protection standards. The Solicitors Regulation Authority’s concerns about a lack of digital expertise in the sector further highlight that law firms must not only regulate AI usage but also ensure that legal professionals are adequately trained in its application.
Hill Dickinson’s approach is not just a legal sector issue; it has far-reaching implications for businesses of all sizes across the UK. Many large corporations, such as Samsung and Amazon, have already imposed AI restrictions, reflecting wider concerns about security, compliance, and the reliability of AI-generated content. However, for smaller businesses that lack dedicated legal or IT departments, these challenges could be even more pressing. Without clear guidance or internal expertise, SMEs risk either underutilising AI and missing out on its benefits, or adopting it without proper safeguards and exposing themselves to legal and reputational risks.
This highlights the need for a balanced, industry-wide approach to AI governance, and government agencies and industry bodies may need to step in to provide clearer guidance, particularly as more organisations take similar steps to control AI’s integration into their workflows.