In a landmark ruling, the UK’s First-tier Tribunal has ordered HMRC to confirm whether artificial intelligence (AI) has been used to evaluate and reject research and development (R&D) tax relief claims. The case, brought forward by R&D tax specialist Thomas Elsbury, underscores growing industry concern that automated tools may be influencing decisions without sufficient oversight.
Elsbury argued that tell-tale signs in HMRC correspondence, including American spellings, unusual punctuation, and letters that appear formulaic or disconnected from facts, suggested AI involvement. His concerns reflect a broader industry fear: that legitimate R&D claims are being denied due to errors, misjudgment, or unauthorised use of generative AI.
The Tribunal’s decision overturns earlier refusals by both HMRC and the Information Commissioner’s Office (ICO), concluding that the public’s right to transparency outweighs HMRC’s concerns about fraud. HMRC now faces a deadline of 18 September 2025 to disclose details of its AI usage in tax relief assessments.
Growing Concerns Over AI in Tax Administration
Elsbury, who has built a business around guiding clients through R&D tax claims, noticed patterns in HMRC’s correspondence that raised red flags. Many letters contained Americanised spellings and overused the em-dash, stylistic markers unusual in official UK government communication.
More concerning was that some rejections appeared to be factually inconsistent, framing arguments in a way that seemed “convincing but detached from reality.” This fuelled speculation that automated tools, rather than human caseworkers, were generating these decisions.
Industry Pushback Against Overzealous Scrutiny
Tax advisors Adam Craggs and Alexis Armitage of RPC highlighted that R&D relief claims have come under increasing HMRC scrutiny. They warned that compliance efforts risk becoming overzealous, potentially penalising legitimate businesses.
They further stressed that taxpayers already face distrust and confusion. If AI tools are being deployed without adequate transparency, confidence in HMRC’s processes could collapse.
The Legal Battle: FOI Requests and Tribunal Ruling
In December 2023, Elsbury submitted a request under the Freedom of Information Act (FOIA) seeking confirmation of HMRC’s use of AI. HMRC refused, citing concerns that disclosure could assist fraudulent claimants, and when Elsbury appealed, the ICO upheld HMRC’s refusal.
Undeterred, Elsbury escalated the case to the First-tier Tribunal’s General Regulatory Chamber, which ruled in his favour. The Tribunal criticised HMRC’s refusal to provide clarity, warning that it exacerbates taxpayer mistrust and may deter genuine R&D claims.
The Tribunal’s ruling explicitly stated that public interest in transparency outweighs potential risks. HMRC was given until 18 September 2025 to comply.
Wider Implications for AI in Tax Decision-Making
This ruling has wider implications for AI in tax administration. HMRC’s Transformation Roadmap, published just last month, confirmed that the agency plans to deploy AI-powered tools more extensively in compliance and enforcement.
Tax dispute advisors argue that this case sets a precedent. If HMRC decisions appear formulaic, automated, or lacking proper reasoning, taxpayers may now challenge them on grounds of insufficient human oversight or data protection violations.
HMRC and ICO Responses
The ICO confirmed it will not appeal the Tribunal’s decision, while HMRC stated it is “reviewing the ruling and considering its position.” The agency has yet to clarify whether it has indeed used AI in assessing R&D relief claims.
For Elsbury and many in the industry, the outcome is clear: without transparency, trust in HMRC’s decision-making will erode further.
Final Summary
The Tribunal’s decision in Elsbury v Information Commissioner [2025] UKFTT 915 (GRC) is a watershed moment for transparency in UK tax administration. By compelling HMRC to disclose its use of AI in rejecting R&D claims, the ruling highlights the delicate balance between compliance, technology, and public trust.
If HMRC confirms AI involvement, it could ignite broader debates about automation in tax enforcement and the adequacy of human oversight in AI-driven systems. For taxpayers, the ruling may open new avenues to challenge decisions that seem inconsistent or generated without clear reasoning.
More broadly, the case underscores a pressing need: as AI becomes embedded in government decision-making, agencies must ensure clarity, accountability, and fairness remain at the forefront.