Barrister Chowdhury Rahman Faces Disciplinary Probe After Judge Rules He Used ChatGPT for Fictitious and Irrelevant Legal Submissions

The legal profession in the United Kingdom has been rocked by revelations that an immigration barrister, Chowdhury Rahman, allegedly used generative artificial intelligence tools, including ChatGPT, to prepare legal arguments that contained fictitious case law and misleading references. The incident, which came to light during an asylum appeal hearing before the Upper Tribunal, has drawn sharp criticism from the presiding judge and prompted discussions about the risks of unverified AI use in legal proceedings.

Judge Mark Blundell, who oversaw the appeal, accused Chowdhury Rahman of wasting the court’s time and attempting to conceal his reliance on AI-generated content. The case has now raised the prospect of a formal disciplinary investigation by the Bar Standards Board (BSB), with implications that extend far beyond a single barrister’s conduct and into the heart of the profession’s relationship with artificial intelligence.

A Case Built on Fictional Authorities

The controversy erupted during an asylum appeal involving two Honduran sisters who claimed to have fled their country after being targeted by the notorious criminal gang Mara Salvatrucha, better known as MS-13. The women, who arrived at Heathrow Airport in June 2022, alleged that gang members wanted them to become “their women” and had threatened to kill their family members after they refused. Their asylum application was initially rejected by the Home Office in November 2023, with officials citing inconsistencies in their testimony and a lack of documentary evidence to substantiate their claims.

When the case reached the Upper Tribunal, Chowdhury Rahman represented the sisters and argued that the First-tier Tribunal judge had made multiple errors — including misjudging the credibility of the claimants, failing to properly evaluate the evidence, and neglecting to consider whether internal relocation within Honduras was viable. However, as Judge Blundell examined Mr Rahman’s written submissions, he found that several of the cases cited as legal authorities either did not exist or were completely irrelevant to the issues at hand.

In his ruling, Judge Blundell expressed disbelief at the nature of the submissions, writing that “nothing said by Mr Rahman orally or in writing establishes an error of law on the part of the judge.” He further noted that out of the twelve authorities referenced, some were wholly fictitious while others bore no connection to the principles Mr Rahman claimed they supported. The judge remarked that he had been forced to waste considerable time attempting to locate and verify the purported case law, only to discover that it was fabricated.

Crucially, Judge Blundell identified striking similarities between one of the cited authorities and a previously reported instance in which ChatGPT had generated an entirely made-up legal citation. The resemblance led him to suspect that Mr Rahman had relied on a generative AI tool to prepare his appeal grounds without cross-checking the results. According to the judgment, Mr Rahman “appeared unfamiliar” with established legal research tools and was “consistently unable to grasp” how to direct the court to the relevant portions of the cases he cited. His inability to explain or defend the sources of his citations compounded the tribunal’s doubts.


When questioned, Chowdhury Rahman admitted that he had used “various websites” to conduct his legal research but would not specify which ones. Judge Blundell, finding that professed vagueness implausible, concluded that it was “overwhelmingly likely” he had employed generative AI, such as ChatGPT, to formulate his arguments — and that he had attempted to conceal that reliance during the proceedings.

Judicial Rebuke and Professional Implications

Judge Blundell’s postscript to the ruling delivered an unusually forceful rebuke. He described Chowdhury Rahman’s submissions as “wholly misleading” and said that the barrister’s conduct had resulted in a “waste of the tribunal’s time.” The judge’s remarks were not limited to the poor quality of the research; they extended to questions of honesty, competence, and professional integrity. “He has been called to the Bar of England and Wales,” the judge wrote, “and it is simply not possible that he misunderstood all of the authorities cited in the grounds of appeal to the extent that I have set out above.”

Such language carries serious weight within the legal profession, where barristers are expected to uphold the highest standards of accuracy and candour. Misleading a tribunal, even inadvertently, can constitute professional misconduct. Attempting to conceal the source of one’s legal arguments — particularly if those arguments are founded on fabricated material — represents an even graver ethical breach.


As a result, Judge Blundell announced that he was considering referring the matter to the Bar Standards Board, which oversees the regulation of barristers in England and Wales. If the BSB initiates an investigation, Chowdhury Rahman could face sanctions ranging from reprimand to suspension or even disbarment, depending on the findings.

The case has reignited debate about the responsibilities of legal professionals in an age where AI tools can produce convincing but unreliable information. While generative AI platforms such as ChatGPT are increasingly being used in law for drafting and summarisation tasks, the episode serves as a stark reminder that these tools are not substitutes for verified legal research. AI models, trained on large datasets, can sometimes produce “hallucinations” — confidently stated but entirely fictional responses. In a legal context, such hallucinations can have devastating consequences, potentially misleading courts and undermining justice.

The incident mirrors several recent controversies in other jurisdictions. In 2023, a U.S. attorney was sanctioned after submitting a legal brief containing nonexistent cases generated by ChatGPT, prompting American courts to issue warnings about the use of unverified AI sources. Judge Blundell’s remarks suggest that similar caution may now be necessary in the UK, where no formal regulatory framework yet governs AI use by lawyers.

The Broader Reckoning Over AI in Legal Practice

Beyond the disciplinary proceedings facing Chowdhury Rahman, this episode has become a touchstone for a broader reckoning within the legal community. The profession is increasingly confronting the ethical and practical challenges posed by artificial intelligence, especially in areas requiring meticulous accuracy and accountability. The fact that a barrister, trained in legal reasoning and evidence, could allegedly present AI-generated fiction before a tribunal has unsettled both practitioners and regulators.

Legal experts have warned that while AI can enhance efficiency, its misuse can erode trust in the justice system. The strength of judicial reasoning depends on the authenticity and reliability of cited authorities; once that foundation is compromised, the integrity of proceedings is called into question. Barristers are officers of the court, bound by strict duties to ensure that every assertion made before a judge is supported by verifiable evidence or precedent. The appearance of fabricated material in formal pleadings, therefore, strikes at the core of professional ethics.

Moreover, the incident underscores a growing skills gap between technological innovation and professional oversight. Judge Blundell’s observation that Chowdhury Rahman appeared “unfamiliar” with proper legal research tools reflects a problem that extends to many parts of the profession. Traditional methods, relying on databases such as LexisNexis and Westlaw, require training and discipline, whereas generative AI offers instant but unverified results. The temptation to rely on the latter — especially under time pressure — is significant, yet the risks are profound.

Regulators are now being urged to issue clearer guidance. The Bar Standards Board and the Law Society may soon be compelled to set explicit parameters on the permissible use of AI in legal work. Potential reforms could include mandatory disclosure of AI use in drafting legal documents, compulsory verification of all AI-generated content, and continuing professional development (CPD) requirements focusing on digital literacy.

The case has also sparked philosophical questions about the nature of professional responsibility in an AI-assisted world. Should ignorance of AI’s limitations constitute negligence? Should reliance on machine-generated text be treated as misconduct if it leads to misleading outcomes? These are questions that legal regulators must now grapple with urgently.

The debate extends beyond ethics to public confidence. The judiciary’s authority depends in part on the perceived competence and honesty of advocates. When barristers submit fictitious material, even inadvertently, it risks undermining that confidence. The public expects — and is entitled to — legal representation grounded in diligence and integrity, not the shortcuts of unverified technology.

For many practitioners, the incident will serve as a cautionary tale. It illustrates how reliance on AI, without rigorous human oversight, can rapidly spiral into professional disaster. What begins as a convenient drafting aid can end as a disciplinary inquiry, with reputations and careers at stake.

The profession’s response will likely determine how future generations of lawyers balance innovation with accountability. The integration of AI into legal research is inevitable, but so too is the necessity for safeguards. Courts and regulatory bodies must move swiftly to set standards that protect both clients and the judicial process from the distortions of artificial intelligence.

In the meantime, Chowdhury Rahman’s case stands as a vivid warning. If proven, it represents not merely a lapse in judgment but a fundamental breach of the duty of candour owed to the court. As the Bar Standards Board considers its next steps, the outcome will be watched closely — not only for its consequences for one barrister’s career but for what it signals about the future boundaries of legal practice in the age of AI.
