Courts across multiple common-law jurisdictions have, over the past 18 months, delivered a series of significant decisions addressing the use of artificial intelligence in legal practice.
A clear pattern has emerged: tribunals are increasingly prepared to scrutinise how parties deploy AI and to enforce obligations of confidentiality, privilege and professional responsibility.
Two issues dominate recent judicial commentary:
- Loss of privilege through the use of publicly available (and therefore non-confidential) AI tools; and
- The submission of hallucinated or non-existent authorities.
Privilege
United Kingdom
The Upper Tribunal (Immigration and Asylum Chamber) in UK v Secretary of State for the Home Department [2026] UKUT 81 (IAC) has issued the UK’s first decision addressing privilege and AI. By way of refresher, legal advice privilege in England and Wales protects (i) confidential communications; (ii) between a lawyer and client (or authorised agent); (iii) made for the purpose of obtaining or giving legal advice. Litigation privilege applies where material is created for the dominant purpose of reasonably contemplated litigation. The Tribunal held that solicitors’ uploading of confidential client documents to publicly available AI platforms, such as ChatGPT, places those documents in the public domain and therefore breaches client confidentiality. Once confidentiality is lost, no claim to privilege can be maintained. Closed, private enterprise tools that do not re-use inputs (e.g. private enterprise Microsoft Copilot) were distinguished as less problematic; however, lawyers remain responsible for ensuring that confidentiality is preserved.
The Tribunal also emphasised that while specialist AI tools can assist disclosure review and legal research, lawyers must exercise vigilance; risks include inadvertent disclosure of confidential materials.
United States
US courts are similarly beginning to delineate the boundaries of confidentiality and privilege in the context of AI-generated material. A key recent example is United States v. Heppner, No. 25 Cr. 503 (JSR) (S.D.N.Y. Feb. 17, 2026).
In that case the Judge granted the Government’s motion for a ruling that the defendant’s written exchanges with the AI platform Claude were not privileged. The decision addressed a question of first impression nationwide: when a user communicates with a publicly available AI platform in connection with a pending criminal investigation, are those communications protected by attorney-client privilege or the work product doctrine? The Court’s answer was “no”.
Attorney-client privilege in the US is similar to legal advice privilege: it requires communications (1) between a client and his or her attorney, (2) that are intended to be, and in fact were, kept confidential, and (3) made for the purpose of obtaining or providing legal advice. All three elements must be present.
Applying these principles, the Court held that the AI documents lacked at least two, if not all three, of the elements of attorney-client privilege:
- Claude is not an attorney, and attorney-client privilege only protects communications between a client and their attorney (or authorised agent).
- The communication was not confidential. Anthropic’s privacy policy expressly states that it collects both user inputs and AI outputs, uses them for training and may disclose them to regulators and other third parties. Users therefore “voluntarily disclose” their conversation, defeating confidentiality.
- The defendant did not communicate with Claude for the purpose of obtaining legal advice, nor did he communicate with the tool at the direction or suggestion of counsel.
The position in relation to the work product doctrine – which “shelters the mental processes of the attorney, providing a privileged area within which he can analyze and prepare his client’s case” – is less clear.
In Heppner, the Court held that the AI documents were not protected by the work product doctrine because:
- They were not prepared by or at the direction of counsel; they were created entirely at the defendant’s own initiative.
- They did not reflect counsel’s strategy at the time they were created.
However, in a civil case in the Eastern District of Michigan (Warner v Gilbarco Inc, No. 2:24-cv-12333 (E.D. Mich. Feb. 10, 2026)), the Court found that a self-represented litigant had properly asserted privilege over materials generated by ChatGPT. The Court reasoned that using AI tools to prepare legal materials is analogous to traditional activities protected as work product, and rejected the argument that employing generative AI such as ChatGPT amounted to a waiver of work product protection. Waiver, the Court observed, requires disclosure to an adversary, or disclosure that makes it likely the materials will reach an adversary; “ChatGPT (and other generative AI programs) are tools, not persons”. Given that the party was self-represented, their use of AI was more akin to that of a lawyer using AI to prepare a case for litigation.
Hallucinations
United Kingdom
England has also seen a steady stream of cases addressing AI hallucination. The High Court in Ayinde v London Borough of Haringey [2025] EWHC 1383 (Admin) observed:
“Freely available generative artificial intelligence tools, trained on a large language model such as ChatGPT, are not capable of conducting reliable legal research. Such tools can produce apparently coherent and plausible responses to prompts, but those coherent and plausible responses may turn out to be entirely incorrect. The responses may make confident assertions that are simply untrue. They may cite sources that do not exist. They may purport to quote passages from a genuine source that do not appear in that source.”
The Ayinde decision makes clear that filing documents containing AI-generated, non-existent authorities is unacceptable. The Court described such conduct as “wholly improper” and potentially subject to disciplinary referral.
The Upper Tribunal, in the above-referenced UK v Secretary of State, also commented on hallucinated material, describing it as a waste of the Tribunal’s limited resources and, in reference to the duty of lawyers and the role of supervision, stating:
“The citation of cases which do not exist sends that judge on a fool’s errand. The time spent on such an errand is at the expense of other judicial business and is not in the interests of justice.
[…]
We emphasise the primary duty of regulated lawyers is to the Court and Upper Tribunal and to the cause of truth and justice. That duty is not discharged by professional representatives who knowingly or recklessly place false information before the Tribunal, or who fail to supervise work undertaken by other members of their firm for whom they are responsible.
[…]
Any practitioner who uses non-specialist AI to undertake research or drafting is obliged to undertake rigorous checks to ensure that any information gleaned from those sources is true and accurate. Anyone with responsibility for legal practice at a firm of solicitors or regulated legal advisers, must be aware of those pitfalls and of the need to warn staff about the dangers of using non-specialist AI.”
In an Addendum to its decision in Choksi v IPS Law LLP [2025] EWHC 2804 (Ch), the Court commented on references to a number of cases that had incorrect citations or wrong names, or which simply did not exist. It reinforced the very clear warning from the Administrative Court in Ayinde about the misuse of AI and the misleading citation of authorities.
In D (A Child) (Recusal) [2025] EWCA Civ 1570, the Court of Appeal – when absolving the litigant of any intention to mislead the Court by the use of false citations – commented (at [83]):
“… Used properly and responsibly, artificial intelligence can be of assistance to litigants and lawyers when preparing cases. But it is not an authoritative or infallible body of legal knowledge. There are growing reports of “hallucinations” infecting legal arguments through citations of cases for propositions for which they are not authority and, in some instances, the citation of cases which do not exist at all. At worst this may lead to other parties and the court being misled. In any event, it means that extra time is taken and costs are incurred in cross checking and correcting errors. All parties – represented and unrepresented – owe a duty to the court to ensure that cases cited in legal argument are genuine and provide authority for the proposition advanced.”
The Bar Council has reinforced this position, calling out the “serious implications” for the administration of justice where practitioners rely uncritically on generative AI. The Law Society has also issued guidance on generative AI in response to the new risks it creates.
New Zealand
Like their UK counterparts, the New Zealand courts have confronted submissions citing non-existent, AI-generated case law.
In Wikeley v Kea Investments Ltd [2024] NZCA 609 the Court of Appeal noted the use of generative artificial intelligence in the appellant’s original memorandum, evidenced by references to apparently non-existent cases. It reminded the parties of the guidance issued by the judiciary on the use of generative AI in the courts (discussed below).
Further, when declining leave to appeal in Jones v Family Court at Whangārei [2026] NZSC 1, the New Zealand Supreme Court noted that the applicant “cited a number of authorities which appear to have been hallucinated by an Artificial Intelligence (AI) application.” The Supreme Court said:
“Misuse of AI in legal proceedings has serious implications for the administration of justice and public confidence in the justice system. Persons filing submissions in court must ensure all authorities referred to are genuine and correctly cited…. Reliance on false citations, including the unverified outputs of AI applications, may in serious cases amount to obstruction of justice or contempt of court.”
New Zealand’s judiciary has issued guidelines for judges, lawyers and non-lawyers addressing risks associated with AI, including hallucinations, bias, confidentiality and ethical responsibilities. These guidelines, while non-binding, signal an institutional expectation that AI use must align with the established duties to the Court.
Conclusion
AI is already influencing multiple stages of litigation, including discovery, document management and legal research. Its role is only likely to grow with:
- the growth in AI tools specifically tailored to legal practice;
- the prevalence of AI-assisted disclosure tools;
- the need to manage AI‑assisted evidence preparation; and
- the adoption of various AI tools by law firms and legal practices for drafting and other tasks.
While judicial decisions naturally focus on risk, there is growing acknowledgment of the benefits of responsible AI adoption, including:
- Efficiency and Speed – AI tools can accelerate document review, assist with workflow and improve drafting efficiency. Courts increasingly recognise the high volume of data that must be processed for disclosure, and the use of AI may become necessary to conduct a proportionate review.
- Access to Justice – AI tools, when used responsibly, can help self-represented litigants understand court procedures and processes, acting as a valuable information source and sounding board.
- Enhanced Quality Control – when used with appropriate supervision, AI-assisted review platforms can reduce false positives and lead to more accurate disclosure.
The recent cases highlight not only the courts’ growing willingness to scrutinise how AI is used, but also the practical steps that individuals and organisations should be taking. For clients, the question is no longer whether AI will feature in litigation workflows, but how to integrate it safely. The key consideration is the protection of confidentiality and privilege: public AI tools pose real risks to both, and any input that may contain confidential, privileged or strategically sensitive information should be carefully controlled. Clients should ensure:
- Internal policies expressly prohibit the use of public AI tools such as Claude and ChatGPT for documents containing confidential or privileged material.
- Privileged material is reviewed only with closed, private enterprise tools offering robust contractual privacy protections, so as to maximise the prospect that confidentiality is maintained.
- They and their employees understand that uploading documents to public AI platforms, or interrogating or discussing legal advice or case strategy with such a platform, may result in a loss of confidentiality, and hence of any claim to privilege, with adverse consequences in litigation.
- They discuss with their lawyers in advance any contemplated use of AI in relation to a dispute, and defer to those lawyers to direct any such use, particularly while this area of law continues to develop.
Courts are no longer treating AI misuse as an outlier. AI is increasingly the subject of judicial consideration of core legal principles: privilege, confidentiality and professional duties. For clients, AI can deliver significant efficiencies, but only where it is deployed within a structured framework that preserves legal protections and manages litigation risk.
The use of AI in the legal profession will continue to occupy courts and lawmakers as they grapple with its role in an ever-expanding variety of contexts, including: detailed evidence regarding parties’ use of AI in disputes (Fortis Advisors, LLC v Krafton, Inc., C.A. No. 2025-0805-LWW (Del. Ch. Mar. 16, 2026)); an effort by the New York state legislature to ban AI chatbots from posing as lawyers; and the landmark case of Nippon Life Insurance Company of America v OpenAI Foundation and OpenAI Group, No. 1:26-cv-02448, brought in Illinois in March 2026 by an insurance company alleging that ChatGPT engaged in the unauthorised practice of law. It is certainly a space to watch.