The Civil Justice Council (CJC) has published its interim report and consultation paper on the Use of AI in Preparing Court Documents. The consultation covers the full spectrum of documents prepared by legal representatives, including the preparation of expert witness reports.
These proposals follow evidence that AI tools are already being used widely by experts and legal teams.
Proposed Framework
The CJC makes clear that the proposed framework does not extend to:
- litigants in person using AI independently
- judges using AI
- AI applied only for non‑substantive administrative tasks (such as grammar checking, formatting, or transcription)
The report explicitly states that these areas fall outside the focus of the consultation.
Expert Reports: A Shift Towards Mandatory Transparency
Expert witness reports are included, following an amendment to the working group’s Terms of Reference. The CJC notes the growing use of AI in expert practice and highlights the risks of inaccurate or unverified AI‑generated material being introduced into proceedings.
Research within the interim report shows that 20% of expert witnesses have used AI in their work, including drafting or summarising material for expert reports. This follows a widely discussed U.S. case (Kohls v Ellison, 2025) in which an expert inadvertently included AI‑generated misinformation in their report, an outcome the CJC is keen to prevent in England and Wales.
To address this, the working group proposes a significant change:
Experts would be required to disclose any use of AI, except where it is limited to administrative functions such as transcription or basic formatting. The disclosure must identify which AI tools were used and how they were used in preparing the report.
This would represent an amendment to the existing statement of truth required under Practice Direction 35, adding a new obligation specifically addressing AI involvement.
Why This Matters for Expert Witnesses
The CJC stresses that the integrity of expert evidence depends on the expert’s independence and methodological accuracy. AI‑generated analysis, if not transparently explained, risks obscuring the reasoning behind an expert’s conclusions or introducing errors that may go undetected by the court or opposing experts. The CJC also warns that undisclosed AI use could create an uneven playing field, particularly where one expert relies on AI to enhance, summarise or generate parts of a report while another does not, making transparency essential to fairness.
In addition, the report highlights that large language models are prone to hallucination, producing fabricated facts, authorities or scientific claims, and that such material, even if included inadvertently, could mislead the court and potentially expose the expert to criticism or sanctions.
Key Takeaways for Expert Witnesses
- AI use would not be prohibited, but undisclosed substantive use would be.
- The CJC expects experts to be explicit about what AI tools they used and for what purpose.
- Administrative uses (transcription, formatting, spelling/grammar) do not require disclosure.
- Any AI that generates or alters substantive content must be identified.
- If adopted, these proposals would change how experts prepare, record and justify their evidence.
Consultation Timeline
The consultation is open for responses until 14th April 2026, and experts are strongly encouraged to participate. As the group notes, expert evidence is uniquely vulnerable to the risks posed by generative AI, and uniquely influential in judicial decision‑making.
You can read the report in full here.