Supreme Court: AI-generated answers are not evidence

The Supreme Court has sent a clear signal to the legal community: AI‑generated answers are a tool, not a source of reliable, scientifically proven information and not evidence in court proceedings.

Essence of case No. 925/496/24

In commercial case No. 925/496/24, the city council sought to amend a land lease agreement and recalculate the rent.
The defendant, challenging the courts’ conclusions, tried to rely on answers generated by two AI systems – Grok and ChatGPT – as confirmation of the “correct” literal interpretation of one of the contract clauses.
The Commercial Cassation Court within the Supreme Court upheld the decisions of the lower courts and explicitly stated that the motion to treat AI answers as electronic evidence had been lawfully dismissed.
The Court emphasized that, in this situation, artificial intelligence was used not to assist justice but to cast doubt on already‑formed judicial findings.
Why AI answers are not "reliable information"

The Supreme Court stated directly that AI‑generated answers are not recognized as a source of reliable, scientifically proven information.
The reason is that AI algorithms generate text based on statistical models and available datasets, but they do not guarantee relevance, accuracy or compliance with scientific standards of proof.
Moreover, a system such as ChatGPT bears no responsibility for the content of its answers and has no procedural status as a subject that can be examined, cross‑examined or subjected to expert review, as is the case with experts or witnesses.
Therefore, AI outputs cannot be equated with expert reports, scientific publications or official positions of competent authorities – and these are the types of sources that courts traditionally accept as proper and admissible evidence.
The role of the court and the limits of technology

The panel of judges stressed that technologies must be used only to support and strengthen the rule of law, not to replace judicial discretion.
Decision‑making in court is the exclusive competence of human judges; it cannot be delegated, reassigned or effectively substituted by algorithms, even if those algorithms appear “smart”.
This position is consistent with earlier Supreme Court practice, where reliance on the “position” of ChatGPT was already treated as an abuse of procedural rights when aimed at discrediting existing court decisions.
The Court explicitly warns that uncritical reliance on AI may undermine trust in justice by creating the illusion of a "third instance" – an algorithm that is controlled by no one and accountable to no one.
What this means for lawyers and businesses

For lawyers, AI can be a useful working tool – for case‑law research, drafting, and structuring arguments – but not a source of "ready‑made legal conclusions" to be cited in court.
Professional responsibility for the legal position lies with the lawyer: using an AI answer does not relieve them of the duty to verify it against legislation, case law and the actual facts of the case.
For businesses, the practical takeaway is that references in disputes solely to the “opinion” of ChatGPT or another AI system do not work as evidence and do not replace expert reports, primary documents or official clarifications issued by state authorities.
On the contrary, demonstrative use of such answers as an “argument” may be perceived by the court as a sign of a weak position or even as an attempt to delay proceedings and create an unnecessary burden for the court.
How to work with AI correctly in law

The Supreme Court's current line does not ban the technology itself – it sets boundaries: AI is an auxiliary tool, not an autonomous source of legal or scientific truth.
Lawyers should use it as an “intellectual search” tool – for ideas, alternative interpretations and draft texts that are then refined on the basis of statutes, official sources and relevant case law.
In documents submitted to court, references should be made not to the chatbot itself but to specific legal provisions, Supreme Court rulings, governmental clarifications, academic articles or expert opinions that the lawyer has independently identified and verified.
Thus, AI remains a useful “back‑office” assistant for legal professionals but does not become a quasi‑expert or “virtual judge”, which the Supreme Court clearly does not allow.
If you have questions or issues related to the use of artificial intelligence in legal practice, drafting procedural documents or building an evidentiary base in line with the Supreme Court’s current approach, seek professional legal assistance — timely advice will help you choose the right strategy for protecting your rights.

Author – Maksym Bahniuk, Head of the Tax and Customs Law Practice at WINNER Law Firm.
