However, the report, released on Tuesday, issued several warnings about the use of the technology.
“The use of AI throughout society continues to increase, and so does its relevance to the court and tribunal system,” wrote Lady Chief Justice of England & Wales Baroness Carr of Walton-on-the-Hill, who co-authored the report.
“All judicial office holders must be alive to the potential risks,” she added.
‘Jolly Useful’
Last year, Lord Justice Birss told a conference, as reported in the Law Gazette, that he had used the chatbot ChatGPT to summarise an area of law he was already familiar with, and that it was “jolly useful.”
He added: “I think what is of most interest is that you can ask these large language models to summarise information. It is useful and it will be used and I can tell you, I have used it.”
The report said that with legal research, AI tools “are a poor way of conducting research to find new information you cannot verify independently.” It added that the current public AI chatbots “do not produce convincing analysis or reasoning.”
It also warned that AI chatbots make up fictitious cases, citations, or quotes, and refer to legislation, articles, or legal texts that do not exist. They can also provide “incorrect or misleading” information about the law or how it might apply.
It said that AI tools are capable of summarising large bodies of text, though “care needs to be taken to ensure the summary is accurate.”
The guidance revealed that unrepresented litigants are increasingly using AI chatbots.
It said that these may be the “only source of advice or assistance some litigants receive,” though many won’t be able to independently verify the legal information provided by chatbots and may not be aware that they are prone to error.
In one case, a person without a lawyer tried to present fictitious submissions in court based on answers provided by the ChatGPT chatbot, according to the law firm Peter Lynn and Partners, which wrote a blog post on the subject.
‘Compromised’
Last year, the House of Lords raised concerns about the use of artificial intelligence technologies in the criminal justice system as a “potential risk to the public’s fundamental human rights and civil liberties.”
It said that a lack of minimum standards, transparency, evaluation, and training in AI technologies meant that the public’s human rights and civil liberties could be “compromised.”
The Lords said they had uncovered “a landscape, a new Wild West, in which new technologies are developing at a pace that public awareness, government and legislation have not kept up with.”
“Facial recognition may be the best known of these new technologies but in reality there are many more already in use, with more being developed all the time. Algorithms are being used to improve crime detection, aid the security categorisation of prisoners, streamline entry clearance processes at our borders and generate new insights that feed into the entire criminal justice pipeline,” the committee said.