The term “Artificial Intelligence” is hardly new. It’s been around since the 1950s, when it was used to describe a machine’s ability to perform a task that would formerly have required human intelligence. It remains a broad term, applied these days to machines or software that perform tasks normally requiring human intelligence, such as understanding language, recognising patterns, solving problems, and making decisions.
AI can be trained through various methods, including machine learning and deep learning, to improve its performance over time. AI is a broader concept than Machine Learning, which is currently one of the main forms of AI technology capturing the public’s attention.
Machine Learning is a more restricted term, referring to the application of AI in machines that obtain data and learn from it. Machine Learning often operates by using statistical methods to make classifications and predictions. By contrast, a wider concept is used for AI – one that encompasses a variety of technologies, including Machine Learning, natural language processing (or NLP – technology that trains computers to understand what humans write or say), and facial recognition.[1]
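To make that idea of “learning from data” concrete, here is a toy sketch of my own – not drawn from the sources cited, and with entirely made-up data – written in Python using the freely available scikit-learn library. The program is never told a rule; it infers one statistically from examples and then makes a classification and a prediction:

```python
# Toy illustration of Machine Learning: the program infers a statistical
# rule from examples rather than being programmed with one.
# (Hypothetical data, for illustration only; requires the scikit-learn library.)
from sklearn.linear_model import LogisticRegression

# Examples: hours the parties spent preparing, and whether the matter settled.
hours_prepared = [[1], [2], [3], [8], [9], [10]]
settled = [0, 0, 0, 1, 1, 1]        # 0 = no settlement, 1 = settlement

model = LogisticRegression()
model.fit(hours_prepared, settled)  # "learn" the pattern from the data

# The model can now classify an unseen case and attach a probability to it.
print(model.predict([[6]]))         # predicted outcome for a new matter
print(model.predict_proba([[6]]))   # the probabilities behind that prediction
```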
I won’t get bogged down in trying to refine these definitions, not least because I’m simply not qualified to do so. What we all know, however, is that these technologies are evolving at a rapid pace, in part due to increased access to large datasets (often referred to as Big Data).
Machine learning has evolved significantly in recent years in its ability to recognise patterns in data sets. Some confidently predict that AI will reach human-level performance in such tasks in the near future.
While the application of this technology can be very useful, it may also be disruptive, as well as worrying if used inappropriately or for purposes such as limiting people’s autonomy. Consider, for example, the facial recognition system deployed in the Chinese region of Xinjiang – a region with a Muslim majority – which allows the government to carry out mass surveillance of its citizens.
It’s the sort of technology that’s portrayed in science fiction films to put us in fear of a dystopian future: a world in which government computers monitor and evaluate the behaviour of individuals, businesses, and government entities, assigning scores based on financial reliability, legal compliance, and social conduct. High scores can lead to benefits like easier access to loans, while low scores can result in penalties such as travel restrictions.
What I’ve just described isn’t science fiction. It is, in fact, an accurate description of the Chinese Social Credit Scheme which has been operating for the last 10 years.
A couple of months ago, Chief Justice Andrew Bell, on the occasion of the bicentenary of the NSW Supreme Court, gave an interview to the ABC in which he specifically identified emerging AI as one of the significant challenges his Court will need to face in the future. He said that high on the agenda of challenges is the increasing sophistication of generative AI – programs like ChatGPT which can produce content such as text and imagery – and the complex issues it creates for the legal system. He cited a recent case in the ACT in which the court had been presented with a character reference that appeared to have been written with help from ChatGPT; as a result, the court gave the reference little weight.
The Chief Justice went on to say that his concerns weren’t limited to the prospect of fraud or attempted fraud on the court through the use of AI, but extended to the need for courts and practitioners to be cautious about so-called “hallucinations” – the phenomenon in which the program, designed to assist the user, simply invents an answer in order to provide the user with an outcome. Again, we have all heard of the notorious examples in which AI-generated submissions containing imaginary case references have been put before a court.
Applications such as ChatGPT have rapidly become the popular face of AI. ChatGPT was said to be the fastest-growing consumer application in internet history, acquiring 100 million users within two months of its release in late 2022. In February 2023, Colombian Judge Juan Manuel Padilla Garcia posed several legal questions to ChatGPT and included the chatbot’s replies alongside his own ruling.
Following extensive online debate, Judge Garcia remarked that while ChatGPT and other technology programs should not be used to replace judges, they can improve the efficiency of judicial proceedings by aiding in document drafting and performing other secretarial tasks: “by asking questions to the application, we do not stop being judges, thinking beings”, he said. Ironically, when asked by journalists about its role in the judicial system, ChatGPT itself appeared more reluctant than Judge Garcia, responding, “Judges should not use ChatGPT when ruling on legal cases … It is not a substitute for the knowledge, expertise and judgment of a human judge”.[2]
In a display of the intellectually rigorous and intensive research for which I am renowned, I asked ChatGPT to describe, in simple terms, the ways in which AI could be used in mediations. In seconds, it gave me a remarkably concise and lucid response:
The integration of artificial intelligence (AI) in mediations represents a significant advancement in dispute resolution, offering both opportunities and challenges. AI’s role in mediations primarily revolves around enhancing efficiency and accuracy. For instance, AI-powered tools can analyze vast amounts of data from previous cases to identify patterns and predict outcomes, helping mediators craft more informed strategies. They can also facilitate the mediation process by managing schedules, sending reminders, and even offering real-time translation services, thereby bridging communication gaps.
Moreover, AI can support mediators in evaluating the dynamics of negotiations through sentiment analysis and behavioural cues, providing insights into the parties’ emotional states and potentially guiding interventions. This could lead to more nuanced and empathetic handling of disputes.
However, there are challenges and ethical considerations. The reliance on AI might undermine the human element crucial in mediation, such as empathy and understanding, which are essential for resolving conflicts. Additionally, concerns about data privacy and the potential for algorithmic biases must be addressed to ensure fair and equitable outcomes.
I couldn’t have put it better myself!
Then I asked it how AI could be used in evaluative mediations. Again, within seconds it gave me a very useful answer, which identified the valuable tools AI could bring to such a mediation, including data analysis, legal research, scenario modelling, sentiment and behaviour analysis, document review and management, risk assessment, and even bias detection.
It gave a similar answer when I asked the general question, “How does a mediator use AI?”, but interestingly its answer to that question included a couple of important caveats. One was that mediators must ensure that AI tools comply with data privacy laws and that sensitive information is securely managed. The other was to highlight what it described as The Human Element: “While AI can provide valuable insights and support, the human element remains crucial. Mediators should use AI as a tool to complement their expertise rather than replace human judgment and empathy.”
Then when I asked the machine what role ChatGPT has in the judicial system, it reeled off a list of predictable responses, such as case management, document review, and precedent research. But the response also included a confidentiality caveat and went on to state: “Decision-Making: ChatGPT is not used for decision-making in judicial processes. Judicial decisions are complex and require human judgment, empathy, and understanding that AI cannot fully replicate.”
Doing the best I can to read the tea leaves, I think we have to accept that the use of AI to help mediate disputes will be a feature of our practices in the near future. It is not commonplace yet, but I can certainly see the time, soon, when it features as a tool to which we have regular resort – just as we now unthinkingly look at our smartphones to do everything from checking our diaries to checking our bank balances.
Currently, the biggest risk of using AI in mediation is the possibility that it will introduce errors into the process, such as the so-called hallucinations described earlier. And even ChatGPT itself seems to concede that, when unchecked by human mediators, AI mediation risks running afoul of laws and ethical standards.
Generative AI is also ill-equipped to help parties cope with the strong emotions that often come up during mediation. One of our core skills as mediators is to manage emotions such as anger, frustration, and fear – which may be fuelling the conflict – and even assist the parties to channel those emotions in constructive ways to enable resolution of a dispute.
That being said, AI applications do provide us with valuable tools – for example, in disputes involving large volumes of data, which AI can quickly sift through and analyse.
Another example: concerns about ensuring that non-English-speaking participants are actively involved and understand what’s happening can be addressed by pulling out your smartphone and opening a simultaneous translator such as Google Translate.
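For the technically curious, the same kind of machine translation can also be reached programmatically. Here is a minimal sketch – illustrative only, and assuming the google-cloud-translate Python package with Google Cloud credentials already configured – of translating a single announcement by the mediator:

```python
# Minimal machine-translation sketch (illustrative only).
# Assumes the google-cloud-translate package and configured Google Cloud
# credentials; the text and target language are made up for this example.
from google.cloud import translate_v2 as translate

client = translate.Client()
result = client.translate(
    "The parties have agreed to adjourn until tomorrow morning.",
    target_language="zh-CN",  # hypothetical choice of target language
)
print(result["translatedText"])
```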
AI chatbots can even be used to assist the mediator in formulating the process of negotiation. For example, generative AI tools can pose questions aimed at identifying parties’ underlying interests, propose offers, and predict the likelihood that such offers will be accepted. You, as the human mediator, might opt to compare your own list of questions to those generated by AI technology to make sure you haven’t missed anything.
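By way of illustration only – the model name, the prompt wording, and the use of the OpenAI Python client are my own assumptions, not a statement of how any mediator actually works – such a question-generating request might look something like this:

```python
# Sketch of prompting a generative AI tool for interest-identifying questions.
# (Illustrative only: the model name and prompt wording are assumptions.)
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

summary = (
    "Commercial lease dispute. The landlord claims $550,000 for wrongful "
    "termination; the guarantor has refused to pay more than $120,000."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You assist a mediator. Do not give legal advice."},
        {"role": "user",
         "content": f"Dispute summary: {summary}\n"
                    "Suggest five open questions a mediator could ask to "
                    "uncover each party's underlying interests."},
    ],
)
print(response.choices[0].message.content)
```

The point is not the code but the workflow: the mediator supplies a neutral summary and compares the machine’s suggested questions against his or her own list.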
Finally, just in case you’re wondering whether the future really is here, only a few weeks ago the Harvard Law School Program on Negotiation newsletter carried a report about the use of the ChatGPT chatbot in a mediation.[3] It recounted a recent episode in which an experienced mediator was mediating a dispute over the wrongful termination of a lease. The landlord was seeking $550,000 from the guarantor, who refused to pay more than $120,000.
With the parties at an impasse, the mediator asked ChatGPT for advice on what number to propose to the parties. The chatbot recommended $275,000. The mediator thought this was more than the guarantor would be willing to pay. Still, he asked the parties’ lawyers if their clients would agree to accept ChatGPT’s number—which would remain unknown to them—in the event of impasse. The parties agreed.
The prospect of abiding by ChatGPT’s advice motivated the parties to resume their settlement negotiations. Ultimately, the guarantor offered $270,000 – just $5,000 less than ChatGPT’s recommendation – and the landlord accepted. The two sides signed their settlement agreement, then asked what ChatGPT had recommended. After hearing the number, both sides remained satisfied with their negotiated deal.
Welcome to the new world of mediator’s bids!
[1] See generally Professor Pablo Cortes, “Artificial Intelligence in Dispute Resolution”, CTLR, 2024, 30(5).
[2] R. Abbott and B. Elliott, “Putting the Artificial Intelligence in Alternative Dispute Resolution”, Amicus Curiae, 2023, 4(3).
[3] www.pon.harvard.edu/daily/mediation/ai-mediation-using-ai-to-help-mediate-disputes/