Generative AI and the Bar – A Long Way from the Horse and Cart but still a Fair Way to Travel!
By Hearsay
Thursday 7th March, 2024
All lawyers, including barristers, are being challenged to use artificial intelligence technology in their conduct of practice. Suffice it to say it is not yet foolproof, nor fully developed. For the assistance of the Bar, Hearsay provides below links to a raft of recent commentary concerning barristers’ – and, generally, lawyers’ – use of Generative AI. One inimical issue which has arisen is the propensity of generative AI to “hallucinate”. The term is a verb which, in this context, is defined in the Oxford Dictionary as meaning:
… to produce false information contrary to the intent of the user and present it as true and factual.
Such is the currency of that fresh meaning that the word was announced as the 2023 “word of the year” by the Cambridge Dictionary and Dictionary.com.
Counsel – in their provision of oral and written advice, and in making submissions to court – need to harbour a good understanding of the benefits – and also the potential weaknesses – of generative AI use. The articles below are afforded for that purpose. After perusing those articles, or some of them, the reader will have the benefit of a member of the Queensland Bar – Adrian Duffy KC – commenting upon such use.
Letters to the Editor on this topic, by members of the Bar, are encouraged and will be published in the next issue.
Much has been written about the impact that generative artificial intelligence will have on the practice of law.
There can be little doubt that artificial intelligence is likely to have a significant impact. A recent International Monetary Fund paper concluded: [1]
Artificial intelligence (AI) is set to profoundly change the global economy, with some commentators seeing it as akin to a new industrial revolution. Its consequences for economies and societies remain hard to foresee. This is especially evident in the context of labor markets, where AI promises to increase productivity while threatening to replace humans in some jobs and to complement them in others.
The paper went on to opine that 40 percent of global employment is exposed to AI, but in advanced economies, that figure is about 60 percent.
AI has been described as “the next revolutionary change in civil litigation” and having an effect akin to the Industrial Revolution. [2] It has been suggested that lawyers who do not embrace AI will be left behind. [3]
There seems little doubt that implementation of generative artificial intelligence into legal practice will have profound effects, even if it is going too far to say it will be transformative. It may threaten employment, as there are respectable views that a draft memo about a legal question produced by ChatGPT will be approximately as good as what a first-year lawyer would produce. [4] However, it is equally valid to anticipate that it will enhance the practice of law. [5]
Some Terminology
There is little doubt that various applications of artificial intelligence are becoming more widespread, and the technology is developing very quickly. But what are we talking about when considering the impact of artificial intelligence on legal practice? It may be helpful to identify some key concepts and terminology. [6]
Artificial intelligence, commonly identified by the acronym AI, may be described as the intelligence of machines or software, as opposed to the intelligence of living beings, mostly humans. It is widely available and widely used in applications such as internet search engines such as Google, recommendation systems such as Amazon’s and speech-based assistance systems such as Siri. [7]
Generative artificial intelligence (GenAI) is AI that is capable of generating text, images and other data. GenAI programs or models learn the patterns and structure of their input data and generate new data that has similar characteristics. [8]
The most commonly known GenAI model, at least to lawyers, is ChatGPT. [9] It is openly available and, for the basic level model at least, free.
The most common use to which barristers are likely to put a GenAI model such as ChatGPT is in drafting submissions and opinions. It may, as the technology develops, become useful in drafting pleadings as well.
Reliability Issues
Many barristers will have had the experience of discovering that one or more authorities cited by an author for a proposition, even in respected texts, do not in fact support the posited proposition. While such experiences are likely to be thankfully rare, they occur often enough to serve as a warning against using cited cases as authority without checking that they do indeed support the relevant proposition.
So it is with drafts produced by other lawyers, whether instructing solicitors or junior counsel. A barrister needs to be confident that an authority cited in an opinion or a submission does stand as authority for what it is said to be authority for.
The fact that the reliability issues are well publicised means that it is very likely that reliance by a barrister upon drafting done by AI without having checked it would expose the barrister to a claim in negligence if damage is suffered. [10]
There may be other exposure as well, by way of disciplinary action.
There have been examples of lawyers using GenAI in the context of submissions to courts, where the documents produced contained serious inaccuracies that passed uncorrected by the lawyers using them. These inaccuracies have even extended to the GenAI model citing cases that did not in fact exist. This phenomenon has its own name – hallucination. [11]
A relatively well-known example is the case of Mata v Avianca, Inc, a personal injury case heard in the United States District Court for the Southern District of New York during 2023. [12] The claim arose out of injuries alleged to have been suffered by Mr Mata during international carriage by air.
The respondent applied to dismiss the action by reason of the claims being time-barred under the Montreal Convention.
The Plaintiff’s lawyers filed a response to the motion and it was that response that contained what may generously be called the inaccuracies. I will return further below to the process by which that response was “settled”.
In fact, it emerged that a number of the cases cited in the response were fabricated. The cases had names that were unable to be traced. One or more had docket numbers (apparently the equivalent of court file numbers) that in fact belonged to other cases. One or more had citations to reports that were in fact the citations of other cases.
When the court directed that an affidavit be filed attaching copies of the decisions in question, the Plaintiff’s lawyers filed one which attached extracts of what purported to be most of those decisions. Those extracts were, of course, themselves fabricated and were found to contain references to other fabricated authorities. One of the cases was found to contain legal analysis that the Judge described as “gibberish”. [13]
There were also examples of genuine cases with correct citations, but which did not support the propositions for which they were advanced. [14]
Numerous other deficiencies emerged.
There was evidence that at some point, the lawyer concerned had interrogated ChatGPT about one of the cases, asking whether it was “a real case”. The evidence was that ChatGPT answered saying that it was a real case and giving a citation to the report. Another query asking whether the other cases were fakes was answered:
No, the other cases I provided are real and can be found in reputable legal databases such as LexisNexis and Westlaw.
There is no clear finding as to whether the evidence given as to these subsequent queries was true. If the answer extracted above was one given by ChatGPT then it would appear to be a plainly false answer – ChatGPT not having access to those databases itself. [15]
The Plaintiff’s lawyers ultimately admitted that six cases that had been cited were generated by ChatGPT and did not in fact exist. The lawyers were sanctioned by the court.
It should be noted, however, that the reliance upon ChatGPT was not the only conduct for which sanctions were imposed. There were other incidents throughout the proceeding which the court found involved misleading conduct on the part of the lawyers concerned. In some respects it is a case study in what not to do when filing documents and making submissions to a court. That is not, however, to diminish the seriousness which the court attributed to the conduct in filing material with false authorities cited. It seems likely that even without the other misconduct, the court would have imposed sanctions for that alone.
Aside from what occurred in Mata v Avianca, many examples of hallucinations are available. Some are referred to in this issue of Hearsay. [16]
Misinformation released into the public domain is not the exclusive province of GenAI, of course. Most people would accept that photographs produced in magazines will have been “touched-up”, for example to remove blemishes. But those people would expect that the photograph at least depicts something “real”. There are many examples of photographs that have been concocted – for example by inserting a person or thing that was not there when the original “photograph” was taken, or by removing them. Those have been the product of GenAI technology, as well as by simple human intervention – by photoshopping or editing.
While the public may expect, and accept, some level of “touching-up” in that context, in the case of legal submissions and opinions there is no such latitude. It would never be acceptable to falsify an authority, even for a relatively minor point. Even flourishes of language are not viewed with favour. It is never proper to submit or argue, in a misleading way, that a particular authority stands for something more than it truly does.
There are, of course, questions of degree in advancing an argument that a particular authority ought be taken to mean something – the test is usually expressed to be whether such a position is “arguable”. It is an entirely different thing to argue that there is some authority that simply does not exist.
Barristers will be familiar with the experience of a text citing authorities for a proposition that, on examination, do not in fact support that proposition. While fortunately rare, experience shows that such a thing can occur even in well-respected texts. Is counsel free to simply cite authorities for a proposition because they have in turn been cited for that proposition in a textbook? The answer must be either “no” or at least that it would be foolish to do so.
Another significant issue in the use of GenAI bears mention. Publicly available models, such as ChatGPT, use all the information that is available to them in order to “learn”. Thus, if information that is confidential is input so as to generate text or answer queries, the model will learn from that information. So if details of a confidential contract are input for the purpose of, for example, generating a comparison or a written submission, those details may be subsequently available to someone else running an applicable query. Putting such information, at least potentially, into the public domain will be a breach of a barrister’s obligations.
In the introductory paragraph of the judgment in Mata v Avianca, the District Court Judge, Castel USDJ, said:
In researching and drafting court submissions, good lawyers appropriately obtain assistance from junior lawyers, law students, contract lawyers, legal encyclopedias and databases such as Westlaw and LexisNexis. Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings… [The Plaintiff’s lawyers] abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question.
The problem that arose in Mata v Avianca was not uniquely an AI problem. It was rather a failure of the lawyers to apply their own legal expertise when blindly relying upon what had been provided to them.
There are, of course, other examples of where lawyers rely upon AI in the conduct of litigation. One example is in disclosure (or discovery) of documents. Because of the prevalence of electronic records, it is very often necessary to conduct searches for disclosable documents by interrogating relevant databases of documents using relevant search terms. [17] In those instances, lawyers routinely disclose (often in affidavit form) what search terms were used and how searches were conducted. But that does not mean that blind reliance on the results of computer-based searches is acceptable. Sometimes the results may indicate that there was some flaw in the methodology because documents which obviously ought to exist are not identified by the searches. Professional duty requires that further enquiries be made in such circumstances.
Furthermore, it could hardly be suggested that conducting computer-based searches would entirely relieve a solicitor of their professional duty to examine documents where necessary and form judgments about relevance and disclosability.
The courts rely upon counsel to undertake proper and thorough research and to only make soundly based submissions. Clients rely upon counsel to give proper and sound advice, properly supported by thorough research.
In addition to the general duties that exist, Rule 12 of the Queensland 2011 Barristers Rule [18] prohibits a barrister engaging in conduct that is dishonest or discreditable, prejudicial to the administration of justice or likely to diminish public confidence in the profession or bring it into disrepute. It would seem fairly likely that presenting a submission or rendering an opinion that includes false authorities, whether produced by GenAI or not, would contravene one or more aspects of that Rule.
Legg and Bell [19] argue that artificial intelligence will not replace what is perhaps the central feature that a good lawyer brings to a matter – judgment. They cite Allsop CJ speaking extra-curially, where his Honour said: [20]
… It involves being human and recognising the human elements at play in a dispute. It involves recognition and appreciation of the whole. It involves bringing wisdom to resolution of the dispute. It involves wisdom in presentation of the case. It involves integrity, respect and civility. These involve and comprise decent human behaviour. They involve insight into one’s self. The dispute is not about you. The case is not about you. Independence (and the degree of abstraction within it) involves the recognition of the significance of the dispute to the lives of the humans involved. Every advocate (and every judge) should be conscious that what might seem a routine or banal case may represent the most significant and potentially catastrophic event in the lives of the people involved.
Legg and Bell go on to opine [21] that judgment has come to symbolise “a non-automatable collection of exclusively human qualities or capacity.” They argue that lawyers’ judgment will continue to be important and will become more valuable with the increased use of automation.
Far from replacing lawyers, GenAI is likely to become an important and valuable tool in the armoury of the good lawyer. The good lawyer will, however, still bring valuable input to the legal process. The duties owed by barristers – to the court, to the administration of justice, to their clients – are ingredients that will be difficult to replicate in automation.
Settling Documents
A further extract from Mata v Avianca in relation to the process by which the response filed by the Plaintiff was settled, is worth extracting: [22]
Although Mr. LoDuca signed the Affirmation in Opposition and filed it on ECF, he was not its author. It was researched and written by Mr. Schwartz. Mr. LoDuca reviewed the affirmation for style, stating, “I was basically looking for a flow, make sure there was nothing untoward or no large grammatical errors.” Before executing the Affirmation, Mr. LoDuca did not review any judicial authorities cited in his affirmation. There is no claim or evidence that he made any inquiry of Mr. Schwartz as to the nature and extent of his research or whether he had found contrary precedent. Mr. LoDuca simply relied on a belief that work produced by Mr. Schwartz, a colleague of more than twenty-five years, would be reliable. …
(references omitted)
This was, it may be concluded, a dangerous approach, irrespective of the difficulties that arose from the use of GenAI. There is no doubt that barristers routinely rely upon drafts done by others. It is in varying degrees an integral part of the working relationship between junior and senior counsel and between counsel and instructing solicitors.
There is no explicit provision in the 2011 Barristers Rule dealing with counsel settling documents drafted by other counsel or by solicitors. Note ought perhaps be taken of Rule 107 dealing with devilling. [23] It requires a barrister to “take full personal responsibility for the work”. That is not entirely clear, but it appears to call for more than merely accepting responsibility in a liability sense. It is likely to require that the barrister take responsibility in the sense of satisfying themselves that the work is soundly based and correct.
So it is with settling documents drafted by others. Courts require that the lawyers who prepare documents filed or to be used in court be identified. [24] That is no idle thing. It signals the significance of the obligation of the lawyer who prepares or settles such documents to ensure their integrity.
Just as it was never much of a response for counsel to say that they relied upon the fact that a text (or another judgment) cited certain authorities for a proposition if they had not been checked, so it will not be a proper response to say that the cited authorities were produced by a GenAI process.
Beyond ChatGPT
From the time that GenAI became available, it seemed inevitable that major commercial operators would bring proprietary models to market.
That has become reality, perhaps sooner than expected. Lexis+AI is already available as a US-based platform and can be previewed in Australia, [25] with full Australian release expected later in 2024. Thomson Reuters is reportedly releasing its platform some time later in 2024. There will no doubt be others.
These legal publisher GenAI platforms are anticipated to overcome one of the significant shortcomings of ChatGPT – namely the lack of access to published databases. Of course, that will not necessarily eliminate the problem that an authority referred to in a published decision may not, on examination, stand for the proposition that the judicial officer cites it for. One would hope that such a problem would be rare indeed, however.
Furthermore, they are expected to be “hallucination free”, or at least that is what is claimed.
Even if those platforms deliver on those expectations and thus prove more reliable, that will not eliminate the necessity of barristers checking the drafts that are produced and satisfying themselves of the veracity of the submission or opinion produced.
There are other models as well. Microsoft has developed Copilot, which many will have seen is available with – among other applications – Microsoft Edge. Although it is not a law specific tool, it is capable of generating responses in much the same way as other GenAI models.
As an example, in response to the question: what is the obligation of a Queensland barrister when conducting legal research? – Copilot gave the response set out in the Annexure hereto. That may be thought to be a reasonably sensible response, but it can also be seen that the response is cast in quite general terms. If a barrister were writing an opinion or submission on that topic, that answer may be of little utility. The utility may improve upon interrogation of the model, of course, but such an experiment is beyond the scope of this paper.
This paper does not attempt to identify all available models or platforms, nor to try to argue the virtues of one over others. All of these models are developing, and quickly. ChatGPT itself is at its fourth or fifth version in a relatively short space of time. Like all technology, it can be expected to improve and hopefully eliminate deficiencies as they emerge.
Implications for the Bar
We live in exciting times, even if not cursed to live in “interesting times”. Developments in the field of GenAI are fast-moving and significant. [26]
But do they threaten the Bar? Arguably, they do not.
The development of GenAI is likely to have an impact on practice at the Bar – just as many other technological developments have.
Many who have been at the Bar for a long time will lament the transformation that various developments in technology have meant for practice – the 24-hour, 7-day-a-week availability, the fast turnaround times required, the compressed time within which to give mature consideration to matters. There have been undoubted benefits – the extraordinary availability of legal resources at one’s fingertips for example. But that availability has brought with it an ever-expanding demand for more and more research over multiple jurisdictions.
Through all of these developments – as well as substantial changes in legislation and other areas affecting practice – the Bar has survived and continues to fill a crucial role in the administration of justice and the provision of legal services. Things have changed, but barristers have adapted.
So it will be with GenAI. It may have some effect on the way barristers do their work. For example, it may serve as the starting point for a draft submission. That may make the drafting a more efficient and cost-effective process. But it is unlikely to mean that the role of counsel will diminish.
Some might fear that such developments may signal the end of junior counsel producing drafts for settling. That is doubtful.
For the same reason that a good barrister would not just slavishly cut and paste from a previous submission or opinion without critical review, so it is unlikely that good barristers would rely without proper consideration upon drafts produced by GenAI.
To suggest that a good barrister is no more than a person who can research well and can write well is as invalid as suggesting that a good barrister is no more than a good orator. It ignores the critical aspects of judgment, professionalism and duty already discussed. Those obligations continue. [27]
It is not, of course, possible to accurately predict what impacts GenAI will have on practice at the Bar in the future. It is doubtful, however, that the legal and ethical obligations that are at the heart of practice at the Bar will diminish. It may even be that the significance of the professional judgment that barristers bring to a matter will increase with the proliferation of available information and its propensity to be inaccurate.
Barristers should, if not embrace GenAI, then at least welcome it in and discern how – consistent with the obligations that a barrister harbours – GenAI may enhance the way that the barrister is able to deliver the services required.
ANNEXURE
2011 Barristers Rule
A barrister must not engage in conduct which is:
a. dishonest or otherwise discreditable to a barrister;
b. prejudicial to the administration of justice; or
c. likely to diminish public confidence in the legal profession or the administration of justice or otherwise bring the legal profession into disrepute.
A barrister will not have breached Rule 16 by carrying out a specific task of research or chamber work given to the barrister by another barrister, or by giving such a task to another barrister, so long as:
a. the barrister who was briefed to do the chamber work takes full personal responsibility for the work;
b. the work is delivered under the name of the barrister who was briefed;
c. the arrangement between the barristers does not go beyond an ordinary devilling or reading arrangement and in particular does not involve any standing retainer or employment terms; and
d. the arrangement between the barristers does not provide and is not intended to enable the barrister giving the task to make a profit from the other barrister’s work, over and above reasonable remuneration for supervision of and responsibility for the other barrister’s work.
Copilot Response
Question:
What is the obligation of a Queensland barrister when conducting legal research?
Copilot:
In Queensland, barristers have specific obligations and standards of conduct when conducting legal research. These rules are outlined in the Barristers’ Conduct Rules established by the Bar Association of Queensland. Let’s delve into the key aspects:
Independence and Professional Standing:
Barristers must maintain complete independence in their conduct and professional standing as sole practitioners.
They acknowledge a public obligation based on the paramount need for access to justice and must act for any client within their field of practice.
General Principles of Professional Conduct: Barristers must adhere to the following principles:
Paramount Duty to the Administration of Justice: Their primary duty is to uphold the administration of justice.
High Standards of Professional Conduct: Barristers must maintain integrity and ethical behavior.
Honesty, Fairness, Skill, and Diligence: They must act honestly, fairly, skillfully, and diligently.
Duties to Courts, Clients, and Colleagues: Barristers owe duties to the courts, clients, and fellow legal practitioners.
Independent Forensic Judgment: They should exercise independent judgment for the proper administration of justice, even if it conflicts with client desires.
Advocacy and Representation: The provision of advocates for those needing legal representation is better secured if barristers:
Accept briefs regardless of personal beliefs.
Refuse briefs only on proper professional grounds.
Compete as specialist advocates with each other and other legal practitioners.
Interpretation and Application:
These rules should be construed to promote the objects and principles expressed.
General provisions should not be read narrowly due to specific or illustrative provisions.
Remember, these rules apply throughout Australia to all barristers, ensuring ethical conduct and the highest standards of legal practice.
[1] Cazzaniga and others. 2024. “Gen-AI: Artificial Intelligence and the Future of Work.” IMF Staff Discussion Note SDN2024/001, International Monetary Fund, Washington, DC, USA
[2] Andrew Goddard KC and Laura Hussey, AI: May the force be with you, Counsel Magazine, 15 January 2024; see also Graham Denholm, Generative AI: Change, opportunity, risk?, Counsel Magazine, 11 September 2023
[3] Cindy Cameronne, Lawyers who don’t embrace AI ‘will be left behind’, experts say, Lawyerly, 12 December 2023
[4] Jeff Neal, The Legal Profession in 2024: AI. Interview with Professor David Wilkins, Director of the Center on The Legal Profession at Harvard Law School, Harvard Law Today, 14 February 2024
[5] Michael Legg and Felicity Bell, Artificial Intelligence and The Legal Profession: Becoming the AI-Enhanced Lawyer, (2019) 38(2) University of Tasmania Law Review, 34-59; [2020] UNSWLRS 63
[6] There are other terms that may be encountered by readers, such as LLMs (large language models that are AI models trained on very large language-based data sets), ML (machine learning, which describes computer systems that can learn and adapt without following defined instructions, by using algorithms to analyse data)
[19] Michael Legg and Felicity Bell, Artificial Intelligence and The Legal Profession: Becoming the AI-Enhanced Lawyer, (2019) 38(2) University of Tasmania Law Review, 34-59; [2020] UNSWLRS 63, 54-5
[20] James Allsop, The Future of the Independent Bar in Australia, Australian Bar Association and NSW Bar Association Biennial Conference, Sydney, 17 November 2018; 62.pdf (austlii.edu.au)
[21] Michael Legg and Felicity Bell, Artificial Intelligence and The Legal Profession: Becoming the AI-Enhanced Lawyer, (2019) 38(2) University of Tasmania Law Review, 34-59; [2020] UNSWLRS 63, 55
[24] Uniform Civil Procedure Rules, Rule 146(1)(g); Federal Court Rules, Rule 2.16(1)(b), Rule 16.01; Rule 44.06(d)(i) of the High Court Rules requires that written submissions be signed by “the senior legal practitioner who is to present the case in court”
[26] An excellent CPD was delivered by Stephen Foley and Professor John Swinson, Riding the AI Rollercoaster: Safely and Ethically Harnessing the Power of AI in Your Legal Practice, 26 September 2023. The recording and slides are available on the Bar Association website
[27] Daniel Kiley and Others, Artificial intelligence & large language models, A practical insight for today’s barrister, Victoria Bar News, (2023-24) Issue 174, 45, 47