Written by Jennifer Leitch, NSRLP Executive Director; originally published on Slaw, Canada’s online legal magazine.

A recent CBC article from British Columbia reported that a self-represented party used Microsoft Copilot to assist with legal research: the artificial intelligence (AI) program generated ten cases, nine of which were hallucinated. The hallucinated cases were ‘caught’ during a proceeding at the Civil Resolution Tribunal, but this incident and the possibility of others like it raise challenging questions for access to justice. It can be assumed that public-facing AI will continue to gain ground as a means of providing legal information and assistance, and as that happens, there will be more examples of hallucinated cases making their way into legal materials and submissions. We have already seen examples of lawyers being caught out for relying on AI-fabricated cases, so it is not surprising to see it happening among self-represented litigants (SRLs) as well. However, the potential for publicly available AI applications to generate errors of all kinds will likely exacerbate the inequalities that already exist between those who can afford legal services and those who cannot. For example, the more sophisticated (and accurate) closed-source applications available to law firms and large organizations are not publicly available, nor are they accessible to individuals or the organizations that serve the public’s access to justice needs. The result is a growing gap between the legal tools available to paying clients and those available to individuals compelled to represent themselves.

Interestingly, the very early data that the National Self-Represented Litigants Project (NSRLP) has gathered to date from SRLs who complete our intake survey suggests that they are generally cautious about using AI applications like ChatGPT, given their current reputation for inaccuracy. At the same time, there is no denying that the speed and accessibility of AI programs will likely mean they are accessed and deployed by more and more SRLs. Programs that help SRLs process large amounts of legal information in a very short time, as well as programs that generate court materials in the ‘voice’ and ‘style’ of court documents, are likely to be too tempting to pass up. In both capacities, AI appears to offer an opportunity to level the playing field, which SRLs are understandably desperate to do (particularly in cases where the opposing party is represented). Consequently, despite worries about inaccuracies and hallucinations, we will likely continue to see increases in the use of AI as an access to justice tool.

There is also a growing market for digital tools that deploy different forms of AI and claim to offer support to individuals navigating legal matters without representation. Amy Salyzyn’s recent report, “Direct-to-Public Digital Legal Tools in Canada – A 2023 Update,” lists 118 different digital programs (both mobile and web-based) geared toward assisting individuals with various legal problems. While not all of these make use of AI, there is no question that AI is, and will remain, a focus within access to justice in the future.

In light of all this, how do we move forward to ensure that litigants can access reliable tools that effectively assist them when they are representing themselves, and how do justice system stakeholders prepare for the onslaught of cases in which SRLs are using digital tools of varying degrees of accuracy? The answers to these questions, like much else in the access to justice sphere, require a multi-faceted approach: regulation, public awareness and information, oversight, and ultimately partnerships between open-source AI platforms and developers, access to justice entities, and justice system users, aimed at creating reliable, publicly accessible applications that assist individuals with their legal issues.

While law societies are creating sandboxes to evaluate the use of digital tools in the legal services industry, this may not be sufficient to keep pace with the rapidly growing market for legal service tools. It necessarily raises a larger question about sharing the legal services monopoly through sophisticated technologies and digital tools that give litigants themselves quick answers to discrete legal questions, or more involved, multi-step direction on resolving a legal matter. This would require legal service providers to relinquish some control over legal service models that have failed to deliver access to justice to a vast number of individuals. Addressing this issue involves a broader series of questions and concerns than this column can adequately address, but the preliminary point is a pragmatic one: digital tools and applications will be used, and it is therefore imperative that they be reliable and accessible to the public. Moreover, from an ethical standpoint, if the legal profession cannot fully deliver access to justice for the public (as per its mandate to act in the public interest), then it ought to support, and perhaps facilitate, access to reliable tools and programs that will assist individuals with their everyday legal problems.

Additionally, and in keeping with a pragmatic approach to digital tools, if people are going to seek out and use tools that profess to assist them with their legal problems, public legal education ought to turn its mind to programming that equips individuals to assess and test the validity and reliability of a particular tool. To this end, the Canadian Institute for the Administration of Justice (CIAJ) and the NSRLP are developing public-facing webinars for SRLs that will address uses of AI in service of their legal problems. These educational sessions are meant to ensure that SRLs think carefully about how they use AI, and about what steps they may need to take to ensure that the assistance they receive is accurate and reliable.

Another necessary consideration involves the role of the courts, which are likely to play a crucial part in the ongoing AI discussion in the very near future. Recent cases have provided examples of individual judges declining to read SRLs’ submissions because they were AI-generated. While I acknowledge the challenges courts face in addressing the use of AI in legal proceedings, this seems problematic. In courts where disclosing the use of AI is not obligatory, represented parties who make use of certain tools (often behind paywalls) will be free to deploy AI without consequence, while unrepresented parties who use AI and disclose it could be left without submissions before the court. The Federal Court and the Nova Scotia Provincial Court are two examples of jurisdictions that now require litigants to expressly identify the use of AI in their submissions, and this appears fair to all parties. But the question that must follow is: what does a court do when an SRL confirms that ChatGPT (or another publicly accessible application) was used in drafting their submissions? Situations such as these will require courts to carefully consider policies that are fair to all parties, while also accounting for the fact that, in the face of a lack of access to justice (and with few other options), litigants will inevitably and increasingly use AI to help them prepare for court when representing themselves. Moreover, SRLs have limited access to reliable digital tools compared to legal service providers.

Finally, if we are to move forward cautiously with AI (and caution is underscored), then perhaps what must also be undertaken is a serious and sustained engagement between not-for-profit AI developers, access to justice organizations, and justice system users, toward the development and implementation of AI applications that can assist individuals with their legal issues and thereby improve access to justice. Such partnerships, grounded in academic and not-for-profit contexts, can work in tandem with legal regulators and the public to create applications that are reliable, accurate, and user-friendly. This would entail programming that serves the specific and immediate needs of justice system users, while also developing a collaborative community of technologists and access to justice organizations that shares knowledge and builds legal AI applications responsive to the needs both of those serving individuals seeking justice and of those individuals directly.

3 thoughts on “The Inevitability of AI in Court: What Does It Mean for Self-Represented Litigants?”

  1. Noel Semple says:

    This is very well said! I agree that the legal profession may be duty-bound to “facilitate access to reliable tools and programs that will assist individuals with their everyday legal problems,” including potentially AI. One possible mechanism for this would be CanLII, which is already funded by the law societies. Maybe the law societies — on behalf of the profession — should financially support CanLII in developing accurate, vetted AI-driven legal research services free to the public.

  2. Robert Giebelhaus says:

    Your organization seems to believe the justice system can self-correct. It cannot.

  3. Shortly after ChatGPT made its debut, I encountered a situation that underscored both the promise and the limitations of artificial intelligence in the context of legal research. Having already conducted comprehensive research using CanLII and compiled a set of relevant authorities, I turned to ChatGPT as a supplementary tool, hoping it might identify additional case law I had not previously uncovered.

    The platform returned several citations, including a few cases that were unfamiliar to me. However, my efforts to locate those particular cases through CanLII and Westlaw were unsuccessful. To verify their authenticity, I contacted the local law library and provided the case citations to a librarian, who agreed to investigate further and follow up.

    Approximately thirty minutes later, the librarian returned my call and inquired about the source of the citations. When I disclosed that they had been generated by ChatGPT, he informed me that he was unable to locate any such cases and outlined the legal databases and resources he had consulted in his search.

    Ironically, later that same week, I came across a news report concerning a junior lawyer who had cited AI-generated case law in court. The presiding judge addressed the issue but showed a degree of leniency, noting that the lawyer had only been in practice for five years. That incident resonated with me and reinforced the importance of verifying all research outputs, regardless of the source.

    Since that time, I have elected not to rely on artificial intelligence for ANY research purposes. Nonetheless, I continue to find it useful for organizing ideas, structuring arguments, and refining written communication. In my view, AI can be a powerful and effective tool when used judiciously and within its appropriate scope. However, like any tool, its utility is determined by the knowledge and discretion of the user.
