Written by David Lundgren, University of Toronto student researcher; originally published on Slaw, Canada’s online legal magazine.

In Canada, self-represented litigants (SRLs) are generally disadvantaged from the outset of their case and throughout the legal process. Litigants are often driven to self-representation by financial constraints or a lack of available resources. Cultural and linguistic barriers, mistrust of the justice system, and negative socioeconomic factors also influence the decision to self-represent. These pressures weigh on SRLs’ experiences and persist throughout their cases. In court, self-represented litigants tend to fare worse; they are misperceived as vexatious and misinformed, or simply made to feel they do not belong in the courtroom. Their shared experiences of dealing with the legal system tell a story of desperation, betrayal, and need.

Despite this, policymakers are doing little to tailor solutions to the specific issues plaguing SRLs. At the root of this crisis is inadequate data tracking of SRLs, which prevents decision makers from crafting evidence- and data-based policies to address the self-representation problem. This neglect leads to misperceptions of who SRLs are and the challenges they face.

However, there is still hope to address these issues. Innovative technologies and the rise of Artificial Intelligence (AI) enable us to unlock more valuable and relevant insights from conventional data. By extending the boundaries of our human capabilities, these tools let us apply empirical analyses that uncover more than meets the eye. As the world transforms into an increasingly data-driven and digital space, self-represented litigants cannot be left behind.

AI is fundamentally different from the inferential tools previously used to characterize the self-representation problem. To reveal more in-depth insights, we need more in-depth data. It is time to examine how AI can be used to investigate the self-representation issue in Canada and how big-data learning overcomes the challenges that impede conventional statistical methods.

How can AI address the self-representation problem?

AI thrives on data. Whether it is numbers, text, images, or even audio, AI tools are remarkably good at extracting hidden information that humans fail to register. However, the field of self-representation is characterized by a lack of data, specifically quantitative data. Quantitative data refers to countable or measurable numerical values and is often what most people think of when they hear the word data. It is also what most conventional statistical tools rely upon to extract information, because computers think in their own numerical language. Quantitative data is, more or less, a computer’s bread and butter.
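To make this concrete, here is a minimal sketch, in Python with scikit-learn, of the conversion step this implies: free-form text must first be turned into numbers before any model can work with it. The sample responses below are invented for illustration.

```python
# Text must become numbers before a computer can analyse it.
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical free-text survey responses from SRLs (invented).
responses = [
    "I could not afford a lawyer and felt lost in court.",
    "The court forms were confusing and nobody would help me.",
    "I represented myself because legal aid turned me down.",
]

# TF-IDF turns each response into a weighted word-count vector.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(responses)

# Each row is now a numeric vector a statistical model can consume.
print(matrix.shape)                        # (3, number_of_distinct_terms)
print(vectorizer.get_feature_names_out())  # the terms behind the columns
```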

However, the lack of quantitative data on SRLs makes it challenging to understand the full scope of the problem and to identify trends and patterns that could inform policy decisions. While some quantitative data is available, it is often limited and does not provide a comprehensive picture of the experiences of SRLs. Because conventional tools rely on access to large amounts of quantitative data, this scarcity restricts traditional analyses’ inferential and predictive power. But AI is far from traditional.

AI surpasses traditional statistical methods through its ‘intelligence.’ Indeed, AI pushes past the boundaries of quantitative analysis by understanding qualitative sources as well, such as text, images, or audio. Fortunately, the wealth of qualitative data available, such as surveys, interviews, blog posts, and social media discussions, can overcome the limitations of sparse quantifiable data and provide valuable insight into the experiences of SRLs.
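As an illustration of what “understanding” qualitative sources can look like in practice, the following sketch applies an off-the-shelf sentiment classifier from the Hugging Face transformers library to invented survey-style responses; it is one possible approach, not a prescribed method.

```python
# Off-the-shelf sentiment analysis over qualitative responses.
from transformers import pipeline

# Downloads a default sentiment model on first use.
classifier = pipeline("sentiment-analysis")

# Invented survey-style responses, for illustration only.
responses = [
    "The duty counsel was kind and explained every step to me.",
    "I was treated like a nuisance from the moment I walked in.",
]

for text, result in zip(responses, classifier(responses)):
    # Each result looks like {"label": "NEGATIVE", "score": 0.99}.
    print(f'{result["label"]:>8}  {result["score"]:.3f}  {text}')
```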

This is where AI can play a significant role in addressing the issue. AI can analyse this qualitative data and provide a more nuanced and comprehensive understanding of who self-represented litigants are and the issues they face. This could include identifying common themes in their experiences, such as the challenges of navigating the legal system without legal representation, the impact on their mental health and well-being, and disparities in access to justice based on socioeconomic status.
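One common way to surface such themes automatically is topic modelling. The sketch below uses Latent Dirichlet Allocation (LDA) from scikit-learn on a fabricated micro-corpus; the corpus, the number of topics, and the model choice are all illustrative assumptions.

```python
# Surfacing recurring themes with LDA topic modelling.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# A fabricated micro-corpus of SRL experiences, for illustration.
documents = [
    "The forms were confusing and the filing fees kept adding up.",
    "Representing myself was constant stress and hurt my mental health.",
    "Legal aid rejected me, so I had no choice but to go it alone.",
    "Court staff could not give advice, only point me to a website.",
]

# Word counts, then a two-topic model (the topic count is a guess).
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(documents)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Show the five highest-weighted words in each discovered topic.
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"Topic {i}: {', '.join(top)}")
```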

Furthermore, AI can identify the specific needs of self-represented litigants, such as the need for legal education and resources, and for greater support and guidance from the courts. This could inform policy decisions and initiatives to improve access to justice.

Beyond informing policymakers and supporting a data-first approach, AI can be deployed directly to SRLs. Virtual assistants and chatbots, such as ChatGPT, could give self-represented litigants answers to common legal questions, help them draft documents, and provide tips and guidance on courtroom formalities. This could mitigate the undersupply of legal services, geographic and linguistic barriers, and the costs of existing legal options.
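As a hypothetical example of what such an assistant might look like under the hood, the sketch below wraps a hosted large language model (here, OpenAI’s chat API) in a system prompt that restricts it to legal information rather than legal advice. The model name, prompt, and question are assumptions for illustration, not a deployed tool.

```python
# A legal-information assistant on top of a hosted chat model.
# Requires the openai package (>= 1.0) and an OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

# The system prompt confines the bot to legal information, never
# legal advice. Model, prompt, and question are illustrative only.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": (
                "You provide general legal information for "
                "self-represented litigants in Canada. You do not give "
                "legal advice; recommend consulting a lawyer where needed."
            ),
        },
        {
            "role": "user",
            "content": "What should I bring to a small claims hearing?",
        },
    ],
)
print(response.choices[0].message.content)
```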

While AI has the potential to benefit SRL advocates and research groups as well as self-represented litigants themselves, it is of utmost importance to account for ethical considerations. AI is primarily intended to augment human intelligence, not replace it. While it is a fantastic way to extract patterns from large-scale and unconventional data, these patterns also need to be interpreted and checked by people. For instance, there is a risk of bias in the data used to train AI algorithms, which could lead to inaccurate or unfair conclusions. There is also a risk of privacy breaches if the AI analyses personal information without proper safeguards. In the case of direct use by self-represented litigants specifically, it is crucial to note that AI can provide legal information but cannot offer legal advice. As such, it is essential to approach AI in this context with caution and to prioritize ethical considerations throughout the process.

How can AI benefit the NSRLP?

The National Self-Represented Litigants Project (NSRLP) is at the forefront of organizations advocating for a data-oriented approach to address the self-representation problem. It is, therefore, no surprise that they are looking toward AI to uncover more information on how to help SRLs. Over the past nine months, I have been fortunate to work with the NSRLP to re-examine their annual litigant intake data using AI. My full final report on this work can be found here.

“Inaccessible Justice” is a qualitative and quantitative analysis of the demographics, socioeconomics, and experiences of self-represented litigants in Canada. The report uses AI to extract the sentiments behind litigants’ self-reported experiences and to correlate positive or negative experiences with other characteristics, such as income, age, gender, and ethnicity. It draws on various AI models and tools to analyse SRL data comprehensively, and it furthers the NSRLP’s mission to adopt an evidence-based framework.
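For readers curious about the mechanics, the sketch below shows the general shape of such an analysis, pairing a model-produced sentiment score with demographic fields using pandas. The data frame is fabricated for illustration; it is not the NSRLP’s data, nor necessarily the report’s exact method.

```python
# Relating a sentiment score to demographic fields.
import pandas as pd

# Fabricated example records: a model-produced sentiment score
# in [-1, 1] alongside self-reported demographics.
df = pd.DataFrame({
    "sentiment": [0.82, -0.67, -0.31, 0.15, -0.90, 0.40],
    "income_band": ["low", "low", "mid", "high", "low", "mid"],
    "age": [34, 52, 41, 29, 60, 45],
})

# First pass: average sentiment by income band.
print(df.groupby("income_band")["sentiment"].mean())

# Linear correlation between age and sentiment.
print(df["sentiment"].corr(df["age"]))
```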

The report also includes an introduction to AI written in plain, easy-to-understand language for anyone interested in learning what AI is, how it works, and where it applies. My goal in writing the report was not only to uncover new and hidden conclusions about the self-representation problem but also to introduce how AI can be used in this field. As such, my report suggests avenues for future research and highlights how new AI and digital approaches can support the NSRLP in raising awareness of the needs and experiences of self-represented litigants.

In short, the inadequate data tracking of self-represented litigants in Canada is a significant issue that leads to incomplete perceptions of who they are and the challenges they face. However, by using AI to analyse qualitative data, we can gain a more realistic understanding of their experiences and needs, which can inform policy decisions and initiatives to improve access to justice for all.

7 thoughts on “Technology Is Changing, and So Should Our Approach to the Self-Representation Problem: Artificial Intelligence for SRLs”

  1. Herman Petersen says:

    The power pyramid vs. the people pyramid (upside down) must be addressed to maintain some semblance of balance or fairness. It serves no purpose to have lawyers using summary motions (ad nauseam) to defeat the process. Lawyers will parachute in a lawyer who happens to be a son of the judge. Look to basic operations first, and rectify the system or power imbalance.

  2. Peter Austin says:

    I see a day when evidence will be routinely entered by both parties for sorting and reconciling of the facts by a system of analytical AI, such that when two sides differ drastically, the AI will sort fact from fiction and request or demonstrate why and where each side is lacking in evidence or logic.

    It’s been my experience that lawyers routinely mislead (lie) in Supreme Courts on behalf of clients. AI can mitigate the perjury before a judge is tasked with sorting out the lies. This will help cleanse the system and ease our dependence on judicial (and far too often prejudicial) judgments.

    The legal industry has not done well enough for our society after tens of decades of independence and self-regulation. It’s time now for AI to begin to factualize data to ensure more just outcomes. If AI says it is fact, then it becomes hard(er) for judges to go against that evidence due to personal preference or bias.

  3. Konesavarathan Kovarthanan says:

    A big round of applause to the student author for writing this. The author is advocating for innovation in the justice system, which is great! The question remains how willing judges and lawyers are to adopt such new innovative ideas. They seem to behave as if they do not need to know anything, yet the opposing party or self-represented litigant, particularly the rights claimants, have to educate them through their resistance to persuasion. They are beyond any evaluation or research on the effectiveness of their performance. They will certainly be resistant to any change.

  4. Chris Budgell says:

    To this point at least, to most people AI is just a vague promise or a threat, or both.

    The access to justice problems pre-date the Internet. They pre-date the desktop computer. Technological advances so far haven’t really done anything to address the challenges SRLs face. I’m looking at something else: the agenda of the legal establishment. Here are two pieces of evidence. First, this programme – https://www.cbapd.org/details_en.aspx?id=na_na23jus01a – which was at least publicly accessible while the event was still being put together, though clearly the public wasn’t invited to the table. I’ve just sent another email to four of the people pictured there. And second, the programme, yet to be shared with the public, of the 23rd Cambridge Lectures, announced over six months ago with this notice – http://canadian-institute.com/english/speakers-e.html . With some persistence I finally received a promise that the programme will be published some time after the event is over. I note that the public has never had a seat at that table, though it appears to me that public money is financing it (at least to a large degree), and the Canadian media has never had a word to say about these gatherings.

    1. shawn penney says:

      Hey Chris Budgell,

      Do you think we can put together a study group? It would be nice to have people getting together and showing court results.

  5. Chris Budgell says:

    Further to my comment above about the Cambridge Lectures, the first of which was held in 1979: contrary to the reply I received to my inquiry about the current event, now underway, the programme for the July 2019 Lectures was already online in October 2018. I’ve found an email I sent on October 11, 2018 to one of the participants, noting the misspelling of his name and including this URL – http://canadian-institute.com/english/lectures-2019e.html . I see no reasonable explanation for the fact that the current programme has still not been published, and I note that it appears to me that a considerable amount of public money is being spent (just how much, I hope to find out at some point).

  6. . says:

    What security protocols has the justice system designed and implemented to secure files and ensure they are not altered electronically, that judicial proceedings are not altered via AI, and that cases, and thus caselaw, are authentic and unaltered? That email correspondence has not been intercepted? That doxxing and so forth has not occurred? As one example of the dangers of AI, let’s turn to the SAG strike and the issues being disputed therein: actors’ identities being cloned and reused in future productions, so that the real-life actor is not required because their image has been collected and stored by studios for future use. How does one ensure the judge and parties on one’s computer screen have not been altered by third parties? That a USB key sent with disclosure does not contain spyware to be implanted on the opposing party’s laptop? What happens when a large ongoing file of 5 years…goes missing from the courthouse? And in other cases where access to files is simply impossible because the courthouse can’t seem to find them? What if one’s search for caselaw is affected by AI so that a party is being directed, without their knowledge, to specific caselaw that is not favourable when better caselaw actually exists? Who is minding the security of AI and court files and proceedings?
