
Artificial Intelligence and the Law


Artificial intelligence (AI) is expanding its reach into almost every industry and into our everyday lives. Massey Law Group’s Starlett Massey, Attorney and Founding Shareholder, recently spoke with Masheika Allgood, Founder of AllAI Consulting, about AI: how it works, its impact, and the legal questions and concerns that arise from its use.

 

You have been studying and researching the intersection of artificial intelligence and the law for several years. How would you describe your greatest concern regarding the role AI plays in the delivery of justice in this country? 

My greatest concern is that it plays a role at all. What gets lost in the glitz and glamour of AI technology is that, despite its name, artificial intelligence isn’t intelligent at all, not by any recognized definition of the word. And we currently have no idea how to make the leap from a system that calculates really fast and recognizes patterns to one that can understand concepts of fairness, justice, or equality.

There’s also the assumption that the issues we encounter with AI today, namely bias, are not that serious or that the technical community will overcome them in the very near future. But bias — against black people, women, the disabled, basically any classifiable minority group — is inherent in AI. Bias is a fundamental part of AI architecture and is introduced at the core of every AI model.

Despite our smartest people putting forth their best efforts, we have no good ideas for addressing this issue. And despite that fact, we’re pushing forward with AI across our society, codifying this bias into our legal decision-making systems and corrupting the only pillar of society specifically created to protect the populace from biased operators.

 

You have stated that with the continued implementation of AI, the historically disenfranchised will stay so. But you have also stated there will be new classes of disenfranchised. Can you explain?

The foundation of AI bias is the training data. We train AI on sets of data pulled from several sources, and the data set(s) used to train an AI model form its entire understanding of the world for all time. If a group is not present in the training data, it does not, and never will, truly exist as a complete entity in the model.

Any new data the model obtains during its use has to be calculated in terms of the foundational relationships that were learned from the training data. So, new groups will always be an afterthought in the calculations. This can have dire consequences that the law doesn’t currently protect against or provide redress for.

At the moment, the data sets we use to train algorithms contain very little, if any, information about the majority of the US population. For example, the majority of health AI models are trained on data from three states: California, Massachusetts, and New York. But many health factors and risks are geographic, which means the AI systems built on these models disenfranchise the majority of Americans.

The AI models will not perform as accurately in Florida, or in any other state outside that training set, because the algorithms have no knowledge of the geographic health factors or risks people face in those states and cannot consider them in their calculations. Currently, the law provides no redress for people who are discriminated against based on their state of residence.
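To make the mechanism concrete, here is a minimal sketch (not from the interview): a toy scikit-learn classifier is trained only on synthetic records from three well-represented "training states" and then evaluated on records from a fourth state whose risk pattern it never saw. Every feature, threshold, and label here is invented purely for illustration; no real health data is involved.

```python
# Toy illustration only: synthetic data, made-up features, no real health records.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_records(n, geographic_factor_matters):
    """Generate synthetic patient records: [age, humidity] -> high-risk label."""
    age = rng.uniform(20, 80, n)
    humidity = rng.uniform(0.0, 1.0, n)
    if geographic_factor_matters:
        # Hypothetical "new" state: a geographic factor drives the risk.
        risk = (humidity > 0.6).astype(int)
    else:
        # The three "training states": age alone drives the risk.
        risk = (age > 55).astype(int)
    return np.column_stack([age, humidity]), risk

# Training data drawn only from the three well-represented states.
X_train, y_train = make_records(3000, geographic_factor_matters=False)

# Deployment data from a state the model has never seen.
X_new, y_new = make_records(1000, geographic_factor_matters=True)

model = LogisticRegression().fit(X_train, y_train)

print("Accuracy on the represented states:",
      round(accuracy_score(y_train, model.predict(X_train)), 2))
print("Accuracy on the unrepresented state:",
      round(accuracy_score(y_new, model.predict(X_new)), 2))
# The second number collapses toward chance: the model can only reuse the
# relationships it learned from its training data, as described above.
```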

Do you ever wonder why auto-translation on YouTube or other streaming platforms is great for some videos and horrible for others? It’s because the Natural Language Processing (NLP) algorithms that underlie these systems are trained on formal language. But people have accents, we don’t all learn the same “formal” English, and even those of us who were trained to speak formally don’t do so in all settings.

So, an AI model trained on formal language will disenfranchise those who don’t speak that way. This is what we see with Alexa discriminating against people with accents. The situation becomes even more concerning when you consider the push for AI models to be used to automate critical social systems, like 911 or court transcription. Currently, the law provides no redress for people who are discriminated against based on how they speak.

 

Are there forms of AI that can be used to promote inclusion by providing accessibility to a legal system that otherwise has very high-cost barriers? 

That is the marketing stance but, unfortunately, not the reality. AI systems are massively expensive in terms of the actual cost and time to build, train, and deploy them, the training employees need to use them, and the maintenance they require once deployed. That is why the majority of AI research is concentrated amongst a few large companies, their partner educational institutions, and governments. No one else can afford to keep up. Given the limited funding allocated to state court systems, the full-scale use of AI in the legal system is a pipe dream at best.

But technology is bigger than AI, and non-AI solutions are often more effective. Why use a grenade when you can use a chisel? AI is a massive shiny object that has distracted us from realistic technological advances that can actually help us achieve our accessibility goals. The access-to-justice conversation should focus on first determining the specific areas where costs are a barrier and then addressing those areas with targeted, attainable solutions.

For example, getting initial Chapter 7 bankruptcy forms filed is a complex process, and legal assistance is expensive. The lawyers behind Upsolve identified this area of need and addressed it by creating software to generate the forms for attorney review. Immigration is an expensive area of law, and many people delay or neglect to apply for citizenship because they cannot afford an immigration attorney. Probono.net identified this area of need and addressed it by creating a website that guides people through the citizenship application (think TurboTax) and lets them know if they will require further expert assistance. Both technological solutions increased access to the legal system, and neither required the use of AI.

 

How far away do you think we are from this technology?

 Much of the work in utilizing technology to improve access to justice is focused on improving the legal system itself. The technology for these improvements already exists and is being used.

Making the legal system more efficient in how it collects and transmits data, helping intake officers work more efficiently through electronic intake forms, making courts more efficient through online filing, making data more available through electronic discovery, making historical sentencing data easily accessible to prosecutors and defense counsel, and making courts more available through online conferencing technology: these are all demonstrably beneficial efforts that use existing technology.

The gap occurs when we try to extend our efforts to the citizenry. Any effort to make systems accessible through technology relies on people having access to the technology needed to reach those systems. Many of us have recently discovered that the digital divide is a very real issue across the country, and Florida is no exception. Many of us have also been directly affected by the legal and technical issues with moving some legal proceedings online.

Technological advances are limited by the technological access of the citizenry. If people don’t have access to the internet, high speed or otherwise, none of this works. The digital divide will widen instead of narrow, and access will become more limited. The legal profession has a duty to move at or behind the pace of technology to ensure that we do not leave citizens behind in our quest to grant them greater access.

 

What do you want the legal community to understand and look for with regard to AI?

 I want us to understand that we have power. The Loomis decision gave us the ability to use predictive algorithms in determining criminal sentences, but it didn’t mandate their use. There seems to be a belief that everything that is allowed in the law is required.

The legal community is not required to use any technology that does not meet the standards of justice, fairness, and equality that we hold ourselves and our profession to. And while the promise of AI is that it will meet those standards at some point in time, for the most part, it doesn’t meet those standards now. So, what’s the rush? 

There are two categories of AI that we in the legal system need to be aware of. One is the AI that takes advantage of superior compute power to accelerate data insights and make systems more efficient. This category of AI holds great promise for the law and can be of benefit to us now.

The other category of AI is focused on uncovering historical patterns in the data and using those patterns as “predictors” of future human behavior. This category of AI poses great danger for the law as it is steeped in historical biases and dodgy behavioral science concepts. I’d like the legal community to understand that we have the power to decide if we want both, either, or neither of these classes of AI in the legal system. And that we can decide to take, leave, or modify any application in either class should we find it in the interest of justice and the citizenry to do so.

Basically, I’d like us to remember that lawyers set the rules for the legal system, to look for groups who are having discussions about these issues, and to get involved.

 

How can attorneys and judges know when AI has played a role in the facts of a case?  What legal training should the state bar associations be implementing?

One of the main difficulties with AI systems is transparency, both internal and external. AI models don’t provide explanations of their decision-making calculations, and people often don’t know that AI was even involved in the decision-making process.

For example, AI is exacerbating housing bias, but potential tenants are often unaware that an AI system rejected their application. So, how would they know that they should sue, and whom they would sue, for housing discrimination? If a patient dies because their blood oxygen level fell too low, but the AI used to guide hospital decisions was given biased readings from the pulse oximeter, is there a cause of action for malpractice? And, if so, against whom?

With algorithms becoming increasingly pervasive in every aspect of our lives, there’s no real way to provide training for every situation in which AI may play a role in the facts of a case. So, the focus has been on making lawyers and members of the judiciary aware of what AI is and how it works. 

I recently attended The Athens Roundtable on AI and the Rule of Law, where there was a breakout session titled AI and Judicial Education. In the session, they discussed UNESCO’s effort to build an online course that gives members of the legal system a basic foundation in AI technology, an understanding of its place in and effects on society, and a look at the AI tools that have been created within the legal domain. The course is still being drafted, but the proposed syllabus is strong and the course creators are very knowledgeable. I think that course can serve as a model for bar associations going forward.

The ABA recently published an AI-focused volume of the Judges’ Journal that addresses several of the topics we discussed today. But I think the most effective form of education will be CLEs. The ABA has some promising offerings, and the Florida Bar has two courses offered through the Tax and Small Business Sections. I believe the most effective approach would be to create foundational AI CLE courses for the Florida Bar, possibly using the UNESCO effort as a template.

 

What do you want the general public to understand about artificial intelligence and equality? 

Equality is not a given in AI. It’s not a concept that the programs understand, and it’s not a principle that their designers fully appreciate. If we want AI systems to be unbiased, to treat us fairly in society and under the law, then we will have to push for it. Hard.

The technical work in this area is difficult, and the companies and educational institutions with the power to do deep research in these areas have very little incentive to do so. If you’ve seen The Social Dilemma or Coded Bias, then you’re actually at the forefront of understanding the damage AI is doing to our society in the absence of critical conversation, regulation, and legal liability.

The conversations about these issues are all new. You have not been left behind; there is plenty of time for you to make a difference. Finally, don’t be discouraged. These issues are not too big for any of us, and they will take all of us to solve. You too can help prevent this forest fire :).

Join a conversation. Get involved.

 

The above is intended to inform firm clients and friends about recent developments in the law, including analysis of statutes and new case decisions. This update should not be construed as legal advice or a legal opinion, and readers should not act upon the information contained herein without seeking the advice of legal counsel.
