Algorithm aversion in the legal industry

People show aversion to algorithms now, but that attitude might be changing

In a recently released discussion draft, Cass Sunstein and Jared Gaffe explore the concept of “algorithm aversion,” defined as the tendency for people to “prefer human forecasters or decision-makers over algorithms even though [] algorithms generally outperform people in the general domain or in the specific task.” The authors attribute this phenomenon to a variety of factors, the most prominent being:

  • A desire for agency;
  • Moral or emotional qualms about judgment by algorithms;
  • A belief that certain experts have unique knowledge, unlikely to be held or used by algorithms;
  • Ignorance about why algorithms perform well; and
  • A larger negative reaction to algorithmic error than to human error.

In reading the paper (which you should check out, if for no other reason than its thorough bibliography of research on algorithm aversion), we thought it presented a great opportunity to draw some concrete lessons for lawyers using AI, legal-tech startups, and policymakers regulating AI systems.

A desire for agency

A primary factor shaping how people think about algorithmic decision-making is the desire for agency. That desire can manifest for a variety of reasons, from the belief that choosing has intrinsic value to people wanting to be solely responsible for the outcome of their decisions, whether good or bad. At a high level, it is motivated by the fact that “the act of choosing fulfills a desire for sovereignty over one’s own life and generates utility to that person independent of the outcome of their decision.” The desire for agency can create both algorithm aversion and algorithm attraction, depending on whether someone wants to be responsible for a given decision or not.

This calculus might come into play in choosing a vacation, for example. While an algorithm might be able to chew through more information and plan a “better” trip, some people will want to do it themselves because, for them, the fun is in the planning, while others may simply believe that they know themselves “best” and can therefore plan better than any algorithm.

In the legal realm, lawyers and their clients make countless choices every day, whether they are litigating a case or putting together a deal. In building AI products to help with these tasks, companies will need to understand the extent to which decision-makers want to make those choices themselves. Some aspects of a case, say, lost-wages calculations in a personal injury matter, might be rote enough that litigants are happy to pass them along to software, while other questions, like whether a line of questioning will appear too aggressive to the jury, are ones lawyers may still want maximum control over. Understanding when and where lawyers and their clients want agency will be vital to building high-leverage, beloved AI-powered legal products.
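To make the “rote” end of that spectrum concrete, here is a minimal sketch of the sort of calculation a lawyer might happily delegate. The formula and figures are hypothetical and deliberately simplified; real lost-wages analyses account for benefits, taxes, mitigation, and future earnings, often with expert input.

    # Hypothetical, simplified estimate of past lost wages: rate x hours x weeks.
    # Real analyses also cover benefits, taxes, mitigation, and future earnings.
    def lost_wages(hourly_rate: float, hours_per_week: float, weeks_missed: float) -> float:
        return hourly_rate * hours_per_week * weeks_missed

    # Hypothetical plaintiff: $40/hour, 40-hour weeks, out of work for 12 weeks.
    print(f"Estimated lost wages: ${lost_wages(40.0, 40.0, 12.0):,.2f}")  # $19,200.00

Exactly because the arithmetic is this mechanical, handing it to software costs a lawyer little in felt agency; the jury-strategy question is a different matter entirely.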

Moral or emotional qualms

A different form of algorithm aversion comes from the belief that certain decisions simply must be made by humans. This belief often arises when the stakes are particularly grave or emotionally charged. It is especially important when considering AI’s impact on the legal system, which relies on humans to make decisions based on the full emotional and moral context of a situation, like whether an individual should be released on bail or how long their sentence should be.

Sunstein notes that these qualms can lead to a “tragedy of algorithm aversion,” where optimal algorithmic solutions are shunned because they “feel” wrong. Returning to the example of a judge setting bail, you can imagine a situation where an algorithm is better at predicting whether a defendant is a flight risk, yet its use is rejected because people feel fundamentally uncomfortable with an algorithm that has no empathy or emotions determining whether a person is incarcerated.

A clear takeaway here is that policymakers should try to avoid this “tragedy of algorithm aversion” and ensure that AI systems are not rejected simply because a decision feels like one humans should make. It might feel intuitively more comfortable for morally weighty decisions like sentencing to be made by a human, given a human’s ability to understand the emotional and moral context of the decision. But some would argue that the human tendency to overweight moral or emotional considerations is exactly why such decisions should be, if not taken out of human hands, at least constrained by some empirically grounded system.

The point here is not that algorithms should or shouldn’t be used for morally weighty decisions. Whether algorithmic decision-making is appropriate for a given circumstance, like sentencing, will always be highly context-dependent. Rather, the takeaway is that policymakers should fully explore whether decisions currently made by humans might be better made, or at least better informed, by algorithmic systems, instead of defaulting to the traditional belief that some decisions simply must be made by humans. Awareness of this potential blind spot can help policymakers craft more comprehensive and impactful regulations that produce fairer and more just outcomes.

Unique expert knowledge

A common form of algorithm aversion in the legal profession occurs when individuals with particular expertise are unwilling to believe that algorithmic solutions can match their experience. The reaction is often emotional: a person’s status as an expert can be a fundamental part of their identity, and any attack on that identity, whether real or perceived, can be met with firm resistance.

The obvious first takeaway for lawyers is not to let an emotional reaction prevent adoption of a legitimately useful tool. While it remains to be seen how much of a lawyer’s job will ultimately be done by AI, there are already enough genuinely useful AI applications that lawyers who refuse to engage with them are seriously limiting themselves.

For startups selling to law firms, it will be important to understand how a lawyer’s status as an expert can affect the sales and adoption process. Lawyers are likely to respond best to pitches explaining how AI will augment their work, not replace them entirely. This isn’t to say that lawyers are completely opposed to AI; I speak with lawyers every week who can’t wait to learn more about the AI products I am tracking. But the best AI products (and the smoothest pitches to lawyers) will be sensitive to how lawyers and their clients view a lawyer’s expertise.

Ignorance about why algorithms perform well

Another source of algorithm aversion is not understanding how the algorithm operates. People generally prefer human advice because it feels “tangible and traceable,” meaning they can more easily evaluate the source of information when it comes from a human rather than an algorithm. Sunstein cites studies showing that allowing users to see into an otherwise black box, and to understand why the algorithm acts as it does, greatly increases trust in the algorithm.
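As a toy illustration of what “seeing into the box” might look like in a product, here is a minimal sketch of a scorer that returns the per-factor contributions behind its output rather than a bare number. The factor names and weights are hypothetical; the point is the shape of the interface, not the model.

    # Hypothetical factor weights for, say, ranking candidate precedents.
    WEIGHTS = {
        "prior_similar_cases": 0.6,
        "jurisdiction_match": 0.3,
        "citation_recency": 0.1,
    }

    def score_with_explanation(features: dict[str, float]) -> tuple[float, dict[str, float]]:
        # Return the overall score plus each factor's contribution to it.
        contributions = {name: WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS}
        return sum(contributions.values()), contributions

    score, why = score_with_explanation(
        {"prior_similar_cases": 0.9, "jurisdiction_match": 1.0, "citation_recency": 0.4}
    )
    print(f"score={score:.2f}")               # the answer...
    for name, value in sorted(why.items(), key=lambda kv: -kv[1]):
        print(f"  {name}: {value:+.2f}")      # ...and the "why" behind it

A surfaced breakdown like this is exactly the kind of “tangible and traceable” reasoning the studies suggest builds trust.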

For startups, this is an important reminder that a user’s algorithm aversion can be overcome, and that explaining how the algorithm works is one powerful way to do so. This matters not only in designing products that lawyers and their clients love to use, but especially in selling to lawyers. Lawyers are (rightly) going to question whether the highly specialized knowledge and analytical skills they’ve developed over the years can be so easily replaced, and they will want to understand how and why your AI system reaches the conclusions it does. For a profession so steeped in the Socratic method, being able to drill down into exactly how your system works will pay big dividends in driving adoption.

Larger negative reaction to algorithmic error

Sunstein also cites studies providing evidence that people are more forgiving of human error than of algorithmic error. People know that people make mistakes and are willing to forgive humans like themselves for an occasional slip. While people will return to a doctor or lawyer who made a simple mistake, they are much quicker to drop an algorithm after it hallucinates.

I bring this one up because it reminds me of many conversations I have had with practicing attorneys over the past year who gave up on AI altogether after seeing it fail a single time. Time and again, lawyers would pose a complicated legal problem to the system, get a less-than-perfect answer (admittedly, sometimes far less than perfect), and conclude that AI had nothing to offer lawyers. Never mind that the AI produced a directionally correct (or better) answer instantaneously from a simple natural-language request. Never mind that the junior associates who would traditionally do the same work sometimes make legal mistakes themselves. The result wasn’t perfect (which was never the standard to begin with), and while it would have been amazing for a human to produce the answer instantly, it was met with undue skepticism because it was produced by AI.

This story doubles as a warning to both lawyers and product people. For lawyers, the main point is that AI is here to stay, and letting the perfect be the enemy of the good will only hurt your practice in the long run. Many AI tools have been out in the wild for only a short time, during which they’ve made astounding progress. Don’t let some early hiccups stand in the way of truly transformative tools.

For startups building for lawyers, it’s another reminder that lawyers often make high-stakes decisions with large consequences for their clients and their businesses. That pressure, along with the expectation that they deliver the results society demands of its experts, gives lawyers a high bar for the tools they use, a bar only raised further by an underlying aversion to algorithmic mistakes.

But maybe this won’t matter so much soon

While algorithm aversion is certainly a powerful phenomenon, we’re curious to what extent it is a lasting one. In our view, software without an AI component may soon feel like a completely antiquated experience. If AI becomes as commonplace as the internet or the telephone, we suspect that algorithm aversion will subside to some degree, even as its contours continue to change.

Imagine an experience as simple as buying shoes online. Sure, it might be easy enough to browse different shoe websites, read some articles about the best options, and order a pair to your house with just a few clicks. But as easy as that seems, there are still pitfalls with a human behind the wheel. Maybe you get lazy in your research and miss a great deal. Maybe you forget to check the right blogs and miss the latest fashions. With an AI-powered solution tracking down the best options from across the internet on your behalf, you would never have to worry about this again. In such a world, you might find yourself wondering how you ever went through all the effort of finding and buying shoes yourself, or why you were ever averse to algorithms to begin with.

Similarly, for lawyers, it isn’t hard to imagine a world where daily tasks are done at least partially by AI-powered systems. Algorithm aversion in the legal profession is likely to look very different in five years if, for example, it has become as common to run a brief through an AI-powered citation checker as it is to run it through spell-check. If algorithmic tools become as standard as the internet or the phone (or any technology that was once new and shiny and is now standard and boring), the contours of algorithm aversion will surely change as people accept algorithmic solutions as part of their day-to-day.
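For a sense of how mundane such a checker could be, here is a toy sketch that flags citation-like strings not found in a verified set. The pattern and the verified set are hypothetical stand-ins; a real checker would parse reporters properly and query an actual citation database.

    import re

    # Toy pattern: matches only "U.S. Reports"-style citations, e.g. "347 U.S. 483".
    CITATION_RE = re.compile(r"\b\d+\s+U\.S\.\s+\d+\b")

    # Hypothetical stand-in for a database of verified citations.
    VERIFIED = {"347 U.S. 483", "410 U.S. 113"}

    def flag_unverified(brief_text: str) -> list[str]:
        # Return citation-like strings in the brief that aren't in the verified set.
        return [c for c in CITATION_RE.findall(brief_text) if c not in VERIFIED]

    brief = "Compare 347 U.S. 483 with the made-up 999 U.S. 999."
    print(flag_unverified(brief))  # -> ['999 U.S. 999']

Once a check like this runs as quietly as spell-check does today, it is hard to imagine anyone feeling averse to it.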

It may be a while before algorithm aversion completely fades; fundamental human truths like the need for agency and pride in our own expertise are unlikely to go anywhere. Understanding how human views of algorithmic decision-making change over time will therefore be vital for startups producing the best AI-powered solutions, and for the lawyers and clients hoping to make the most of them.