A win for regulatory restraint
Earlier this month the Fifth Circuit Court of Appeals rejected a proposed rule that would have required counsel and unrepresented filers to certify whether they had used generative AI in drafting their submissions and, if so, to certify that all text, citations, and legal analyses had been reviewed for accuracy by a human. In rejecting the proposal, the Fifth Circuit said it was choosing “not to adopt a special rule regarding the use of artificial intelligence in drafting briefs at this time,” noting that parties are already required to ensure that their filings are truthful and accurate. In other words, “I used AI” is already no excuse for an “otherwise sanctionable offense.”
While the rejection of a proposed circuit rule of procedure might not normally be big news, it’s worth calling out as a great example of what we discussed last week: just because AI is new doesn’t mean we automatically need new laws to regulate it.
As we discussed, it’s certainly rational for courts and policymakers to worry about how AI’s flaws, like its tendency to hallucinate nonexistent laws, might negatively affect things like the speedy adjudication of justice. But failing to check a brief for truthfulness and accuracy is already banned, whether the brief was written by a hallucinating junior associate or a hallucinating LLM. Making lawyers attest to whether they used one tool, generative AI, as opposed to any other tool (whether online, like databases, or offline, like treatises) doesn’t seem to have much bearing on whether they are properly proofreading the final product.
While this might seem like a minor point (what’s so hard about making a lawyer check another box? they love paperwork), it makes a lot of sense to hold off on regulating AI until we have a better understanding of how exactly AI is being used (and misused) and to what extent our current laws fail to address those uses and harms. Before policymakers understand how AI is affecting real-world institutions, it’s hard to know whether a regulation is called for at all, and if so, what it should look like. In the current case, until we have a better sense of what generative AI is doing to the quality of appellate brief writing (and remember, it could be improving it), there’s no reason to do something that could discourage its use.
This also stands as a great reminder of the broader point that before diving into crafting new regulations for AI, it’s worth understanding how existing legal frameworks already regulate AI development. This discussion of tort law and AI governance, for example, does a great job showing how existing tort frameworks can help fill regulatory gaps and incentivize companies to build safe AI products, while still allowing new technologies to flourish without too much regulation too soon. And while these existing frameworks are probably not the be-all and end-all of AI regulation, leaning on the laws that already exist can help assuage the fears of policymakers who worry AI might run wild before more comprehensive regulations are put in place.