Some non-rhetorical questions for laws like SB 1047
The first wave of AI-specific regulations, like California’s proposed SB 1047, raises the question of when, and how, a nascent industry like AI should be regulated. A common justification for passing laws right away is that without rules on how AI should be developed or how liability should be apportioned should things go wrong, AI will run wild and cause untold harms that wouldn’t have come to pass with regulations in place. On this view, there’s no time to waste in passing laws regulating the AI industry.
This framework rests upon a couple of assumptions that we think may not be supported: (1) that we currently know enough about the AI industry and its growth to effectively regulate it, and (2) that in the meantime developers face no constraints on their behavior. An examination of SB 1047 helps make clear why these assumptions may not be correct, and therefore why regulators would be better served waiting to enact AI regulations.
Is SB 1047 accurately targeted enough to be light touch?
SB 1047’s drafters state that the bill is light touch and will only regulate the biggest models in the industry, which are defined as either:
- An artificial intelligence model trained using a quantity of computing power greater than 10^26 integer or floating-point operations, the cost of which exceeds one hundred million dollars ($100,000,000) when calculated using the average market prices of cloud compute at the start of training as reasonably assessed by the developer.
- An artificial intelligence model created by fine-tuning a covered model using a quantity of computing power equal to or greater than three times 10^25 integer or floating-point operations.
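Read literally, the two definitions above reduce to a pair of simple predicates. The following is a minimal sketch of that reading; the function and constant names are ours, not the bill’s, and this is an illustration of the drafted thresholds rather than legal advice:

```python
# Hypothetical encoding of SB 1047's "covered model" thresholds as drafted.
# All names here are our own; the bill defines prose thresholds, not code.

COMPUTE_THRESHOLD_FLOPS = 1e26       # > 10^26 integer or floating-point operations
COST_THRESHOLD_USD = 100_000_000     # > $100M at market cloud-compute prices
FINE_TUNE_THRESHOLD_FLOPS = 3e25     # >= 3 x 10^25 operations for fine-tunes

def is_covered_model(training_flops: float, training_cost_usd: float) -> bool:
    """A model trained from scratch is covered only if it exceeds BOTH
    the compute threshold and the training-cost threshold."""
    return (training_flops > COMPUTE_THRESHOLD_FLOPS
            and training_cost_usd > COST_THRESHOLD_USD)

def is_covered_fine_tune(base_model_covered: bool, fine_tune_flops: float) -> bool:
    """A fine-tuned model is covered if its base model is covered and the
    fine-tuning used at least the fine-tune compute threshold."""
    return base_model_covered and fine_tune_flops >= FINE_TUNE_THRESHOLD_FLOPS
```

Note that, as drafted, a from-scratch model must clear both the compute and the dollar thresholds, while a fine-tune inherits coverage from its base model once the (much lower) fine-tuning compute threshold is met.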
Proponents of the bill point to the fact that no models currently in existence reach these high thresholds as evidence that the bill is light touch. But clearly the bill isn’t meant to regulate nothing, and so the true measure of these thresholds is not how many models they regulate today but how many they will come to regulate in the next few years. If these thresholds mean that SB 1047 is only applied to the largest models in the industry, then the bill may well turn out to actually be light touch.
But as opponents of the bill have pointed out, the rapid development of the AI industry means that all models, not just the largest ones, may quickly surpass SB 1047’s compute threshold. Further, given the massive costs of training AI models, as well as confusion over exactly how the one-hundred-million-dollar threshold is calculated, companies may hit (or believe they’ve hit) this threshold much sooner than regulators expect. And even companies that haven’t hit it may feel pressure from the specter of liability to comply with SB 1047’s requirements whenever they are arguably close to the dollar threshold, simply to remove any doubt. If the industry develops as the bill’s opponents believe, SB 1047 will start its life regulating nothing, quickly move to regulating everything, and never make good on its promise to be light touch.
But even assuming that SB 1047 could accurately target only the largest models as intended, are we sure that these models are particularly problematic? Could the fact that these models are created by the largest companies, with the most resources dedicated to things like cybersecurity or safe AI policy research, actually lead them to produce the safest and most effective models? And what if the biggest sources of consumer harms aren’t the large models themselves, but rather the downstream actors who are using them nefariously? Depending on how questions like these play out, it may make more sense to divert regulatory resources away from the biggest models toward other causes or actors.
Not only might larger models end up being safer, they may also tend to be less common than we might expect. There is already a growing appetite for smaller models which use less compute and can therefore do handy things for consumers like run on a phone. If smaller models are not only better from a business perspective, but also face less regulatory scrutiny, we might see an accelerated shift towards these smaller models which could further reduce the effectiveness of bills like SB 1047.
Obviously there are a lot of questions that need answering here, but that is our whole point. Starting with regulations before allowing an industry to develop naturally can lead to rules that don’t align with how companies are creating products or how consumers are using them. This can not only stunt innovation and development, but also direct regulatory efforts away from real problems and toward perceived ones, harming both businesses and consumers.
And while regulations can always be updated, it’s fair to question how quickly and accurately regulators can do so. Rather than set a high limit and hope the industry grows into it, it might make more sense to watch the industry develop before deciding what its problems look like and how best to regulate them.
What’s the rush anyway?
You can imagine a proponent of SB 1047 responding that sure, maybe the specific limits are a bit arbitrary, but time is of the essence and we need regulations on the books now. Without regulations in place to guide how companies build their models or what liability they should face if they cause harm, the AI industry will run wild and cause unimaginable problems. But this argument ignores the liability and legal regimes that AI developers already face.
You don’t have to look very far to see the healthy diet of litigation against companies developing and using AI in areas as diverse as privacy, IP, and employment law. Companies don’t get to violate old laws just because they are using new technologies, and these laws that are already on the books should give us comfort that AI developers aren’t building without any regulatory guardrails.
Along with the myriad laws that AI companies must already comply with, courts can also hold AI developers liable under a variety of flexible tort law theories. Like any other company, an AI developer could be found liable if it negligently produced a good that caused harm. It isn’t very likely that a company will be able to walk away from negligently harming consumers by stating that the AI did it, not them.
Of course the courts aren’t perfect. They can be slow moving, and it can sometimes take decades for case law to develop. But courts have been able to use existing legal theories to protect consumers from new technological harms throughout American history, and we don’t yet have any reason to believe that can’t happen with AI. And while we will probably learn along the way all the reasons why common law theories don’t adequately constrain AI developer behavior, this is a feature of the common law, not a bug, and will help surface promising areas for future regulation. In the meantime, AI developers have plenty of liability to worry about, and courts have plenty of tools to enforce it.
Delayed regulation is better than bad regulation
Regulation isn’t fundamentally good or bad. In some cases, regulations provide huge social utility because the parties, when unregulated, are not incentivized to consider the harms created by their actions. In other cases, too much regulation or poorly written regulations can stifle the development of innovative products that would have created untold social utility had they been allowed to exist. What matters is that regulations are properly tailored to what they’re regulating.
In the case of AI, it might take a little more time to find out exactly what this tailoring looks like. AI as a consumer product is still in its earliest innings, and the problems associated with its use may end up looking quite different in two or three years should AI continue its rapid commercial development. If this is the case we might really regret setting up regulatory regimes based on what we thought the problems with AI would be, instead of waiting to find out what they really are.