Article
California’s AI Safety Bill Veto: The Path Forward
October 16, 2024
This article was originally published in Law360. Reprinted with permission. Any opinions in this article are not those of Winston & Strawn or its clients. The opinions in this article are the authors’ opinions only.
On Sept. 29, Gov. Gavin Newsom vetoed California's S.B. 1047, a bill that sought to impose stringent regulations on the development and deployment of advanced artificial intelligence models.
Both the bill and Newsom's decision have sparked a renewed debate on how best to balance innovation with safety in the rapidly evolving AI landscape.
Newsom's Rationale
Newsom's veto came amid intense discussions about the potential risks and benefits of advanced AI technologies.
In his veto message, he acknowledged the importance of regulating AI to prevent harmful outcomes but expressed concerns that S.B. 1047 might stifle innovation and competitiveness. He emphasized the need for a more nuanced approach that addresses immediate risks without imposing burdensome requirements on developers. Newsom stated:
While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.[1]
Implications of the Veto
Short-Term Implications
The immediate consequence of the veto is that AI developers in California will not be subject to the stringent requirements outlined in S.B. 1047, relieving companies — especially smaller startups — from the financial and administrative burdens associated with compliance, at least as far as S.B. 1047 is concerned.
However, companies will still face a patchwork of laws across other states and jurisdictions that may impose their own regulatory frameworks. Additionally, some of the 18 AI-related laws enacted in California's most recent legislative cycle may still require compliance efforts, depending on the nature of the AI product or service.
Though Newsom's statement acknowledges the need for "proactive guardrails,"[2] his critique of the bill also strongly suggests that S.B. 1047 is unlikely to serve as a blueprint for future regulation, pointing to its structural flaws, such as its lack of adaptability and failure to account for actual risks posed by smaller, more specialized models.
Newsom undoubtedly recognizes that California has a role to play in regulating AI, stating that "California is home to 32 of the world's 50 leading AI companies." Whatever steps California does take in the months ahead will therefore likely serve as a model for other states simply by virtue of the state's influence in the industry.
As California shapes its approach, though, the absence of state-level regulation in the interim leaves a gap in oversight, which could lead to increased public concern over AI safety and ethics. Companies may face pressure from consumers and advocacy groups to self-regulate and adopt more stringent voluntary standards to demonstrate their commitment to responsible and ethical AI development during this waiting period.
Long-Term Implications
The veto does not signify an end to AI regulation efforts in California. In fact, it opens the door for alternative approaches to emerge and for lawmakers to assess more options to regulate AI while providing more breathing room for innovation.
Newsom's statement echoes this sentiment, stating that "[a] California-only approach may well be warranted — especially absent federal action by Congress."
Further, one of S.B. 1047's most prominent proponents, California State Sen. Scott Wiener, D-San Francisco, endorsed the idea of California leading the charge, "particularly given Congress's continuing paralysis around regulating the tech industry in any meaningful way."[3]
A new bill might lower compliance thresholds to cover a wider range of AI systems, as Newsom noted that focusing only on the most expensive, large-scale models could create a "false sense of security" while smaller, and possibly even more dangerous, models go unchecked. At the same time, a new bill could tailor its requirements to be proportionate to the potential risks posed by each system, ensuring that the regulation matches the actual threat level of a given AI model or service, much like the European Union's AI Act.
It's also possible that, in the pursuit of greater flexibility, a future bill might encourage more public-private partnerships to develop industry best practices, rather than imposing unilateral mandates. Given Newsom's focus on the need for "an empirical approach based on evidence and science" in his veto letter, future legislation is likely to prioritize data-driven results.
Partnerships with leading AI scholars, such as Fei-Fei Li, and institutions like the U.S. Artificial Intelligence Safety Institute under the National Institute of Standards and Technology, are likely to play a pivotal role in developing industry best practices and assessing national security risks.
Additionally, a new bill could attempt to simultaneously encourage compliance and innovation by offering incentives, such as tax breaks, for companies that demonstrate adherence to regulatory standards. Future legislation may also place a stronger emphasis on issues such as data privacy, algorithmic bias and transparency, which are currently hot-button topics for regulators.
At the federal level, the outlook for whether the veto will spur any sort of comprehensive federal regulation is mixed.
On the positive side, the veto leaves the regulatory vacuum mentioned above, which the federal government may view as an opportune moment to forge ahead while building on the purported mistakes of S.B. 1047. Even so, it remains unlikely that this development meaningfully shifts expectations for Congress to get involved in the near future.
On the negative side, legislators may view the backlash and controversy surrounding S.B. 1047 as a sign that such regulation would be premature in light of its requirements and the current state of the burgeoning AI industry.
Preparing for Regulation
With Newsom's clear support for AI regulation — albeit more narrowly tailored regulation — it would be wise for companies developing or deploying AI models to adopt a proactive approach for compliance ahead of broader AI legislation expected to arrive in 2025.
Simply reacting to enacted regulations can be costly and inefficient. Instead, implementing AI governance best practices today can provide a solid foundation for future compliance.
Key Practices for AI Governance
Some key practices to consider for AI governance are as follows.
Developing Robust Internal Policies
- Understand and potentially restrict or prohibit high-risk use cases;
- Train models with lawful and reliable data; and
- Limit the ingestion of sensitive private data into AI systems (a minimal illustrative sketch follows this list).
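To make the first and third points more concrete, the following is a minimal, hypothetical Python sketch of an internal policy gate: it rejects use cases the organization has placed on a restricted list and redacts common sensitive identifiers before text is passed to an AI system. The use-case categories, patterns, and function names are illustrative assumptions only, not requirements drawn from S.B. 1047 or any other law, and a production system would rely on dedicated classification and PII-detection tooling.

```python
import re

# Hypothetical internal policy: use-case categories this organization has chosen to restrict.
RESTRICTED_USE_CASES = {"biometric_surveillance", "automated_hiring_decision", "credit_scoring"}

# Illustrative patterns for sensitive identifiers; real deployments would use dedicated PII tooling.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
EMAIL_PATTERN = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")


def check_use_case(use_case: str) -> None:
    """Raise an error if a proposed use case falls within a restricted category."""
    if use_case in RESTRICTED_USE_CASES:
        raise PermissionError(f"Use case '{use_case}' is restricted under internal AI policy.")


def redact_sensitive_data(text: str) -> str:
    """Replace common sensitive identifiers before text is ingested by an AI system."""
    text = SSN_PATTERN.sub("[REDACTED-SSN]", text)
    return EMAIL_PATTERN.sub("[REDACTED-EMAIL]", text)


if __name__ == "__main__":
    check_use_case("customer_support_summarization")  # a permitted, low-risk example
    prompt = "Customer 123-45-6789 (jane@example.com) asked about a refund."
    print(redact_sensitive_data(prompt))
    # -> Customer [REDACTED-SSN] ([REDACTED-EMAIL]) asked about a refund.
```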
Proactive Monitoring and Auditing
- Conduct periodic monitoring and auditing of AI systems to ensure compliance and identify potential issues early (an illustrative audit-logging sketch follows this list); and
- Stay informed about developing AI regulations and consider participating in public-private partnerships to help shape and align with future best practices.
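To illustrate the monitoring and auditing point, the sketch below shows one hypothetical way to build an audit trail: each AI decision is logged with a timestamp, model version, and a hash of the input so that periodic reviews can sample records without retaining sensitive text. The log format and function names are assumptions for illustration, not a prescribed standard.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit log; in practice, records would go to a retained, access-controlled store.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO, format="%(message)s")


def log_ai_decision(model_version: str, user_input: str, output: str) -> None:
    """Record an AI system decision with enough metadata for later sampling and review."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the raw input so reviewers can trace a record without storing sensitive text.
        "input_sha256": hashlib.sha256(user_input.encode("utf-8")).hexdigest(),
        "output": output,
    }
    logging.info(json.dumps(record))


if __name__ == "__main__":
    log_ai_decision(
        model_version="support-bot-v1.2",
        user_input="Can I get a refund?",
        output="Refunds are available within 30 days of purchase.",
    )
```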
By taking these proactive steps today, organizations can better prepare for upcoming regulations, mitigate risks, and demonstrate their commitment to responsible and ethical AI development. Early preparation not only ensures compliance but can also provide a competitive edge when regulation inevitably arrives.
[1] https://www.gov.ca.gov/2024/09/29/governor-newsom-announces-new-initiatives-to-advance-safe-and-responsible-ai-protect-californians.
[2] https://www.gov.ca.gov/wp-content/uploads/2024/09/SB-1047-Veto-Message.pdf.
[3] https://sd11.senate.ca.gov/news/senator-wiener-responds-governor-newsom-vetoing-landmark-ai-bill.