After weeks of uncertainty and contention, the spotlight has shifted away from California’s AI safety bill: SB 1047.
Following strong opposition from Silicon Valley and even some political heavyweights from his own party, California Governor Gavin Newsom has vetoed the bill, which would have required developers of large-scale AI models to comply with a set of stringent obligations and would have made California a pioneer not just in AI technology, but also in AI rulemaking.
While the much-anticipated September 30 deadline saw the governor’s veto of SB 1047, 17 other AI bills were signed into law, cracking down on deepfakes, requiring AI watermarking, protecting children and workers, and combating AI-generated misinformation. Further, Governor Newsom’s recently announced initiatives to advance safe and responsible AI underscore that the pressure on developers and deployers of AI tools remains high.
Why did SB 1047 get vetoed?
From the outset, SB 1047 was met with stark opposition from academics, political leaders, and many business leaders in the tech sector, with OpenAI arguing that the bill would “stifle innovation and push talent out of California”.
In his veto message, Governor Newsom highlighted the importance of finding “a delicate balance” when it comes to regulating a technology still in its infancy. He expressed concern over the bill’s focus on only the largest AI models, which leaves out other specialized, potentially more dangerous models that do not trigger its cost and computing-power thresholds. He also argued that applying strict standards uniformly, regardless of whether an AI system is deployed in a high-risk environment or involves critical decision-making, or is instead a lower-risk deployment, does not strike the right balance between protecting the public from the threats of AI technology and realizing its promise.
In rejecting SB 1047, Governor Newsom also laid out his vision for the key elements that must inform a new regulatory framework addressing the novel risks posed by AI. He emphasized that he agrees with the bill’s author, Senator Scott Wiener, on the need to adopt safety protocols and implement proactive guardrails against AI-related risks, but stressed that regulation must be based on empirical evidence and science.
Importantly, his recently announced initiative to collaborate with leading AI scientists to develop responsible guardrails for the deployment of GenAI signals that California’s regulatory efforts to address the risks posed by AI are far from over.
Beyond the finish line
While SB 1047 was returned to the legislature without a signature, Governor Newsom did sign several other key AI bills, including:
- Generative Artificial Intelligence: Training Data Transparency Bill (AB 2013), which will require developers of GenAI systems and services to make information about the datasets used to train, test, and validate their models publicly available on their websites.
- Senate Bill 942 (SB 942), which establishes the California AI Transparency Act and requires providers of GenAI systems with over 1 million monthly users to make publicly available tools that can identify GenAI content produced by their systems. Providers will also be required to offer users the option to include provenance disclosures in content created or altered by their GenAI systems. The Act introduces fines of up to $5,000 per violation.
- Assembly Bill 2885 (AB 2885), which introduces a single, uniform definition of AI into California law. Drawing on the definition developed by the Organisation for Economic Co-operation and Development (OECD), the bill aligns California’s definition of AI with other major frameworks, such as the EU AI Act.
Additional bills aimed at tackling emerging challenges posed by GenAI, such as deepfakes, AI-generated misinformation, and risks to children, were also enacted:
| Area | Bill | Summary |
| --- | --- | --- |
| Privacy | AB 1008 | Clarifying that personal information under the California Consumer Privacy Act (CCPA) can exist in various formats, including information stored by AI systems |
| Child safety | AB 1831 and SB 1381 | Expanding the scope of existing child pornography statutes to include matter that is digitally altered or generated using AI |
| Online safety | SB 981 | Requiring social media platforms to establish a mechanism for reporting and removing sexually explicit digital identity theft |
| Entertainment and media industry | AB 1836 | Concerning the use of deceased performers’ digital replicas without prior consent of the performer’s estate |
| Entertainment and media industry | AB 2602 | Requiring a performer’s informed consent and proper representation in executing a contract for any transfer of rights to that individual’s likeness or voice |
| Elections | AB 2355 | Requiring disclosure of the use of AI in political ads |
| Elections | AB 2655 | Requiring the removal of AI-generated political content from online platforms |
| Elections | AB 2839 | Restricting the distribution of AI-generated political ads |
Looking ahead
Governor Newsom’s veto of SB 1047 could still be overridden by a two-thirds vote in both houses, although this appears highly unlikely. Even though the spotlight has shifted away from California’s SB 1047 for the time being, the reprieve may only be temporary. A new, reformed AI safety bill, modified to address the concerns described in Governor Newsom’s veto message, could very well make its way onto the governor’s desk, and next time there could be a different outcome.