
California’s AI safety and transparency bills reach watershed moment

The birthplace of some of the biggest players in the AI revolution, California is edging closer to approving its contentious AI safety and transparency regulations.

The state legislature has overwhelmingly passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Bill (SB 1047), which aims to ensure large-scale artificial intelligence systems are safely developed. At the same time, another key piece of AI-related legislation, the Generative Artificial Intelligence: Training Data Transparency Bill (AB 2013), has also been passed.

Now, the spotlight shifts to California Governor Gavin Newsom, who must decide this month whether to veto or sign into law each of these landmark bills.

Response of the tech industry and Democrats

SB 1047 has attracted media attention after a number of Silicon Valley tech leaders voiced their concerns over the proposed new rules and their perceived negative impact on the tech industry. OpenAI has argued that the bill would “stifle innovation and push talent out of California”. State Senator Scott Wiener has been quick to push back, saying that argument “makes no sense” given that SB 1047 is not limited to companies headquartered in California and, rather, affects all AI model developers that do business in California and meet certain size thresholds.

In addition to pushback from Big Tech executives, one of the most interesting developments has been the response of several Democratic members of the US Congress, who have weighed in behind Silicon Valley by writing to Governor Newsom, urging him to veto the bill. Speaker Emerita Nancy Pelosi has also issued a statement in opposition.

The discussion comes at a time when governments around the world are striving to find a balance between mitigating the potential risks posed by AI technology and avoiding burdens that stifle innovation.

SB 1047 focuses on powerful AI models

Unlike the EU AI Act, which sets a common framework for the use and supply of AI systems, SB 1047 would impose its stringent obligations only on the largest models that meet certain computing power and monetary thresholds.

With the advent of GenAI and widespread public access to the technology, powerful AI models capable of performing a wide range of tasks have come under the regulatory spotlight. These advanced models could be used to manipulate or harm individuals, disrupt important services or threaten national security. Some of the risks include:

  • Disseminating misinformation: GenAI could be used to generate highly realistic deepfakes or manipulate official information to influence public opinion, particularly around important events such as elections.
  • Conducting cyber-attacks: AI and GenAI could help cyber criminals prepare attacks on critical infrastructure, e.g. by identifying and exploiting existing weaknesses that are difficult to detect, or by reducing the time needed to prepare cyber-attacks through the automation of repetitive tasks.
  • Perpetrating fraud or scams: AI could enable criminals to perpetrate fraud or scams by creating fake content, such as deepfake videos or audio of celebrities used to promote investment scams.
  • Engineering dangerous biological weapons: GenAI’s capabilities could help malicious actors develop biological or chemical weapons or assist with the planning of attacks.

California’s SB 1047 seeks to tackle these risks by introducing a set of obligations on developers before and after the AI model goes live. 

Key provisions of SB 1047

In-scope AI models: SB 1047 defines “covered models” by reference to the quantity of computing power used to train the model and specified cost thresholds, as assessed by the developer of the model.

Before January 1, 2027
Covered model
  • The cost of the computing power used to train the model exceeds $100 million
  • The quantity of computing power used is greater than 10^26 integer or floating-point operations
Fine-tuned covered model
  • The cost of the computing power used to fine-tune the model exceeds $10 million
  • The quantity of computing power used is equal to or greater than 3 × 10^25 integer or floating-point operations

On and after January 1, 2027
Covered model
  • The cost of the computing power used to train the model exceeds $100 million
  • The quantity-of-computing-power threshold is determined by the Government Operations Agency
Fine-tuned covered model
  • The cost of the computing power used to fine-tune the model exceeds $10 million
  • The quantity of computing power used exceeds a threshold determined by the Government Operations Agency
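
To make the pre-2027 thresholds concrete, here is a minimal Python sketch that applies them. It reads the two bullets in each category as cumulative conditions; the helper functions and the example figures are invented for illustration and are not drawn from the bill itself.

```python
# Illustrative sketch of SB 1047's pre-2027 covered-model tests.
# Only the numeric thresholds come from the bill text; the helpers
# and example figures are hypothetical, and the compute and cost
# conditions are treated as cumulative.

COVERED_COMPUTE_OPS = 1e26        # training compute, integer or floating-point ops
COVERED_COST_USD = 100_000_000    # cost of that computing power

FINE_TUNE_COMPUTE_OPS = 3e25      # fine-tuning compute threshold
FINE_TUNE_COST_USD = 10_000_000   # cost of the fine-tuning compute


def is_covered_model(training_ops: float, training_cost_usd: float) -> bool:
    """Pre-2027 test: more than 10^26 ops of training compute costing over $100M."""
    return training_ops > COVERED_COMPUTE_OPS and training_cost_usd > COVERED_COST_USD


def is_fine_tuned_covered_model(ops: float, cost_usd: float) -> bool:
    """Pre-2027 test: at least 3 x 10^25 ops of fine-tuning compute costing over $10M."""
    return ops >= FINE_TUNE_COMPUTE_OPS and cost_usd > FINE_TUNE_COST_USD


# Hypothetical frontier run: 2 x 10^26 ops at a $150M compute cost.
print(is_covered_model(2e26, 150_000_000))            # True
print(is_fine_tuned_covered_model(1e25, 12_000_000))  # False: compute below threshold
```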

Obligations before training: Under SB 1047, developers of covered AI models would be required to take certain steps before model training begins, including implementing:

  • cybersecurity protections, which must be “reasonable” and “appropriate in light of the risks” and which must include administrative, technical, and physical safeguards to prevent unauthorized access to, misuse of, or unsafe post-training modification of, the covered model
  • the capability to enact a full shutdown, taking into account the risks a shutdown could pose to critical infrastructure
  • a written and separate safety and security protocol, and ensuring that it is implemented as written.

Obligations before the model is used/released: Additional obligations would be placed on developers before a covered model or covered model derivative is used or made available for use, including:

  • assessing whether the model is reasonably capable of causing or materially enabling a critical harm
  • recording and retaining the specific tests and test results used in that assessment
  • taking reasonable care to implement appropriate safeguards to prevent critical harm
  • taking reasonable care to ensure that critical harms stemming from the covered model’s actions can be accurately and reliably attributed to the model.

Statement of compliance: Covered developers would be required to submit to the California Attorney General an annual statement of compliance with the above requirements.

Obligation on computing cluster operators: The bill defines a computing cluster as a “set of machines transitively connected by data center networking of over 100 gigabits per second that has a theoretical maximum computing capacity of at least 10^20 integer or floating-point operations per second and can be used for training artificial intelligence”. Operators of such computing clusters would be required to implement written policies and procedures that would apply to customers using compute resources sufficient to train a covered model.
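
As a rough illustration, the quoted definition reduces to three cumulative conditions, sketched below. The helper function is hypothetical and not anything the bill prescribes; only the numeric thresholds come from the quoted text.

```python
# Illustrative check against the "computing cluster" definition quoted
# above. The thresholds mirror the bill text; the helper itself is a
# hypothetical sketch.

NETWORK_GBPS_THRESHOLD = 100      # "networking of over 100 gigabits per second"
CAPACITY_OPS_THRESHOLD = 1e20     # "at least 10^20 ... operations per second"


def is_computing_cluster(network_gbps: float, peak_ops_per_sec: float,
                         usable_for_ai_training: bool) -> bool:
    """True if a set of machines meets all three quoted conditions."""
    return (network_gbps > NETWORK_GBPS_THRESHOLD
            and peak_ops_per_sec >= CAPACITY_OPS_THRESHOLD
            and usable_for_ai_training)


# Hypothetical cluster: 400 Gbps fabric, 5 x 10^20 ops/sec peak capacity.
print(is_computing_cluster(400, 5e20, usable_for_ai_training=True))  # True
```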

Whistleblower protections: The bill introduces certain whistleblower protections for employees, prohibiting the developer, or its contractors or subcontractors, from preventing employees from disclosing information to the Attorney General or the Labor Commissioner, and from retaliating against an employee for disclosing such information.

AG’s powers for non-compliance: The Attorney General would be charged with enforcing the requirements introduced by the bill and would be authorized to bring a civil action in cases of non-compliance, seeking, for example, monetary and punitive damages or civil penalties.

Key impacts of SB 1047 on the AI industry 

  • The bill would directly affect the largest AI companies that have the resources to develop advanced AI models meeting the bill’s computing power and monetary thresholds, leaving smaller and newly established startups outside direct regulation.
  • The computing power threshold, although initially set by law, could later be modified as technology advances and current levels of computing power are surpassed.
  • The bill’s territorial scope could reach beyond state borders and apply to any developer doing business in the state, even if the covered model was developed and trained outside of California.  
  • Some companies already have to comply with the EU’s AI Act, which defines general-purpose AI models with systemic risk in a similar fashion and imposes stricter obligations not only on developers but also on distributors and deployers. Companies already preparing for AI Act compliance might find themselves partially compliant with the bill’s provisions if it comes into force.

Under the radar: GenAI Transparency Bill 

Overshadowed by SB 1047, another crucial piece of AI-related legislation, the Generative Artificial Intelligence: Training Data Transparency Bill (AB 2013), has also made it to the Governor’s desk.

With a narrower scope than SB 1047, AB 2013 focuses only on GenAI systems released on or after January 1, 2022. If approved, it would require developers of GenAI systems and services to make publicly available on their websites information about the datasets used to train, test, and validate their models, giving consumers insight into how these systems are trained. The disclosure would include a high-level summary of the datasets used in the development of the GenAI system or service, such as the sources or owners of the datasets and a description of how the datasets further the intended purpose of the AI system or service.
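
Purely as a hypothetical illustration of the kind of summary a developer might publish: AB 2013 prescribes the content of the disclosure, not its format, and every name and value in the sketch below is invented.

```python
import json

# Hypothetical sketch of an AB 2013-style training data disclosure.
# The bill specifies what must be disclosed, not a format; all field
# names and values here are invented for illustration only.

training_data_disclosure = {
    "system": "ExampleGen-1",              # invented GenAI system name
    "released": "2024-03-01",
    "datasets": [
        {
            "name": "Example Web Corpus",  # invented dataset
            "source_or_owner": "Example Data Co.",
            "purpose": "General language pretraining; supports the "
                       "system's intended text-generation purpose.",
        },
    ],
}

# A developer might publish a summary like this on its public website.
print(json.dumps(training_data_disclosure, indent=2))
```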

There are exemptions to the disclosure requirement, for example where the GenAI system or service is solely used in the operation of aircraft in the national airspace, or where it is made available only to a federal entity for national security, military, or defense purposes.

Looking ahead 

Both bills have been sent to the desk of Governor Gavin Newsom. In the meantime, the attention of the tech sector and the media is focused on the fate of SB 1047. Governor Newsom has until September 30 to sign SB 1047 into law. If enacted, SB 1047 would directly affect the largest tech companies developing in-scope AI models, even where those models are developed outside of California, leaving a lasting impact on the future of AI.

Regardless of whether Governor Newsom signs or vetoes these proposed laws, companies and model developers would be wise to familiarize themselves with their substantive requirements. A regulatory blueprint drawn by California lawmakers could be adapted and rolled out in other state capitals, or in Washington, D.C., as regulators and legislators across the country continue considering how to regulate AI.

Going forward, it will be interesting to see whether the states will continue to lead the charge when it comes to AI legislation – and how the tension that clearly exists with the federal government in this regard will play out.

Subscribe to our Tech Insights blog for insights, updates, and news from our experts.
