As artificial intelligence continues to transform industries across the globe, legislators are still grappling with how to balance technological innovation against the public interest. California, a leader in technology, has become a central player in shaping the legal landscape governing AI in the United States, and its recent legislative developments underscore the complexity of AI regulation.

In 2024, two significant AI-related bills made their way to California Governor Gavin Newsom’s desk: Senate Bill 1047 (“SB 1047”) and Assembly Bill 2013 (“AB 2013”). Newsom vetoed SB 1047 and signed AB 2013 into law. Together, these actions illustrate how lawmakers attempt to protect the public from potential harms while encouraging innovation in emerging fields like AI.

SB 1047: The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act

SB 1047, introduced by California State Senator Scott Wiener, sought to address a critical gap in AI regulation: safety and accountability for developers of the most powerful “frontier” AI models. The bill was one of the first significant AI regulations to place liability on developers and to require substantial transparency in the training process. SB 1047 included considerable governance and reporting requirements that developers would have to meet before beginning to train covered models. The bill also required developers to build in a “kill switch” capable of fully shutting down their models, for instance in the event of disruptions to critical infrastructure.

The bill deeply divided Silicon Valley. Many powerful players in the AI field opposed it, arguing that its regulations would do major damage to innovation. In a letter sent to Senator Wiener, OpenAI’s Chief Strategy Officer Jason Kwon wrote that the bill would threaten “growth, slow the pace of innovation, and lead California’s world-class engineers and entrepreneurs to leave the state in search of greater opportunity elsewhere.”

Others, including prominent AI researchers Yoshua Bengio and Geoffrey Hinton, supported the bill for its focus on safety in AI development. They argued that “it is feasible and appropriate for frontier AI companies to test whether the most powerful AI models can cause severe harms, and for these companies to implement reasonable safeguards against such risks.”

Governor Newsom ultimately vetoed SB 1047. In his veto statement, Newsom acknowledged the importance of safety protocols for AI. However, he objected that the bill’s scope turned on a model’s size rather than its actual risk, such that it “applies stringent standards to even the most basic functions—so long as a large system deploys it.” The veto reflects the difficult balancing act legislators face in weighing the protective scope of AI regulations against their potential to hinder innovation.

AB 2013: Artificial Intelligence Training Data Transparency

Unlike SB 1047, AB 2013 was signed into law. The bill takes a more targeted approach to AI regulation, focusing on privacy and data transparency. It builds on existing privacy law to ensure that AI developers adhere to strict data-protection standards, particularly in data-sensitive sectors. Most notably, AB 2013 requires developers of generative AI systems, including large language models, to publish high-level summaries of their training data. In contrast to SB 1047’s broad and burdensome governance requirements, AB 2013’s narrow focus on training data transparency was a substantial factor in its success.

California’s recent legislative activity offers a glimpse of the challenges lawmakers will face in developing workable AI regulations. On one hand, there is an urgent need for laws that hold AI developers and technology companies liable for their creations’ potential harms. On the other hand, overly strict regulations could hinder developers’ ability to innovate, which is critical to the technology sector.

What Comes Next?

California’s recent legislative actions on AI mark an inflection point in ongoing efforts to regulate artificial intelligence. With SB 1047 vetoed, a significant gap remains in establishing liability for AI companies. While AB 2013 provides important safeguards, more work is needed to create a comprehensive and balanced regulatory framework for AI at the state, federal, and even global levels.

Internationally, approaches to AI governance have differed. The European Union’s AI Act takes a risk-based approach, classifying AI systems by their potential for harm and regulating each tier accordingly. China has implemented strict rules on AI use in surveillance and other state-controlled contexts. The United States, by comparison, has been slower to adopt AI regulations, with California leading the way.

Finding the right balance will be a long and iterative process. As AI evolves, lawmakers should adjust their approach to account for both societal risks and economic opportunities. Moreover, the global nature of AI development and the concentration of major AI companies in California mean that the state’s actions could have a ripple effect, influencing AI regulation in the U.S. and beyond.