California lawmakers voted to advance legislation Tuesday that would require artificial intelligence companies to test their systems and add safety measures to prevent them from being potentially manipulated to wipe out the state’s electric grid or help build chemical weapons — scenarios that experts say could be possible in the future as technology evolves at warp speed.
The first-of-its-kind bill aims to reduce risks created by AI. It is fiercely opposed by venture capital firms and tech companies, including Google and Meta, the parent company of Facebook and Instagram. They argue the regulations wrongly target developers and should instead focus on those who use and exploit AI systems for harm.
Democratic state Sen. Scott Wiener, who authored the bill, said the proposal would provide reasonable safety standards by preventing “catastrophic harms” from extremely powerful AI models that may be created in the future.
The requirements would apply only to systems that cost more than $100 million in computing power to train. As of July, no AI model had reached that threshold.
Wiener slammed the opposition campaign at a legislative hearing Tuesday, saying it spread inaccurate information about his measure. His bill would not create new criminal charges for AI developers whose models were exploited to cause societal harm, provided the developers had tested their systems and taken steps to mitigate risks, Wiener said.
“This bill is not going to send any AI developers to prison,” Wiener said. “I would ask folks to stop making that claim.”
Under the bill, only the state attorney general could pursue legal actions in case of violations.
Democratic Gov. Gavin Newsom has touted California as an early AI adopter and regulator, saying the state could soon deploy generative AI tools to address highway congestion, make roads safer and provide tax guidance. At the same time, his administration is considering new rules against AI discrimination in hiring practices. He declined to comment on the bill but has warned that overregulation could put the state in a “perilous position.”
A growing coalition of tech companies argues the requirements would discourage companies from developing large AI systems or keeping their technology open-source.
“The bill will make the AI ecosystem less safe, jeopardize open-source models relied on by startups and small businesses, rely on standards that do not exist, and introduce regulatory fragmentation,” Rob Sherman, Meta vice president and deputy chief privacy officer, wrote in a letter sent to lawmakers.
Opponents want to wait for more guidance from the federal government. Proponents of the bill said California cannot wait, citing hard lessons learned by not acting soon enough to rein in social media companies.
The proposal, supported by some of the most renowned AI researchers, would also create a new state agency to oversee developers and provide best practices.
State lawmakers were also considering Tuesday two ambitious measures to further protect Californians from potential harms from AI. One would fight automation discrimination when companies use AI models to screen job resumes and rental apartment applications. The other would prohibit social media companies from collecting and selling data of people under 18 years old without their or their guardians’ consent.