AI Policy Update: California’s AI Safety Bill

"Trish, I just read about California’s new AI bill. What does it do and is it a good or bad…

Sep 3, 2024

By Trisha Prabhu

“Trish, I just read about California’s new AI bill. What does it do and is it a good or bad thing?”

Hi there, and welcome back to another week of Ask Trish! I hope you’re all well and having a wonderful start to September. (For those folks in the US, I hope you enjoyed a restful, rejuvenating long weekend.) 

Thank you so much to this week’s questioner for the fantastic, timely question. Indeed, we have a major AI policy update to dig into: last Wednesday, California’s State Assembly passed a measure (SB 1047) that aims to create safeguards around AI development and hold AI companies accountable for any harms their technology might cause in the real world. No other state has passed a bill like this…so this is a pretty big deal. It’s also, of course, a big deal because so many prominent American AI companies, from relative newbies like OpenAI to classic Big Tech firms like Google, are based in California…so if signed into law by Governor Gavin Newsom, this bill would have a huge impact. And that impact may not be contained to California: because so few other states have a blueprint for AI regulation, many will be looking to California and possibly replicating its approach.

So yes, this is pretty high-stakes stuff. Which then leads to your question: what exactly is in this bill, and how do people feel about it? This week, I break it down for y’all. I’ll briefly i) dive into what this bill does and ii) talk through different folks’ perspectives on it. Sound good? Let’s get into it.

First up…what exactly is in this bill? Put simply, this bill aims to do two things: i) require AI companies to test their large language models for safety and ii) give the California State Attorney General (basically, the lawyer for the state) the power to sue AI companies for any serious harms, e.g., death, caused by their products. With respect to the first provision – the goal is to require companies to proactively think through and reduce the safety risks posed by their products. Not only would AI companies have to do that testing, they would have to publicly disclose their safety protocols, so as to prevent models from being manipulated in disastrous ways. With respect to the second provision – the goal is to empower the state (and thus the public) to hold AI companies accountable for harm. The bill’s authors hope that this will not only allow the state to push back on AI companies where necessary, but will (similar to the first provision) encourage and incentivize AI companies to think twice before releasing a product that may cause harm. Importantly, this bill would not apply to every AI company out there…instead, it would only apply to models that cost more than $100 million to train. (I know! So much money!) Thus far, no AI model has hit that threshold…so the legislation wouldn’t make an impact straightaway. Instead, it would lay the groundwork for regulating the AI models of the future – AI that will surely have a tremendous impact on our society.

Okay…so now we know what’s in the bill. But how do people feel about it? Short answer: It’s complicated. Longer answer: This bill is quite contentious. Advocates, including the bill’s author, Democratic State Senator Scott Wiener, argue that the bill is “light touch” and would set an important floor for AI safety. Many supporters point to horrifying examples of AI harms, like non-consensual sexual deepfakes, and other harms with important implications for society, like political AI deepfakes, as evidence that action is needed – now. They say that we can’t make the same mistakes that we did with Web 2.0 – waiting to take action until it was too late, until too many digital citizens had been harmed by or hooked on social media. Moreover, they push back against the notion that innovation and safety are mutually exclusive; in their view, the two can go hand in hand – and that, they argue, is exactly what this bill achieves.

But on the other hand, lots of folks (including plenty of prominent people!) oppose the measure. Former Speaker of the House Nancy Pelosi called it “well-intentioned but ill-informed.” Members of the tech community have adopted a similar stance, arguing that the legislation will inadvertently stifle AI innovation before it has a chance to get off the ground. Folks also worry that the bill is based on extreme, unlikely fears about AI. And tech companies OpenAI, Google, and Meta have opposed the legislation on the grounds that what’s really needed is federal legislation – one unified federal standard, as opposed to piecemeal, state-by-state legislation, which, they say, will be difficult to comply with. They also argue that the bill unfairly puts the burden of safety on tech companies, rather than on those who misuse AI.

Who’s right? That’s for you to decide. Either way, I hope this post has left you with a better understanding of California’s AI safety bill. There’s definitely much more to come here…as noted, though the bill has passed the State Assembly, Governor Newsom has yet to decide if he’ll sign it into law…so keep your eyes peeled for that decision! And in the meantime, keep the questions coming! Share any internet-related ponderings with me here.

Have a great week,

Trish
