California Gov. Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act on Monday, a first-in-the-nation law that requires advanced AI companies to disclose their safety protocols.
S.B. 53 will require AI companies that generate at least $500 million in annual revenue to publicly disclose their safety practices, including cybersecurity, risk management and human oversight.
The bill also calls for the creation of a new consortium within state government that will “advance the development and deployment of artificial intelligence that is safe, ethical, equitable and sustainable by fostering research and innovation.”
“Technology and innovation should benefit society,” said Stephen Gibler, an adjunct professor at the USC School of Cinematic Arts and a faculty member at the USC Center for AI in Society. “Hopefully, this legislation gets these companies to think if [what they’re doing] is safe. Ideally, it leads to a culture shift that pushes in that direction.”
AI use has also been linked to self-destructive behaviors among vulnerable people. A study conducted by Common Sense Media found an “unacceptable” level of risk associated with minors using AI companions.
The act’s safety regulations will also encourage healthy competition among companies developing AI tools, said Angela Zhou, an assistant professor of data sciences and operations at USC Marshall and a faculty member at the USC Center for AI in Society.
“It encourages competing on safety and more reliable user experiences,” said Zhou. “I think this will really just push the industry towards improving long-term impacts.”
More than half of the companies on Forbes’ 2025 AI 50 list are based in California. Newsom wrote in a press release that “California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive. This legislation strikes that balance.”
Jaspreet Ranjit, a doctoral student in computer science at the Viterbi School of Engineering, believes the new regulations will benefit California.
“California is basically signaling that safety and accountability are not necessarily roadblocks to innovations, but they can be viewed as prerequisites,” said Ranjit. “And, if done well, I think that this could really position California as a leader in trustworthy AI development, and it might set a model for that.”
According to The New York Times, the act also protects whistleblowers at AI companies who speak out about the risks posed by their employers’ AI tools. The protections come after former Meta employee Sarah Wynn-Williams alleged in April that Meta’s AI model, Llama, was used to help the Chinese AI company DeepSeek. She faced legal backlash after coming forward.
“Historically, whistleblowers have faced significant scrutiny, both in the media and potentially by their employers, so I think that protecting them under this act is really critical in upholding or holding industry accountable,” said Ranjit. She feels that this protection will place a newfound responsibility on employees at advanced AI companies.
As a student, Ranjit wonders whether professors and students will now be more open to learning about AI tools.
“I think a lot of the time, AI can be viewed in a very negative light, especially in the education sector,” said Ranjit. “I’m leaning more optimistic and positive on incorporating these tools in the classroom, given that professors are going to have more context behind what goes into the development.”