
ZDNET Highlights
- California’s new AI safety law will go into effect on January 1.
- It focuses on transparency and whistleblower protection.
- Some AI safety experts say the technology is developing too fast.
California’s new law, taking effect Thursday, Jan. 1, aims to add a measure of transparency and accountability to the AI industry at a time when some experts warn that the technology could escape human control and wreak havoc.
Originally authored by Democratic state Sen. Scott Wiener, the law requires companies developing frontier AI models to publish information on their websites detailing their plans and policies for responding to “catastrophic risks,” and to notify state authorities of any “significant security incident” within 15 days. Fines for failing to meet these requirements can run up to $1 million per violation.
Also: Why reasoning models could make it easier to catch misbehaving AI
The new law also provides whistleblower protection to employees of companies developing AI models.
The law defines catastrophic risk as a scenario in which an advanced AI model kills or injures more than 50 people or causes physical damage worth more than $1 billion, for example by providing instructions to develop chemical, biological or nuclear weapons.
“There is concern that advanced artificial intelligence systems, unless developed with careful diligence and appropriate precautions, may have capabilities that could pose catastrophic risks from both malicious use and malfunction, including artificial intelligence-enabled hacking, biological attacks, and loss of control,” the authors of the new legislation wrote.
Safety concerns
California’s new law reflects, and aims to ease, some of the fears lingering in the minds of AI safety experts as the technology rapidly grows and evolves.
Canadian computer scientist and Turing Award winner Yoshua Bengio recently told The Guardian that the AI industry has a responsibility to build a kill switch into its powerful models in case they escape human control, citing research showing that such systems can sometimes hide their motives and mislead human researchers.
Last month, a paper published by Anthropic claimed that some versions of Claude were showing signs of “introspective awareness.”
Also: Claude receives high praise from a Supreme Court justice – is the streak of AI legal defeats over?
Meanwhile, others argue that progress in AI is moving dangerously fast – too fast for developers and lawmakers to impose effective guardrails.
A statement published online in October by the non-profit organization Future of Life Institute argued that unrestricted growth in AI could lead to “human economic obsolescence and powerlessness, loss of freedom, civil liberties, dignity and control, national security risks, and even potential human extinction”, and called for a moratorium on the development of advanced models until rigorous safety protocols are established.
FLI has also conducted a study that found eight major developers falling short on safety-related criteria, including “governance and accountability” and “existential risk.”
Federal, state, and private sector
California’s new law also stands in sharp contrast to the Trump administration’s approach to AI, which so far has been, essentially, “go forth and multiply.”
President Donald Trump has dismantled Biden-era regulation of the technology and given the industry wide latitude to move forward with the development and deployment of new models, eager to maintain a competitive edge over China’s own AI efforts.
Also: China’s open AI models are a hit with Western countries – here’s what happens next
So the responsibility for protecting the public from the potential harms of AI has largely fallen to state lawmakers, such as Wiener, and to the technology’s developers themselves. On Saturday, OpenAI announced that its Safety Systems team is hiring a new “Head of Preparedness,” a role responsible for building a framework for testing model safety. The position offers a $555,000 salary, plus equity.
“This is an important role at a critical time,” company CEO Sam Altman wrote in an X post about the new position. “Models are improving rapidly and are now capable of many great things, but they are also starting to present some real challenges.”
(Disclosure: ZDNET’s parent company Ziff Davis filed a lawsuit against OpenAI in April 2025, alleging it infringed Ziff Davis copyrights in the training and operation of its AI systems.)

