On Monday, more than 200 former heads of state, diplomats, Nobel laureates, AI leaders, scientists, and others agreed on one thing: there should be an international agreement on "red lines" that AI should never cross, for example, not allowing AI to impersonate a human being or to self-replicate.
They, along with more than 70 organizations that address AI, have all signed the Global Call for AI Red Lines initiative, a call for governments to reach an "international political agreement on 'red lines' for AI by the end of 2026." Signatories include British-Canadian computer scientist Geoffrey Hinton, OpenAI co-founder Wojciech Zaremba, Anthropic CISO Jason Clinton, Google DeepMind research scientist Ian Goodfellow, and others.
"The goal is not to react after a major incident occurs … but to prevent large-scale, potentially irreversible risks before they happen," Charbel-Raphaël Segerie, executive director of the French Center for AI Safety (CeSIA), said during a briefing with reporters on Monday.
"If nations cannot yet agree on what they want to do with AI, then at least they should agree on what AI should never do," he said.
The announcement comes ahead of the high-level week of the 80th United Nations General Assembly in New York, and the initiative was led by CeSIA, The Future Society, and UC Berkeley's Center for Human-Compatible Artificial Intelligence.
Nobel Peace Prize laureate Maria Ressa mentioned the initiative during her opening remarks at the assembly, calling for efforts to "end Big Tech impunity through global accountability."
Some regional AI red lines already exist. The European Union's AI Act, for example, bans certain uses of AI deemed "unacceptable" within the EU. There is also an agreement between the United States and China that nuclear weapons should remain under human, not AI, control. But there is no global consensus yet.
Over the long term, more than "voluntary pledges" will be needed, Niki Iliadis, director for global governance of AI at The Future Society, told reporters on Monday. Responsible scaling policies made within AI companies "fall short for real enforcement." Eventually, an independent global institution with teeth is needed to define, monitor, and enforce the red lines, she said.
"They can simply not build AGI until they know how to make it safe," said Stuart Russell, a professor of computer science at UC Berkeley and a prominent AI researcher. "Just as nuclear power developers did not build nuclear plants until they had some idea how to stop them from exploding, the AI industry must choose a different technology path, one that builds in safety from the beginning, and we must know that they are doing it."
Russell also said that red lines do not stifle economic development or innovation, as some critics of AI regulation argue. "You can have AI for economic development without having AGI that we don't know how to control," he said. "This supposed dichotomy, that if you want medical diagnosis then you have to accept world-destroying AGI, I think is nonsense."

