
The debate over the risks and harms of artificial intelligence often focuses on what governments can or should do. However, just as important are the choices that AI researchers themselves make.
This week in Singapore, more than 100 scientists from around the world proposed guidelines for how researchers should approach making AI more “trustworthy, reliable, and secure.”
Also: Secretive AI companies could crush free society, researchers warn
The recommendations come at a time when generative AI giants such as OpenAI and Google have increasingly cut back on disclosures about their AI models, so the public knows less and less about how the work is conducted.
The guidelines grew out of an exchange among scholars last month in Singapore, held in conjunction with one of the most prestigious conferences on AI, the International Conference on Learning Representations, the first time a major AI conference has been held in Asia.
The document, “The Singapore Consensus on Global AI Safety Research Priorities,” was posted on the website of the Singapore Conference on AI, a second AI conference being held in Singapore this week.
Among the scholars who helped draft the Singapore Consensus are Yoshua Bengio, founder of Canada’s AI institute Mila; Stuart Russell, professor of computer science at UC Berkeley and an expert on “human-centered AI”; Max Tegmark, head of the think tank the Future of Life Institute; and representatives of the Massachusetts Institute of Technology, Google’s DeepMind unit, Microsoft, the National University of Singapore, Tsinghua University, and the Chinese Academy of Sciences.
Making the case for why research should have guidelines, Josephine Teo, Singapore’s Minister for Digital Development and Information, said in presenting the work that the public cannot vote for the kind of AI it wants.
“In a democracy, a general election is a way for citizens to choose the party that forms the government and makes decisions on their behalf,” Teo said. “But in AI development, citizens do not get to make a similar choice. However much we talk about the democratization of technology, citizens will be on the receiving end of AI’s opportunities and challenges, with little say over who shapes its trajectory.”
Also: Google’s Gemini continues the dangerous obfuscation of AI technology
The paper lays out three categories that researchers should address: how to identify risks, how to build AI systems in ways that avoid those risks, and how to maintain control over AI systems, meaning ways to monitor and intervene when there are concerns about those systems.
“Our goal is to enable more impactful R&D efforts to rapidly develop safety and evaluation mechanisms and foster a trusted ecosystem where AI is harnessed for the public good,” the authors write in the preamble to the report. “The motivation is clear: no organization or country benefits when AI incidents occur or when malicious actors are enabled, as the resulting harm would hurt everyone.”
On the first score, assessing potential risks, the scholars recommend developing “metrology,” the measurement of potential harms. They write of the need for quantitative risk assessments tailored to AI systems, in order to reduce uncertainty and the consequent need for large safety margins.
Balanced against the protection of corporate IP, assessing AI risk requires allowing outside parties to monitor AI research and development. That includes “developing secure infrastructure that enables thorough evaluation while protecting intellectual property, including preventing model theft.”
Also: Stuart Russell: Will we choose the right objective for AI before it destroys us all?
The development section concerns how to make AI trustworthy, reliable, and secure “by design.” To do that, “technical methods” need to be developed that can specify what is intended of an AI program and also delineate what should not be there, the “unwanted side effects,” the scholars write.
The actual training of neural networks then needs to be advanced so that the resulting AI programs are “guaranteed to meet their specifications,” they write. That includes areas of training that focus, for example, on “increasing robustness against tampering,” such as jailbreaking an LLM with malicious prompts, and on “reducing confabulation” (often known as hallucination).
Finally, the control section of the paper covers both how to extend existing computer security measures and how to develop new techniques to avoid runaway AI. For example, conventional computer controls, such as off-switches and override protocols, need to be extended to handle AI programs. Scientists also need to design “new techniques to control very powerful AI systems that may actively undermine attempts to control them.”
The paper is ambitious, which is appropriate given rising concern about the risks of AI as it is connected to more and more computer systems, such as agentic AI.
Also: Multimodal AI poses new safety risks, creates CSEM and weapons info
As the scientists acknowledge in the introduction, safety research won’t be able to keep pace with AI’s rapid advance unless more investment is made.
“Given that the state of the science today for building trustworthy AI does not fully cover all risks, accelerated investment in research is required to keep pace with commercially driven growth in system capabilities,” the authors write.
Writing in Time magazine, Bengio echoed the concerns about runaway AI systems. “Recent scientific evidence also demonstrates that, as highly capable systems become increasingly autonomous AI agents, they tend to display goals that were not explicitly programmed and that are not necessarily aligned with human interests,” Bengio writes.
“I am genuinely unsettled by the unrestrained AI behaviors already being demonstrated, in particular self-preservation and deception.”
Want more stories about AI? Sign up for Innovation, our weekly newsletter.