
AI is a term I hear several times a day, and usually only about 30% of the time is it being used for the real thing. LLMs like ChatGPT and DeepSeek are constantly in the news, while AI is being pushed into everything, including our gaming chips. It would be easy to dismiss it all as a passing craze, much as uranium fever gripped the public alongside nuclear anxiety in decades past.
Comparing the A-bomb and AI may seem hyperbolic, but leading AI experts have called for safety calculations akin to those carried out before the Trinity test, the first detonation of an atomic weapon.
Max Tegmark, a professor of physics and AI researcher at MIT, together with three of his students, has published a paper recommending just such an approach. In it, they call for calculating whether any advanced AI could escape human control. The exercise is compared to the one Arthur Compton performed to estimate the chance of an atomic bomb igniting the atmosphere before the Trinity test took place.
In that case, Compton approved the Trinity test after calculating the odds of such a runaway explosion to be slightly less than one in three million. Running a similar calculation, Tegmark has found a 90% probability that a highly advanced AI would pose a threat to humanity, a problem of a rather different order than a Windows bug. This level of theoretical AI is currently described as artificial superintelligence, or ASI.
These calculations have convinced Tegmark that safety work needs to be put in place, and that companies have a responsibility to assess these potential threats. He also believes that a standardized approach, agreed upon and calculated by multiple companies, is needed to create the political pressure for companies to comply.
"Companies building super-intelligence also need to calculate the Compton constant, the probability that we will lose control over it," he said. "It's not enough to say 'we feel good about it'. They have to calculate the percentage."
This is not Tegmark's first push for more rules and more careful thought around creating new AIs. He is also a co-founder of the Future of Life Institute, a non-profit working toward the safe development of AI. The institute published an open letter in 2023 calling for a pause on the development of powerful AIs, which drew attention and signatures from the likes of Elon Musk and Steve Wozniak.
Tegmark also worked on the Singapore Consensus on Global AI Safety Research Priorities report alongside world-leading computer scientist Yoshua Bengio, as well as researchers from OpenAI and Google DeepMind. So it seems that if we ever do unleash an ASI on the world, we will at least know the exact percentage chance of it ending us all.

