On the 17th day of his hunger strike, Guido Reichstadter said he was feeling fine: a little slow, but fine.
Each day since September 2, Reichstadter has appeared outside the San Francisco headquarters of the AI startup Anthropic, standing from 11 am to about 5 pm. His chalkboard sign read "Hunger Strike: Day 15," though he actually stopped eating on August 31. Artificial general intelligence, or AGI, is a loosely defined concept for an AI system that matches or exceeds human cognitive abilities.
AGI is a favorite rallying cry among tech CEOs, with big companies and startup leaders alike racing to be the first to reach the subjective milestone. For Reichstadter, this is an existential risk that these companies are not taking seriously. "Trying to create AGI, human-level or beyond systems, superintelligence: this is the goal of all these frontier companies," he told The Verge. "And I think it's crazy. It's risky. It's incredibly risky. And I think it should stop now." A hunger strike is the most visible way he sees to get the attention of AI leaders, and right now, he is not the only one.
Reichstadter pointed to a 2023 interview in which Anthropic CEO Dario Amodei discussed AI's risks as an example of the industry's negligence. "My chance that something goes really quite catastrophically wrong on the scale of human civilization might be somewhere between 10 and 25 percent," Amodei said. Amodei and others have concluded that the development of AGI is inevitable and say their goal is simply to be the most responsible steward of it possible, a stance Reichstadter calls "a myth" and "self-serving."
In Reichstadter's view, companies have a responsibility not to develop technology that will harm people on a massive scale, and anyone who understands the risk bears some responsibility as well.
"I'm trying to do what I see as fulfilling my responsibility as an ordinary citizen who cares for his own life and the good of my fellow citizens, my fellow countrymen," he said. "I've got two kids, too."
Anthropic did not immediately respond to a request for comment.
Every day, Reichstadter said, he waves to the security guards in Anthropic's office as he sets up, and he watches Anthropic staff avert their eyes as they walk past him. He said at least one employee has shared similar fears of catastrophe, and he encourages AI company employees to dare "to act as human beings and not as their company's tools," because they bear a deep responsibility as the people "developing the most dangerous technology on Earth."
His fears are shared by countless others in the AI safety world. It is a disparate community, one that disagrees over whether AI's threats are long-term or immediate, over which specific threats matter most, and over how best to stop them; even the term "AI safety" itself is contested. The one thing its members can agree on, however, is that AI's current path bodes ill for humanity.
Reichstadter said he first became aware of the possibility of "human-level" AI during his college years, about 25 years ago. For a long time afterward it seemed far off, but with the release of ChatGPT in 2022, he sat up and took notice. He says he is particularly worried about the technology's effects on his family and society.
"I'm worried about my society," he said. "I'm worried about my family, their future. I'm worried about what's happening with AI and the influence it's having. I worry that it's not being used ethically. And I also worry that there's a realistic basis to believe it carries horrific risks, even existential risks."
In recent months, Reichstadter has tried increasingly public ways to get the attention of tech leaders, which he believes is vital. He has previously worked with a group called Stop AI, which seeks a permanent ban on superintelligent AI systems "to prevent human extinction, mass job loss, and many other problems." In February, he and other members helped chain shut the doors of OpenAI's offices in San Francisco; some of them, including Reichstadter, were arrested for the blockade.
Reichstadter delivered a handwritten letter to Amodei on September 2 via Anthropic's security desk, and a few days later he posted it online. The letter asks that Amodei stop trying to develop a technology he cannot control, that he do everything in his power to stop the global AI race, and, if he is unwilling to do so, that he tell Reichstadter why not. "With the urgency and gravity of our situation in mind and my children in my heart, I have begun a hunger strike outside Anthropic's offices … while I await your response," Reichstadter wrote.
"I hope he has the basic decency to respond to that request," Reichstadter said. "I don't think any of them have really been challenged individually. It's one thing to anonymously, abstractly consider that the work you're doing could kill a lot of people. It's another to be face to face with one of your potential future victims and explain to them, as a human being, why."
Soon after Reichstadter began his peaceful protest, two others inspired by him started a similar protest in London, maintaining a presence outside Google DeepMind's office. And another joined him from India, fasting on livestream.
Michaël Trazzi took part in the London hunger strike for seven days before stopping after two near-fainting episodes and a doctor's consultation, but he is still supporting the other participant, Denys Sheremet, who is on day 10. Trazzi and Reichstadter share the same apprehensions about the future of humanity under AI's continued advance, though both are reluctant to define themselves as part of any particular community or group.
Trazzi said he has been thinking about AI's risks since 2017. He wrote a letter to DeepMind CEO Demis Hassabis and posted it publicly, as well as passing it along through an intermediary.
In the letter, Trazzi asked that Hassabis take the first step today toward coordinating a future halt in the development of superintelligence: publicly stating that DeepMind would agree to stop developing frontier AI models if all the other major companies in the West and China did the same. Once all the major companies agreed, an international agreement could follow.
Trazzi told The Verge, "If AI were not so dangerous, I don't think I would be super pro-regulation, but I think there are some things in the world where the default incentives point in the wrong direction. For AI, I think we need regulation."
Amanda Carl Pratt, director of communications at Google DeepMind, said in a statement: "AI is a fast-moving field and people will have different views on this technology. We believe in pursuing the science of AI capably and responsibly to improve the lives of billions of people."
In a post on X, Trazzi wrote that the hunger strike has prompted a lot of discussion with technical staff, claiming that a Meta employee asked him, "Why only Google people? We also do good work. We are also in the race."
He also wrote in the post that one DeepMind employee argued AI companies probably would not release models that could cause terrible harm because of the opportunity costs, while another, he said, "admitted that he believed human extinction from AI was more likely than not, but that he worked for DeepMind because it was still one of the most safety-focused companies."
Neither Reichstadter nor Trazzi has so far received a response to his letter from Hassabis or Amodei. (Google also declined to answer The Verge's question about why Hassabis has not responded to the letter.) They remain hopeful, however, that their actions will result in an acknowledgment, a meeting, or, ideally, a commitment from the CEOs to change their trajectory.
For Reichstadter's part: "We are in an uncontrolled race toward disaster, globally," he said. "If there's any way out, it's going to rely on people telling the truth and being willing to say, 'We are not in control.' Asking for help."