“Agentic AI systems are being weaponized.”
That is one of the first lines of Anthropic’s new Threat Intelligence report, out today, which details a wide range of cases in which Claude, and likely many other leading AI agents and chatbots, is being abused.
First up: “vibe-hacking.” Anthropic says a sophisticated cybercrime ring recently used Claude Code, Anthropic’s AI coding agent, to extort data from at least 17 separate organizations around the world. The hacked parties included healthcare organizations, emergency services, religious institutions, and even government bodies.
“If you’re a sophisticated actor, what would have otherwise required maybe a team of sophisticated actors to conduct, like in the vibe-hacking case, a single individual can now conduct with the assistance of agentic systems,” Jacob Klein, head of Anthropic’s threat intelligence team, said in an interview with The Verge. He said that in this case, Claude was “executing the operation end-to-end.”
In such cases, Anthropic wrote in the report, AI “serves as both a technical consultant and active operator, enabling attacks that would be more difficult and time-consuming for individual actors to execute manually.” For instance, Claude was used specifically to write “psychologically targeted extortion demands.” The cybercriminals then worked out what the data (including healthcare data, financial information, government credentials, and more) would be worth on the dark web, and issued ransom demands exceeding $500,000, per Anthropic.
“It’s the most sophisticated use of agents I’ve seen … for cybercrime,” Klein said.
In another case study, Claude helped North Korean IT workers fraudulently obtain jobs at Fortune 500 companies in the US in order to fund the country’s weapons program. Usually in such cases, North Korea tries to leverage people who have been to college or have some ability to communicate in English, per Klein, but he said that in this case, there is now little stopping people in North Korea from passing technical interviews at large technology companies and then keeping their jobs.
With Claude’s help, Klein said, “We’re seeing people who don’t know how to write code, don’t know how to communicate professionally, know very little about the English language or culture, who are just asking Claude to do everything … and then once they’re employed, they’re actually doing the work with Claude.”
Another case study involved a romance scam. A Telegram bot with more than 10,000 monthly users advertised Claude as a “high EQ model” for generating emotionally intelligent messages for scams. It enabled non-native English speakers to write persuasive, flattering messages to gain the trust of victims in the United States, Japan, and Korea, and then ask them for money. One example in the report showed a user uploading an image of a man in a tie and asking how best to compliment him.
In the report, Anthropic itself admits that although the company has “developed sophisticated safety and security measures” to prevent the misuse of its AI, and although those measures are “generally effective,” bad actors still sometimes manage to find ways around them. AI has lowered the barriers to sophisticated cybercrime, Anthropic says, and bad actors are using the technology to profile victims, automate their operations, create false identities, analyze stolen data, steal credit card information, and more.
Each case study in the report adds to the growing body of evidence that AI companies, try as they might, often struggle to keep up with the societal risks that come with the technology they are building and putting out into the world. “While specific to Claude, the case studies below reflect consistent patterns of behavior across AI models,” the report states.
Anthropic said that for each case study, it banned the accounts involved, built new classifiers or other detection measures, and shared information with the appropriate government agencies, such as intelligence agencies or law enforcement, Klein confirmed. He also said the cases his team studied are part of a broader shift in AI risk.
“There’s this shift that’s occurring where AI systems are not just a chatbot, because they can take multiple steps now,” Klein said. “They’re actually able to conduct actions or activity like we’re seeing here.”