
As generative AI has spread in recent years, so has the potential for the technology to be misused and abused.
Tools such as ChatGPT can produce realistic text, images, video, and speech. The developers behind these systems promise productivity gains for businesses and a boost to human creativity, while many safety experts and policymakers worry about a looming surge of misinformation, among other dangers, that these systems enable.
Also: What AI pioneer Yoshua Bengio is doing next to make AI safe
OpenAI, one of the leaders in the ongoing AI race, regularly publishes reports highlighting how its AI systems are being used by bad actors. "AI investigations are an evolving discipline," the company wrote in the latest version of its report, released Thursday. "Every operation we disrupt gives us a better understanding of how threat actors are trying to abuse our models, and enables us to refine our defenses."
(Disclosure: ZDNET's parent company, Ziff Davis, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
The new report outlines 10 examples of misuse from the past year, four of which appear to originate in China.
What the report found
In each of the 10 cases outlined in the new report, OpenAI explained how it detected and addressed the problem.
One case with likely Chinese origins, for example, found ChatGPT accounts generating social media posts in English, Chinese, and Urdu. A "main account" would publish a post, then other accounts would follow with comments, all designed to create an illusion of authentic human engagement and attract attention to politically charged topics.
According to the report, those topics, which included Taiwan and USAID, are "all closely aligned with China's geostrategic interests."
Also: AI bots scraping your data? This free tool gives those pesky crawlers the run-around
Another example of misuse, which OpenAI said was directly linked to China, involved using ChatGPT for malicious cyber activity, such as "bruteforcing" passwords (trying large numbers of AI-generated passwords in an attempt to break into accounts) and researching publicly available records on the US military and defense industry.
China's Foreign Ministry has denied any involvement with the activities outlined in OpenAI's report, according to Reuters.
Other threatening uses of AI outlined in the new report were reportedly linked to actors in Russia, Iran, Cambodia, and elsewhere.
Cat and mouse
Text-generating models such as ChatGPT are likely just the beginning of AI-driven misinformation.
Text-to-video models like Google's Veo 3 can generate realistic video from natural language prompts. Text-to-speech models, meanwhile, such as ElevenLabs' new v3, can just as easily generate humanlike voices.
Also: Text-to-speech with feeling - this new AI model does everything but shed a tear
Although developers typically apply guardrails of some kind before deploying their models, bad actors, as OpenAI's new report makes clear, are becoming more creative in their misuse and abuse. The two sides are locked in a game of cat and mouse, especially since, in the US, there are currently no robust federal oversight policies in place.
Want more stories about AI? Sign up for Innovation, our weekly newsletter.