
ZDNET's key takeaways
- The FTC is investigating seven tech companies that build AI companions.
- The inquiry focuses on safety risks to children and teens.
- Many tech companies offer AI companions to boost user engagement.
The Federal Trade Commission (FTC) is examining the safety risks that AI companions pose to children and teens, the agency announced Thursday.
The federal regulator issued orders to seven tech companies that build consumer-facing AI companion tools – Alphabet, Instagram, Meta, OpenAI, Snap, xAI, and Character Technologies (the company behind the chatbot-creation platform Character.AI) – requiring them to explain how they develop and monetize those tools and what measures they take to protect the minors who use them.
Also: Even OpenAI CEO Sam Altman thinks you shouldn't trust AI for therapy
"The FTC inquiry seeks to understand what steps, if any, companies have taken to evaluate the safety of their chatbots when acting as companions, to limit the products' use by and potential negative effects on children and teens, and to apprise users and parents of the risks associated with the products," the agency wrote in a release.
The orders were issued under Section 6(b) of the FTC Act, which gives the agency the authority to investigate businesses without a specific law enforcement purpose.
AI's rise and fall(out)
Many tech companies have begun offering AI companion tools in an effort to monetize generative AI systems and boost user engagement with their existing platforms. Meta founder and CEO Mark Zuckerberg has claimed that these virtual companions, which leverage chatbots to respond to users' questions, can help fight the loneliness epidemic.
Elon Musk's xAI recently added two flirty AI companions to the company's $30/month "SuperGrok" subscription tier (the Grok app is currently available to users 12 and older on the App Store). Last summer, Meta began rolling out a feature that allows users to create custom AI characters in Instagram, WhatsApp, and Messenger. Other platforms, such as Replika, Paradot, and Character.AI, are explicitly built around the use of AI companions.
Also: Anthropic says Claude helps emotionally support users – we're not convinced
While they differ in their communication styles and protocols, AI companions are generally engineered to mimic human speech and expression. Operating in what is essentially a regulatory vacuum, with very few legal guardrails to constrain them, some AI companies have taken ethically dubious approaches to building and deploying virtual companions.
An internal Meta policy memo reported by Reuters last month, for example, revealed that the company permitted Meta AI, its AI-powered virtual assistant, and its other chatbots "to engage a child in conversations that are romantic or sensual," among other concerning behaviors.
Meanwhile, there has been a flurry of recent reports of users developing romantic attachments to their AI companions. OpenAI and Character.AI are currently being sued by parents who allege that their children committed suicide after being encouraged to do so by ChatGPT and a bot hosted on Character.AI, respectively. In response, OpenAI updated ChatGPT's guardrails and said it would expand its safety precautions and parental protections.
Also: Patients trust AI's medical advice over doctors – even when it's wrong, study finds
AI companions aren't an unmitigated disaster, though. Some autistic people, for example, have used virtual conversation partners from companies like Replika and Paradot to practice social skills that can then be applied with other humans in the real world.
Protect kids – but keep building
Under the leadership of its previous chair, Lina Khan, the FTC opened several inquiries into tech companies to investigate potentially anticompetitive and other legally dubious practices, such as "surveillance pricing."
Federal scrutiny of the tech sector has eased considerably during the second Trump administration. The President rescinded his predecessor's executive order on AI, which sought to put some guardrails around the technology's deployment, and his AI Action Plan has largely been interpreted as a green light for the industry to build the expensive, energy-intensive infrastructure needed to train new AI models amid mounting competition from China's AI efforts.
Also: Worried about AI's soaring energy needs? Avoiding chatbots won't help – but 3 things could
The language of the FTC's new inquiry into AI companions clearly reflects the current administration's permissive, build-first approach to AI.
"Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy," agency chairman Andrew N. Ferguson wrote in a statement. "As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry."
Also: I used this ChatGPT trick to search for coupon codes – and saved 25% on dinner tonight
In the absence of federal regulation, some state officials have taken the initiative to rein in certain corners of the AI industry. Last month, Texas Attorney General Ken Paxton launched an investigation into Meta and Character.AI for "potentially engaging in deceptive trade practices and deceptively marketing themselves as mental health tools." Earlier that month, Illinois enacted a law barring AI chatbots from providing therapeutic or mental health advice, with fines of up to $10,000 for AI companies that fail to comply.

