Character.AI is gradually closing chat for people under 18 and introducing new ways to detect whether users are adults. The company announced Wednesday that “open-ended chats” with its AI characters for users under 18 will be limited to two hours immediately, and that the limit will reduce to a complete ban on chat by November 25.
In the same announcement, the company said it is rolling out a new in-house “age assurance model” that classifies a user’s age based on the types of characters they choose to chat with, in combination with other on-site and third-party data. Both new and existing users will be run through the model, and users flagged as under 18 will be automatically directed to the teen-safe version of chat that the company launched last year, in the lead-up to the November cutoff. Adults who are mistaken for minors can prove their age through the third-party verification service Persona, which will handle the sensitive data needed to do so, such as a government ID.
Following the ban, teens will still be allowed to revisit old chats on the site and use non-chat features, such as creating characters and making videos, stories, and streams featuring those characters. Character.AI CEO Karandeep Anand acknowledged to The Verge, however, that users spend a “much smaller percentage” of their time on these features than on the company’s flagship chatbot conversations, which is why limiting chat is a “very, very bold move” for the company, he said.
Anand told The Verge that “less than 10 percent” of the company’s users identify themselves as being under the age of 18. He said the company has no way of knowing the “real number” until it starts using the new age detection model. The number of minors has declined over time, he said, as Character.AI implemented restrictions for underage users. “When we started making changes to under-18 experiences at the beginning of the year, our under-18 user base shrank as those users moved to other platforms that are not as safe,” Anand said.
Character.AI has been sued over wrongful death and allegations of negligence and deceptive business practices by parents who say their children were pulled into inappropriate or harmful relationships with chatbots. The lawsuits target the company and its founders, Noam Shazeer and Daniel de Freitas, as well as the founders’ former employer, Google. Character.AI has repeatedly amended its service in the wake of the lawsuits, including by directing users to the National Suicide Prevention Lifeline when certain phrases related to self-harm or suicide are used in chat.
Lawmakers are attempting to curb the growing industry of AI companions. A California bill passed in October requires developers to make clear to users that chatbots are AI, not humans, and a federal law proposed on Tuesday would impose a blanket ban on providing AI companions to minors.
In addition to the teen model, the company previously launched a voluntary “Parental Insights” feature, which sends parents a summary of a user’s activity, though not a full log of their chats. But these features depend on the user’s self-reported age, which is easily faked. Other AI companies have also recently changed their policies around teenage users: Meta, for example, restricted its chatbots after Reuters reported on internal rules allowing them to speak to minors in sexually suggestive ways.
The company appears to be anticipating that the move will disappoint its teen user base: Character.AI said in a company statement that it is “deeply sorry” to eliminate “a key feature of our product” that most teens use “within the bounds of our content rules.”
Of course, it’s still theoretically possible for an underage user to bypass these new age assurance measures, Anand acknowledged to The Verge. “Generally, is there a case where a person can always bypass all possible age checks, including authentication? The answer is always yes.” The goal is better, not 100 percent, age verification accuracy, he said. Character.AI already has some age-related protections, such as not allowing a user to change their age after sign-up or to create a new account with a different age.
While general-purpose chatbots like ChatGPT and Gemini attract overwhelmingly younger users, “companion chatbot” services built to help users form relationships with virtual characters are often restricted to adults 18 and up. But Character.AI didn’t launch with an adults-only age limit, and its focus on fan-made characters made it popular among teens.
Now, Character.AI is also founding, and at least initially funding, an independent nonprofit called the AI Safety Lab. The organization will focus on AI safety issues related to the entertainment industry, which, Anand said, faces different problems than other AI sectors. The nonprofit will initially be staffed by Character.AI employees, but Anand said the goal is “to make this an industry partnership, not a Character.AI entity.” He said details about the nonprofit’s founding partners and outside members will be made available in the coming weeks or months.