Anthropic is making some major changes to how it handles user data: all Claude users must decide by September 28 whether they want their conversations used to train the company's AI models. While the company pointed us to its blog post on the policy changes when we asked about the move, we have formed some theories of our own.
First, what is changing: previously, Anthropic did not use consumer chat data for model training. Now the company wants to train its AI systems on user conversations and coding sessions, and it says it is extending data retention to five years for those who do not opt out.
That is a massive update. Previously, users of Anthropic's consumer products were told that their prompts and conversation outputs would be automatically deleted from Anthropic's back end within 30 days "unless legally or policy-required to keep them longer," or unless their input was flagged as violating its policies, in which case it might be retained for up to two years.
By consumer, we mean the new policies apply to Claude Free, Pro, and Max users, including those using Claude Code. Business customers using Claude Gov, Claude for Work, Claude for Education, or API access will be unaffected, which mirrors how OpenAI shields its enterprise customers from data training policies.
So why is this happening? In its post about the update, Anthropic frames the changes around user choice, saying that by not opting out, users "will help us improve model safety, making our systems for detecting harmful content more accurate," and that users will also help future Claude models improve at skills like coding, analysis, and reasoning, "ultimately leading to better models for all users."
In short, help us help you. But the full truth is probably a little less selfless.
Like every other large language model company, Anthropic needs data more than it needs people to have fuzzy feelings about its brand. Training AI models requires vast amounts of high-quality conversational data, and access to millions of Claude interactions should provide exactly the kind of real-world content that can improve Anthropic's competitive position against rivals like OpenAI and Google.
Beyond the competitive pressures of AI development, the changes also seem to reflect broader industry shifts in data policies, as companies like Anthropic and OpenAI face increasing scrutiny over their data retention practices. OpenAI, for example, is currently fighting a court order that forces the company to retain all consumer ChatGPT conversations indefinitely, including deleted chats, as a result of a lawsuit filed by The New York Times and other publishers.
In June, OpenAI COO Brad Lightcap called it "a sweeping and unnecessary demand" that "fundamentally conflicts with the privacy commitments we have made to our users." The court order affects ChatGPT Free, Plus, Pro, and Team users, though enterprise customers and those with Zero Data Retention agreements are still protected.
What is alarming is how much confusion all of these changing usage policies are creating for users, many of whom remain unaware of them.
In fairness, everything is moving quickly right now, so as the technology changes, privacy policies are bound to change as well. But many of these changes are fairly sweeping and mentioned only fleetingly amid the companies' other news. (You would not think Tuesday's policy changes were very big news for Anthropic users based on where the company placed the update on its press page.)

But many users do not realize the guidelines they agreed to have changed, because the design practically guarantees it. Most ChatGPT users keep clicking "delete" toggles that are not technically deleting anything. Meanwhile, Anthropic's implementation of its new policy follows a familiar pattern.
How? New users will choose their preference during signup, but existing users face a pop-up with "Updates to Consumer Terms and Policies" in large text, a prominent black "Accept" button, and a much smaller toggle for training permissions below it in finer print, automatically set to "On."
As The Verge observed earlier today, the design raises concerns that users might quickly click "Accept" without noticing they are agreeing to share their data.
Meanwhile, the stakes for user awareness could hardly be higher. Privacy experts have long warned that the complexity surrounding AI makes meaningful user consent nearly unattainable. Under the Biden administration, the Federal Trade Commission even stepped in, warning that AI companies risk enforcement action if they engage in "surreptitiously changing its terms of service or privacy policy, or burying a disclosure behind hyperlinks, in legalese, or in fine print."
Whether the commission, now operating with just three of its five commissioners, still has its eye on these practices today is an open question, one we have put directly to the FTC.