Anthropic is making some major changes to how it handles user data, requiring all Claude users to decide by September 28 whether they want their conversations used to train AI models. While the company directed us to its blog post on the policy changes when asked what prompted the move, we've formed some theories of our own.
But first, what's changing: previously, Anthropic did not use consumer chat data for model training. Now the company wants to train its AI systems on user conversations and coding sessions, and it says it is extending data retention to five years for those who do not opt out.
That is a massive update. Previously, users of Anthropic's consumer products were told that their prompts and outputs would be automatically deleted from Anthropic's back end within 30 days, though flagged inputs and outputs could be retained for up to two years.
By consumer, we mean that the new policies apply to Claude Free, Pro, and Max users, including those who use Claude Code. Business customers using Claude Gov, Claude for Work, Claude for Education, or the API will be unaffected, which is similar to how OpenAI shields its enterprise customers from data training policies.
Why is this happening? In its post on the update, Anthropic frames the changes around user choice, saying that by not opting out, users will "help us improve model safety, making our systems for detecting harmful content more accurate and less likely to flag harmless conversations." Users will also "help future Claude models improve at skills like coding, analysis, and reasoning, ultimately leading to better models for all users."
In short, help us help you. But the full truth is probably a bit less selfless.
Like every other large language model company, Anthropic needs data more than it needs people to have warm feelings about its brand. Training AI models requires vast amounts of high-quality conversational data, and access to millions of Claude interactions should provide exactly the kind of real-world content that can improve Anthropic's competitive position against rivals like OpenAI and Google.
Beyond the competitive pressures of AI development, the changes also appear to reflect broader industry shifts in data policies, as companies like Anthropic and OpenAI face growing scrutiny over their data retention practices. OpenAI, for instance, is currently fighting a court order that forces the company to retain all ChatGPT conversations indefinitely, including deleted chats, because of a lawsuit filed by The New York Times and other publishers.
In June, OpenAI COO Brad Lightcap called this "a sweeping and unnecessary demand" that "fundamentally conflicts with the privacy commitments we have made to our users." The court order affects free ChatGPT users as well as Pro and Team users, though enterprise customers and those with zero data retention agreements remain protected.
What's alarming is how much confusion all of these shifting usage policies are creating for users, many of whom remain oblivious to them.
In fairness, everything is moving quickly right now, so as the technology changes, privacy policies are bound to change with it. But many of these changes are fairly sweeping and mentioned only in passing amid the companies' other news. (You wouldn't think Tuesday's policy changes for Anthropic users were very big news based on where the company placed this update on its press page.)

But many users don't realize that the guidelines they agreed to have changed, because the design practically guarantees it. Most ChatGPT users keep clicking "delete" toggles that aren't technically deleting anything. Meanwhile, Anthropic's implementation of its new policy follows a familiar pattern.
How so? New users will choose their preference during signup, but existing users face a pop-up with "Updates to Consumer Terms and Policies" in large text and a prominent black "Accept" button, with a much smaller toggle switch for training permissions below it in finer print, set to "On" by default.
As The Verge observed earlier today, the design raises concerns that users may quickly click "Accept" without noticing they are agreeing to share their data.
Meanwhile, the stakes for user awareness couldn't be higher. Privacy experts have long warned that the complexity surrounding AI makes meaningful user consent nearly unattainable. Under the Biden administration, the Federal Trade Commission even stepped in, warning that AI companies risk enforcement action if they engage in surreptitiously "changing the terms of service or privacy policy, or burying a disclosure behind hyperlinks, in legalese, or in fine print."
Whether the commission, which is now operating with just three of its five commissioners, is still monitoring these practices today is an open question, one we've put directly to the FTC.