Opt-out required: Anthropic will use user data for training in future

Anthropic is changing its terms of use: chat transcripts and coding sessions will in future be used to train Claude AI models.

(Image: Anthropic)

At first glance it is a small change to the terms of use, but it has major implications: AI provider Anthropic wants to use chat transcripts and coding sessions from the consumer versions of Claude to train new AI models in future. Until now, Anthropic had set itself apart from its competitors by not doing so. However, users can object to the planned use of their data. The changes affect Claude Free, Pro and Max as well as the use of Claude Code.

Alongside the use of data for training, Anthropic is extending the data retention period from 30 days to five years for users who consent to the data use. The company justifies this drastic extension with the long development cycles of AI models: models released today began their development 18 to 24 months ago. A consistent data basis over longer periods is intended to lead to more stable and predictable model behavior.

The extended storage period only affects new or continued chat and coding sessions. Existing conversations remain unaffected unless the user continues them. If individual chats are deleted, they will not be included in future training cycles, according to Anthropic.


Commercial services such as Claude for Work, Claude Gov, Claude for Education and API usage via third-party providers such as Amazon Bedrock or Google Cloud's Vertex AI are expressly excluded from the new rules. These enterprise customers can continue to rely on their data not being used for training purposes.

Existing users have until September 28, 2025 to make their choice. The setting can be changed at any time in the privacy settings and then applies to new chats. New users are asked to decide during sign-up. Anthropic emphasizes that users retain control over this setting at all times.

To protect privacy, Anthropic says it uses a combination of tools and automated processes to filter out or obfuscate sensitive data. The company assures that it does not sell user data to third parties.

The decision reflects the intense competition in the AI market. Data from real interactions provides valuable insights into which answers are most helpful and accurate for users, Anthropic explains. This feedback is crucial for improving the models, especially for AI-assisted programming.

(mki)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.