Anthropic's Privacy Policy Update: User Conversations Now Default to AI Training

Anthropic's updated privacy policy now defaults to using user chats for AI training; users must opt out to keep their conversations private.

Key details

  • User conversations are used for AI training by default
  • Users must opt out to keep their data excluded
  • Data retention extended from 30 days to five years
  • Commercial, government, and educational accounts are exempt

Anthropic has made significant changes to its privacy policy, allowing user conversations with its AI model Claude to be used for training by default. The change affects both free and paid users: any interactions from October 8, 2025 onward will be used for training unless users opt out. Previously, Anthropic did not use chat data for training, so this shift marks a notable reversal, one aimed at improving the performance of its AI models.

Under the new policy, users must take proactive steps to keep their data out of training datasets. To opt out, users navigate to the Privacy Settings in the Claude interface and disable the corresponding toggle. The policy also extends the retention period for user conversations from 30 days to five years.

In addition, all conversations, including programming-assistance sessions, will be incorporated into training unless users explicitly opt out. Commercial, government, and educational accounts are exempt from the new policy, so it primarily affects general consumers and individual users. As the policy takes effect, users are urged to review their settings to safeguard their privacy.