
Does Your AI Assistant Keep Your Conversations Private?

7 April 2026

ChatGPT trains on your conversations by default. Claude's default depends on which product you use. Here's what the privacy policies actually say — and what to change.

You share things with AI assistants you might not share anywhere else — medical questions you're embarrassed to ask your doctor, work problems you can't discuss with colleagues, personal decisions you're still thinking through. The question is: where does that go?

The short answer is that it depends on which product you're using and which settings you've changed. The long answer is that the defaults matter enormously, and most people have never looked at them.

We've done full analyses of both ChatGPT's privacy policy and Anthropic Claude's privacy policy. Here's what we found.

The default that most people don't know about

If you're using ChatGPT on a Free, Plus, or Pro plan and you've never changed your settings, OpenAI is training its next models on your conversations. This is on by default. You have to actively go to Settings → Data Controls → Improve the model for everyone and turn it off.

Claude's position is different — and has changed. As of Anthropic's updated consumer terms (August–September 2025), Anthropic asks new consumer users to make a choice about training. But some reports have suggested the default for existing accounts may be training on. Check your settings at Privacy Settings → Claude's memory & training data.

The practical difference matters: if training is on, your conversations can be used to improve future versions of the model. Anthropic retains that data for up to five years in de-identified form. OpenAI's retention varies by product tier.

The ChatGPT court order problem

There's a complication most ChatGPT users don't know about. A federal court order from May 2025 requires OpenAI to preserve and segregate all ChatGPT conversation data — including conversations users have already deleted.

When you press "delete" on a ChatGPT conversation, it disappears from your screen. It may not disappear from OpenAI's servers. This applies to historical conversations from roughly April to September 2025 and potentially beyond, depending on how the litigation develops.

This is not OpenAI's privacy policy failing — it's a legal process overriding it. But it's something users should know when deciding what to share.

ChatGPT Memory: the feature people forget to check

ChatGPT has a separate feature called Memory that maintains a record of things you've told it across conversations. If you tell ChatGPT you have diabetes, it will remember that in future chats. If you tell it your spouse's name, it keeps that too.

Memory persists even when you delete individual conversations. You have to manage it separately: Settings → Personalization → Memory → Manage. Many users don't know it exists.

Which product you use changes everything

Both OpenAI and Anthropic offer stronger privacy protections through their APIs and enterprise products than through their consumer apps. The grade gap is significant:

| Product | Training on your data? | Retention | Grade |
| --- | --- | --- | --- |
| ChatGPT (defaults) | Yes, unless you opt out | 30 days (subject to court order) | D+ |
| ChatGPT (training off) | No | 30 days (subject to court order) | B- |
| ChatGPT Temporary Chat | No | 30 days (subject to court order) | B- |
| OpenAI API (ZDR) | Never | Zero retention | A- |
| Claude consumer (training on) | Yes | Up to 5 years (de-identified) | C+ |
| Claude consumer (training off) | No | 30 days, then deleted | B |
| Claude API | Never | 7 days | A- |

What neither company does

It's worth stating clearly what both OpenAI and Anthropic share: neither sells your data to advertisers. Neither has an advertising business. Revenue comes from subscriptions and API access. This is a meaningful structural difference from companies like Google or Meta, where advertising is the core business model and data collection serves that model directly.

Both also limit employee access to conversations. Anthropic states that employees cannot access conversations by default — only the Trust & Safety team, on a need-to-know basis. OpenAI's terms are similar, with access limited to safety review and flagged content.

What to actually do

If you use ChatGPT, the single most impactful change is turning off "Improve the model for everyone" in Settings → Data Controls. Do it now — it takes 30 seconds and alters the basis on which your conversations are retained.

Also check your Memory: Settings → Personalization → Memory → Manage. Review what's there and delete anything sensitive.

For sensitive conversations — medical, legal, financial — consider using Temporary Chat mode. It's not used for training and doesn't persist in your history. Be aware that the court order may still affect how "deleted" really works in the short term.

If you use Claude, check your Privacy Settings to confirm your training preference. If you're a developer, the Claude API offers the strongest protections of any product compared here: 7-day retention by default, never used for training, and Zero Data Retention available for enterprise customers.

For the full policy breakdowns, see our ChatGPT privacy analysis (grade: D at defaults, B- optimised) and our Anthropic Claude privacy analysis (grade: B for consumer, A- for API). You can also compare ChatGPT and Claude side by side.
