Anthropic (Claude) vs OpenAI (ChatGPT)
Based on our analysis, Anthropic (Claude) is the more privacy-respecting choice overall.
| Category | Anthropic (Claude) | OpenAI (ChatGPT) |
|---|---|---|
| Overall | B · 72/100 | D · 42/100 |
| What they collect | Mixed (65) | Concern (38) |
| Who they share it with | Positive (78) | Mixed (48) |
| What you can do | Positive (78) | Concern (42) |
| What they promise | Positive (82) | Mixed (52) |
Anthropic collects identity and account data, all prompts and responses, and coding sessions. Consumer users can opt in to having their conversations used for model training, with that data retained for up to five years. API and commercial customers are unaffected: their data is never used for training. With training off, conversations are retained for 30 days for safety purposes and then deleted. Anthropic has no advertising business, and data is never sold. A dedicated Privacy Center is available at privacy.claude.com.
OpenAI collects account data, all prompts and responses, file uploads, voice inputs, and a separate Memory feature that persists even when you delete chats. Training on your conversations is enabled by default; you must opt out. A federal court order (May 2025) requires OpenAI to preserve and segregate ChatGPT conversation data, including deleted conversations. For API and Enterprise customers, training is off and their data is never used for training. OpenAI states it does not sell personal data or use it for targeted advertising.