Microsoft vs Anthropic (Claude)
Based on our analysis, Anthropic (Claude) is the more privacy-respecting choice overall.
| Category | Microsoft | Anthropic (Claude) |
|---|---|---|
| Overall | C- · 44/100 | B · 72/100 |
| What they collect | Concern (35) | Mixed (65) |
| Who they share it with | Concern (40) | Positive (78) |
| What you can do | Mixed (58) | Positive (78) |
| What they promise | Mixed (52) | Positive (82) |
Microsoft's privacy statement covers an enormous product surface — Windows, Office, Azure, Bing, Xbox, and Copilot — and the data practices vary dramatically across them. The umbrella policy is deliberately vague, deferring almost all specifics to product-level documentation. Cross-product data combination, AI model training on your content, and employer/school access to your files and communications are the key risks most consumers don't realise they're accepting.
Anthropic collects identity and account data, all prompts and responses, and coding sessions. Consumer users can opt in to having their conversations used for model training, with opted-in data retained for up to five years. API and commercial customers are unaffected: their data is never used for training. With training turned off, conversations are retained for 30 days for safety purposes and then deleted. Anthropic runs no advertising business and never sells data, and it maintains a dedicated Privacy Center at privacy.claude.com.