AI Assistant Privacy Grades
2 companies analysed · Sorted by privacy score
AI assistants present a unique privacy challenge: you often share sensitive information with them — work problems, personal questions, medical concerns — without thinking about where that data goes. The key questions are whether the company uses your conversations to train its models, how long prompts are retained, and whether human reviewers can read what you've typed. This is a fast-moving category; we'll add more AI tools as their market share grows.
| # | Company | Grade | Score | In plain English |
|---|---------|-------|-------|------------------|
| 1 | Anthropic | B | 72/100 | Anthropic collects identity and account data, all prompts and responses, and coding sessions. Consumer users can opt in to having conversations used for model training, with data retained for up to 5 years. API and commercial customers are unaffected: their data is never used for training. With training off, prompts are retained for 30 days for safety purposes, then deleted. No advertising business; data is never sold. Dedicated Privacy Center at privacy.claude.com. |
| 2 | OpenAI | D | 42/100 | OpenAI collects account data, all prompts and responses, file uploads, voice inputs, and a separate Memory that persists even when you delete chats. Training on your conversations is on by default; you must opt out. A federal court order (May 2025) requires OpenAI to preserve and segregate ChatGPT conversation data, including deleted conversations. For API and Enterprise customers, training is off and data is never used for training. OpenAI states it does not sell personal data or use it for targeted advertising. |
How we grade: Each company is scored 0–100 across four pillars: data collection, third-party sharing, user controls, and policy promises. The overall grade maps to the score band. Read the full methodology for details.
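The score-to-grade mapping described above can be sketched as a simple band lookup. The cutoff values below are illustrative assumptions (the page only states that grades map to score bands); they are chosen to be consistent with the two published scores, 72/100 → B and 42/100 → D. The real thresholds are defined in the full methodology.

```python
def grade(score: int) -> str:
    """Map a 0-100 privacy score to a letter-grade band.

    Band cutoffs are hypothetical, not taken from the methodology:
    A >= 90, B >= 70, C >= 55, D >= 40, else F.
    """
    bands = [(90, "A"), (70, "B"), (55, "C"), (40, "D")]
    for cutoff, letter in bands:
        if score >= cutoff:
            return letter
    return "F"

# Consistent with the table: 72 falls in the B band, 42 in the D band.
print(grade(72))  # B
print(grade(42))  # D
```

Note this only reproduces the final banding step; the four pillar scores and their weighting, which produce the 0–100 total, are not published on this page.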