About
The story behind Privacy Decoded
Privacy Decoded exists because almost nobody reads privacy policies, even though they're legally binding. We use AI to translate dense legal language into plain English and give every policy a transparent score you can actually trust.
Our mission
98% of people never read privacy policies. They're long, intentionally vague, and written by lawyers for lawyers. But every time you click "I agree", you're giving companies very real permissions over your data.
Privacy Decoded turns those documents into something you can scan in under a minute: a single letter grade, a numeric score out of 100, and a structured breakdown of what's good, what's risky, and where you have real control.
How it works
We take thousands of words of legal fine print and turn them into a simple grade, clear red flags, and a plain-English summary you can act on — in under a minute.
The process
Paste policy text
Copy the privacy policy from the app or website and paste it into the analyser. Signing in is required to run an analysis.
We parse and analyse
The text is cleaned and split into sections — what they collect, who they share with, your controls, and what they promise. AI scores each part for invasiveness, clarity, and how much say you actually have.
Get your grade and breakdown
You get a single letter grade (A–F), a score out of 100, and a plain-English breakdown of what's good, bad, or unclear. No legalese.
See alternatives
When there are more privacy-respecting options, we highlight them so you can switch with confidence.
How we grade policies
Curated company analyses on Privacy Decoded use one overall score out of 100 (higher is better) and a letter from A to F. The number is what sorts the leaderboard; the letter should always match the score using the table below so grades stay comparable across companies.
Under that overall score are four pillars, each with its own 0–100 score and a set of findings tagged good, neutral, or bad. The overall score is a holistic read of those pillars — not always a strict weighted average — but new and updated reports follow the same letter bands.
Letter grades and scores
| Grade | Score range | In plain terms |
|---|---|---|
| A | 85–100 | Strong defaults, narrow collection, meaningful control, and clear commitments. |
| B+ | 78–84 | Generally respectful practices with some gaps or product-specific caveats. |
| B | 71–77 | Acceptable for many users, but notable risks or vague areas remain. |
| B- | 65–70 | Below average — several concerns or uneven protections across features. |
| C+ | 58–64 | Mediocre: meaningful positives mixed with invasive collection, sharing, or weak assurances. |
| C | 51–57 | Poor defaults or broad data use; controls exist but are limited or hard to use. |
| C- | 44–50 | Significant issues across multiple pillars; hard to recommend without caveats. |
| D | 25–43 | Highly invasive, opaque, or user-hostile patterns dominate the policy. |
| F | 0–24 | Among the worst we cover: extreme collection/sharing, or very weak control and accountability. |
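The band table above maps directly to a simple lookup. As a minimal sketch of what a file like src/lib/grading-guide.ts might contain (the names `Grade`, `BANDS`, and `scoreToGrade` are illustrative, not the actual exports), it finds the highest band whose lower bound the score clears:

```typescript
// Illustrative sketch of the score-to-letter lookup; the real
// src/lib/grading-guide.ts may use different names and shapes.
type Grade = "A" | "B+" | "B" | "B-" | "C+" | "C" | "C-" | "D" | "F";

interface Band {
  grade: Grade;
  min: number; // inclusive lower bound of the score range
}

// Bands from the table above, highest first.
const BANDS: Band[] = [
  { grade: "A", min: 85 },
  { grade: "B+", min: 78 },
  { grade: "B", min: 71 },
  { grade: "B-", min: 65 },
  { grade: "C+", min: 58 },
  { grade: "C", min: 51 },
  { grade: "C-", min: 44 },
  { grade: "D", min: 25 },
  { grade: "F", min: 0 },
];

function scoreToGrade(score: number): Grade {
  // First band whose lower bound the score meets or exceeds.
  const band = BANDS.find((b) => score >= b.min);
  if (!band) throw new RangeError(`score out of range: ${score}`);
  return band.grade;
}
```

Because the letter is derived from the number, a score of 84 always reads B+ and 85 always reads A, which is what keeps grades comparable across companies.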
The four pillars
| Pillar | What we look for |
|---|---|
| Collection | What data is gathered (on- and off-platform), how invasive it is, whether inference or sensitive categories appear, and whether collection is limited to what the service reasonably needs. |
| Sharing | Who receives data (affiliates, advertisers, vendors, governments), sale or sale-like flows, cross-product combination, and how much users can see or predict about downstream use. |
| Control | Practical rights and settings: access, deletion, portability, opt-outs for ads and non-essential processing, defaults, and how discoverable those controls are. |
| Promises | Security posture, retention specificity, international transfers, AI training and legitimate-interest claims, enforcement / transparency reporting, and whether the policy is concrete or mostly boilerplate. |
The live analyser walks through the same themes (what they collect, who gets it, your controls, and what they commit to). Its section labels may differ slightly from the four pillar names above, but the intent matches.
We publish the bands in code (src/lib/grading-guide.ts) so they stay in sync with this page. If we change methodology, we update that file, this section, and any affected reports together.
Who we are
Privacy Decoded is an independent project built in Australia. There are no ads, no sponsored grades, and no dark patterns. Our analyses are always editorially independent — affiliate relationships never influence grades or findings. Where we recommend privacy-respecting alternatives, we may earn a small commission if you choose to use them.
The goal is simple: make it possible for anyone to understand, in under a minute, what they're really agreeing to — and to highlight more privacy-respecting alternatives when they exist.
Want to suggest a company to analyse next? Visit the leaderboard, or keep an eye on the homepage for new additions.