Insights · Report · Privacy · Mar 2026 · 11 min read

AI Privacy Reviews: A Practical Framework

How to assess privacy risks from AI systems, machine learning models, and generative AI workflows.

Lena Hofmann · Nadia Khoury
In this piece

01 What to assess
02 Building the review workflow

AI systems create new privacy risks that traditional privacy assessments were not designed to address. Training data, model inputs and outputs, automated decisioning, sensitive attributes, and AI vendors all require structured privacy review.

What to assess

  • Training data: where it came from, what personal data it contains, and what consent applies.
  • Model inputs and outputs: what personal data flows through the model at inference time.
  • Automated decisioning: whether decisions affect individuals and whether human review is required.
  • Sensitive attributes: whether the model processes or infers protected categories.
  • AI vendors: what personal data they receive, how long they retain it, and whether they use it to train their own models.
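The assessment dimensions above can be captured as a simple structured record. The sketch below is illustrative only: the field names and flagging rules are assumptions, not a standard schema, and a real program would tailor both to its own risk criteria.

```python
from dataclasses import dataclass

@dataclass
class AIPrivacyAssessment:
    """One privacy assessment record for an AI system (hypothetical schema)."""
    system_name: str
    training_data_sources: list[str]          # where the training data came from
    training_data_contains_pii: bool          # personal data in the training set?
    consent_basis: str                        # e.g. "consent", "contract"; "" if undocumented
    pii_in_inference_io: bool                 # personal data in model inputs/outputs
    automated_decisions_affect_people: bool   # decisions with individual impact?
    human_review_required: bool               # is a human in the loop?
    processes_sensitive_attributes: bool      # processes or infers protected categories?

    def open_issues(self) -> list[str]:
        """Return the dimensions that need deeper privacy review."""
        issues = []
        if self.training_data_contains_pii and not self.consent_basis:
            issues.append("training data: PII present without a documented legal basis")
        if self.pii_in_inference_io:
            issues.append("inference: personal data flows through the model")
        if self.automated_decisions_affect_people and not self.human_review_required:
            issues.append("decisioning: individual impact without human review")
        if self.processes_sensitive_attributes:
            issues.append("sensitive attributes: protected categories processed or inferred")
        return issues
```

A record like this makes each review comparable with the last one, and the `open_issues` list gives reviewers a concrete starting agenda rather than a blank page.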

Building the review workflow

AI privacy reviews should be structured, repeatable, and embedded into the AI development lifecycle — not bolted on as a compliance exercise after launch.
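One way to embed the review into the lifecycle is to make it a blocking step before deployment. The sketch below assumes a simple in-memory registry standing in for whatever system of record holds completed reviews; the function names and statuses are hypothetical.

```python
# Hypothetical deployment gate: a model cannot ship until an approved
# privacy review exists in the registry. The dict stands in for a real
# system of record.
REVIEW_REGISTRY: dict[str, str] = {}  # model name -> review status

def record_review(model: str, status: str) -> None:
    """Store the outcome of a privacy review ("approved" or "rejected")."""
    REVIEW_REGISTRY[model] = status

def deploy(model: str) -> str:
    """Refuse to deploy a model without an approved privacy review."""
    if REVIEW_REGISTRY.get(model) != "approved":
        raise RuntimeError(f"{model}: no approved privacy review on record")
    return f"{model} deployed"
```

Wiring a check like this into CI/CD turns the review from an after-the-fact compliance exercise into a standard release gate, the same way security scans or test suites gate a merge.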
