# How assessments work
This feature requires Fides Cloud or Fides Enterprise. For more information, talk to our solutions team.
This page explains the core concepts behind Fides privacy assessments: how the AI pre-fill works, what answer statuses mean, how versioning protects your audit trail, and how risk detection works.
## AI pre-fill
When you generate an assessment, Fides runs a background job that:
- Reads your system's data use declarations, data categories, datasets, legal bases, and processing activities from the Fides data map.
- Builds a structured privacy context document from that data.
- Sends the context and each assessment question to an AI model to generate answers.
- Parses the response into individual question answers.
- Stores each answer together with evidence: the specific Fides data points the AI cited when forming its response.
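The steps above can be sketched as a small pipeline. This is an illustrative sketch only, not the actual Fides implementation or API: the function and field names (`build_context`, `prefill`, `ask_model`, `privacy_declarations`) are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Answer:
    question_id: str
    text: str
    evidence: list  # the Fides records the model cited for this answer

def build_context(system: dict) -> str:
    """Flatten the system's data map metadata into a structured prompt context."""
    parts = [f"System: {system['name']}"]
    for decl in system.get("privacy_declarations", []):
        parts.append(
            f"- Use: {decl['data_use']}; "
            f"categories: {', '.join(decl['data_categories'])}"
        )
    return "\n".join(parts)

def prefill(system: dict, questions: list, ask_model) -> list:
    """Generate one Answer per question, keeping the model's citations as evidence."""
    context = build_context(system)
    answers = []
    for q in questions:
        text, cited = ask_model(context, q["text"])  # model returns (answer, citations)
        answers.append(Answer(q["id"], text, cited))
    return answers
```

Note that the model only ever sees the flattened metadata string, which mirrors the point below: it never touches raw production data.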
The AI model does not have access to your raw production databases. It works entirely from the structured metadata you've already entered into Fides. The quality of AI-generated answers therefore depends directly on the completeness of your system inventory.
## Evidence
Every AI-generated answer includes citations that link back to the exact Fides records the model used. Evidence items appear in the evidence drawer for each question group, organized by type:
- System-derived data: fields from the system record (name, description, department, legal basis)
- Data use declaration: the declaration name, data use, and data categories
- Data use: the description of the processing purpose
- Dataset: field-level metadata from connected datasets
Evidence provides the audit trail regulators expect: you can show that each answer is grounded in your verified data inventory, not conjecture.
## Answer statuses
Each question in an assessment has one of three statuses:
| Status | Meaning |
|---|---|
| Complete | The AI found sufficient Fides data to produce a full answer, or a human has provided one. |
| System derivable | The AI produced a partial answer. The relevant Fides field exists but was not fully populated. |
| Needs input | The AI could not generate an answer. No relevant data was found in the system inventory. |
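A minimal sketch of how the three statuses could be assigned, assuming the model reports whether its answer was complete or only partial (the function name `classify` and its parameters are hypothetical, not Fides internals):

```python
def classify(ai_answer, partial):
    """Map an AI pre-fill result to one of the three answer statuses."""
    if ai_answer and not partial:
        return "Complete"          # sufficient Fides data for a full answer
    if ai_answer and partial:
        return "System derivable"  # field exists but was not fully populated
    return "Needs input"           # no relevant data in the system inventory
```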
"Needs input" answers are the primary candidates for the Slack questionnaire workflow, which routes unanswered questions to subject-matter experts.
## Answer sources
Every answer version carries a source label that records how it was produced:
| Source | Meaning |
|---|---|
| System derived | Pulled directly from a structured Fides field with no inference. |
| Agent | Generated by the AI model from your Fides context. |
| Manual input | Typed or edited manually by a Fides user. |
| Via Slack | Submitted by an SME via the Slack questionnaire. |
These labels appear in the answer history view alongside timestamps and the user who made each change.
## Answer versioning and audit trail
Every time an answer changes, whether by the AI, by a Fides user editing inline, or by an SME responding via Slack, Fides creates a new immutable version. No version is ever deleted. This means:
- You can view the complete history of any answer at any time.
- You can revert to any previous version.
- The audit log records who changed what and when, satisfying the documentation requirements of GDPR Article 35(7) and equivalent state law provisions.
The audit log is also available at the assessment level, covering all changes across all questions in the assessment.
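The append-only pattern described above can be sketched as follows. This is an illustrative model, not the Fides data model: `AnswerVersion` and `AnswerHistory` are hypothetical names, and note that reverting appends a new version rather than deleting anything, which is what keeps the audit trail intact.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)  # frozen: a version can never be mutated after creation
class AnswerVersion:
    text: str
    source: str      # "System derived" | "Agent" | "Manual input" | "Via Slack"
    author: str
    created_at: datetime

class AnswerHistory:
    """Append-only history: every change adds a version, none are ever deleted."""

    def __init__(self):
        self._versions = []

    def record(self, text, source, author):
        self._versions.append(
            AnswerVersion(text, source, author, datetime.now(timezone.utc))
        )

    @property
    def current(self):
        return self._versions[-1]

    def revert_to(self, index, author):
        # Revert by appending a copy of the old version; history stays complete.
        old = self._versions[index]
        self.record(old.text, "Manual input", author)
```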
## Risk detection
When generating assessments, Fides automatically evaluates each system's data use declarations against a set of high-risk triggers. A system is flagged as High risk when any of the following conditions are met:
- The system processes special category data (health, biometric, genetic, or financial data)
- The system involves profiling or automated decision-making
- The system processes data about vulnerable individuals (children, patients, employees)
- The system involves large-scale processing or international data transfers
- The system uses data for targeting, AI training, or collection purposes
Assessments that match one or more triggers display a High risk badge on the assessment list card and on the assessment detail header. This signals that regulatory consultation obligations (such as prior consultation under GDPR Article 36) may apply. Assessments that do not match any triggers have no risk badge.
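Trigger evaluation of this kind amounts to checking each declaration against high-risk sets. The sketch below covers a subset of the triggers listed above with hypothetical category and use names; the real Fides taxonomy and matching logic may differ.

```python
# Illustrative trigger sets; the actual Fides taxonomy labels may differ.
HIGH_RISK_CATEGORIES = {"health", "biometric", "genetic", "financial"}
HIGH_RISK_USES = {"profiling", "automated_decision_making", "targeting", "ai_training"}
VULNERABLE_SUBJECTS = {"children", "patients", "employees"}

def is_high_risk(declarations):
    """Return True if any data use declaration matches a high-risk trigger."""
    for d in declarations:
        if HIGH_RISK_CATEGORIES & set(d.get("data_categories", [])):
            return True
        if d.get("data_use") in HIGH_RISK_USES:
            return True
        if VULNERABLE_SUBJECTS & set(d.get("data_subjects", [])):
            return True
    return False
```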
## Assessment lifecycle
An assessment moves through three statuses:
| Status | Meaning |
|---|---|
| In progress | The assessment has been generated and is being reviewed. Not all questions are complete. |
| Completed | All required questions have answers accepted by the reviewer. The PDF can be generated. |
| Outdated | System data has changed since the assessment was generated. Re-evaluation is recommended. |
An assessment becomes Outdated when Fides detects that the system's data use declarations, data categories, or datasets have changed materially since the assessment snapshot was taken. Fides compares the current system state to the stored snapshot on each load. See re-evaluating assessments for how to bring an outdated assessment up to date.
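One common way to implement this kind of snapshot comparison is to fingerprint the relevant fields at generation time and re-hash on load. This is a sketch under that assumption, not the actual Fides mechanism:

```python
import hashlib
import json

def snapshot_fingerprint(system):
    """Hash only the fields whose change should mark an assessment Outdated."""
    relevant = {
        "declarations": system.get("privacy_declarations", []),
        "datasets": system.get("datasets", []),
    }
    # sort_keys makes the JSON serialization stable across runs
    return hashlib.sha256(json.dumps(relevant, sort_keys=True).encode()).hexdigest()

def is_outdated(stored_fingerprint, current_system):
    """Compare the snapshot taken at generation time against the current state."""
    return snapshot_fingerprint(current_system) != stored_fingerprint
```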
## Question groups
Assessment questions are organized into question groups, numbered sections that correspond to the structure of the underlying regulatory template. For example, the Best Practice PIA template has eight groups: Project Overview, Data Inventory, Data Flows, Legal Basis and Compliance, Risk Assessment, Risk Mitigations, Individual Rights, and Governance and Approval.
Each question group shows:
- A completion status badge (Completed / Pending)
- A fields counter (e.g., "Fields: 3/3") showing how many questions have complete answers
- A "View evidence" button that opens the evidence drawer for that group
- A last-updated timestamp
Expanding a group reveals its individual questions and answers. Each answer shows its source badge and an Edit button.
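The group badge and fields counter follow directly from the per-question statuses. A minimal sketch (the function name `group_badge` and the badge strings are taken from the UI labels above; everything else is illustrative):

```python
def group_badge(questions):
    """Derive a question group's status badge and fields counter."""
    done = sum(1 for q in questions if q["status"] == "Complete")
    total = len(questions)
    badge = "Completed" if done == total else "Pending"
    return badge, f"Fields: {done}/{total}"
```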