EU AI Act Article 13: Transparency Obligations for High-Risk AI
Article 13 requires high-risk AI systems to be transparent and provide information to users. Learn the six transparency obligations and how to document compliance.
If your AI system is classified as high-risk under the EU AI Act, Article 13 mandates that it must be "sufficiently transparent to enable deployers to interpret a system's output and use it appropriately" (the final regulation says "deployers"; this guide uses "users" for readability). This is not a soft recommendation: it is an enforceable obligation, with fines of up to €15 million or 3% of global annual turnover for non-compliance.
Most companies underestimate Article 13. They assume transparency means "add a disclaimer" or "show confidence scores." In reality, Article 13 requires six distinct categories of information, each with specific documentation requirements.
This guide breaks down what Article 13 actually requires, common compliance gaps, and how to build transparency into your high-risk AI system before the August 2, 2026 enforcement deadline.
What Article 13 Actually Requires
Article 13 mandates that high-risk AI systems must provide users with information that is:
- Concise, complete, correct, and clear — no jargon, no ambiguity
- Relevant and accessible — tailored to the user's role and technical literacy
- Sufficient to enable users to interpret the output — users must understand what the system is telling them and why
- Sufficient to enable users to use the system appropriately — users must understand when to trust the output and when to override it
The regulation specifies six categories of information that must be provided:
- Identity and contact details of the provider
- Characteristics, capabilities, and limitations of performance — including accuracy, robustness, and known failure modes
- Changes to the system and its performance — version history and updates
- Level of accuracy, robustness, and cybersecurity — quantitative metrics
- Known or foreseeable circumstances that may lead to risks — edge cases and failure modes
- Human oversight measures — what the human operator is expected to do
Each of these must be documented and made available to users. If you deploy a high-risk AI system without this information, you're non-compliant.
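One practical way to keep these six categories auditable is to maintain them as a single machine-readable record alongside the system, from which the user-facing documentation is generated. Here is a minimal sketch in Python; the field names and values are illustrative assumptions, not anything prescribed by the Act:

```python
from dataclasses import dataclass

@dataclass
class Article13Record:
    """Machine-readable summary of the six Article 13 information categories.

    Field names are illustrative; the regulation prescribes the content,
    not any particular schema.
    """
    provider_name: str
    provider_contact: str
    intended_purpose: str
    known_limitations: list[str]
    version_history: list[str]
    accuracy_metrics: dict[str, float]
    known_risks: list[str]
    oversight_instructions: str

record = Article13Record(
    provider_name="Example Provider Ltd",
    provider_contact="compliance@example.eu",
    intended_purpose="Consumer credit decisions up to EUR 50,000",
    known_limitations=["Unreliable for thin credit files"],
    version_history=["v1.2: retrained 2025-Q1, accuracy 0.91"],
    accuracy_metrics={"accuracy": 0.91, "recall": 0.87},
    known_risks=["May underestimate risk for self-employed applicants"],
    oversight_instructions="Review all borderline scores before final decision",
)
```

Keeping the record in version control means every system update leaves an audit trail of what users were told, and when.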
The Six Information Categories in Detail
1. Identity and Contact Details of the Provider
This is the simplest requirement: users must know who built the system and how to contact them.
What auditors look for:
- Provider name, address, and contact email displayed in the system UI or documentation
- Clear identification of the legal entity responsible for compliance
Common failure mode: Deploying a system with no provider identification or burying contact details in a 50-page terms-of-service document.
2. Characteristics, Capabilities, and Limitations of Performance
Users must understand what the system can and cannot do. This includes:
- Intended purpose — what the system is designed for
- Performance characteristics — accuracy, latency, throughput
- Known limitations — tasks the system cannot perform reliably
What auditors look for:
- A written specification of intended purpose and out-of-scope use cases
- Performance benchmarks (e.g., "92% accuracy on validation set")
- Documentation of known failure modes (e.g., "performs poorly on handwritten text")
Common failure mode: Providing only marketing claims ("state-of-the-art accuracy") without quantitative performance data or documented limitations.
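Replacing marketing claims with evidence means computing benchmark metrics on a held-out validation set and recording them verbatim in the documentation. A minimal sketch using scikit-learn, with placeholder labels and predictions standing in for your real validation data:

```python
# A minimal sketch: replace "state-of-the-art accuracy" with measured,
# reproducible numbers from a held-out validation set.
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Placeholders for your validation labels and model predictions.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 1, 1, 0]

benchmarks = {
    "accuracy": accuracy_score(y_true, y_pred),
    "precision": precision_score(y_true, y_pred),
    "recall": recall_score(y_true, y_pred),
}
for name, value in benchmarks.items():
    print(f"{name}: {value:.2f}")  # goes straight into the user documentation
```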
3. Changes to the System and Its Performance
Users must be notified when the system is updated and how its performance has changed.
What auditors look for:
- Version history with release notes
- Performance comparison before and after updates
- Notification mechanism for users (e.g., email, in-app alert)
Common failure mode: Silently updating models without notifying users or documenting performance changes.
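One way to satisfy the update-notification requirement is to attach a structured release note to every model version, recording metrics before and after the change. A sketch with an illustrative schema (the Act mandates the disclosure, not any particular format):

```python
# Structured release note for a model update; the schema and numbers
# are this example's assumptions.
import json
from datetime import date

release_note = {
    "version": "2.1.0",
    "released": date(2025, 6, 1).isoformat(),
    "changes": "Retrained on more recent application data",
    "metrics_before": {"accuracy": 0.89, "false_positive_rate": 0.07},
    "metrics_after": {"accuracy": 0.91, "false_positive_rate": 0.05},
}

# Explicit deltas make the performance change obvious to users and auditors.
deltas = {
    k: round(release_note["metrics_after"][k] - release_note["metrics_before"][k], 3)
    for k in release_note["metrics_before"]
}
print(json.dumps({"version": release_note["version"], "deltas": deltas}, indent=2))
```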
4. Level of Accuracy, Robustness, and Cybersecurity
Article 13 explicitly requires quantitative metrics for:
- Accuracy — precision, recall, F1, or domain-specific measures
- Robustness — performance under adversarial inputs or distribution shift
- Cybersecurity — resistance to data poisoning, model extraction, or adversarial attacks
What auditors look for:
- Test set performance reports with confidence intervals
- Robustness benchmarks (e.g., performance on out-of-distribution data)
- Cybersecurity audit reports or penetration test results
Common failure mode: Reporting only aggregate accuracy without breaking down performance by demographic group, edge case, or adversarial scenario.
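The per-group and confidence-interval reporting that auditors look for can be produced with a simple bootstrap. A sketch assuming hypothetical subgroup data; the group names and sample values are placeholders for your real test set:

```python
# Per-group accuracy with bootstrap confidence intervals, addressing the
# "aggregate accuracy only" gap. All data below is illustrative.
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_accuracy_ci(y_true, y_pred, n_boot=2000, alpha=0.05):
    """Point estimate and 95% bootstrap CI for accuracy."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    accs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        accs.append((y_true[idx] == y_pred[idx]).mean())
    lo, hi = np.quantile(accs, [alpha / 2, 1 - alpha / 2])
    return (y_true == y_pred).mean(), lo, hi

# Maps a subgroup label to (labels, predictions) on the test set.
groups = {
    "overall":   ([1, 0, 1, 1, 0, 1, 0, 0] * 50, [1, 0, 1, 0, 0, 1, 0, 1] * 50),
    "thin_file": ([1, 0, 1, 1, 0, 0, 1, 0] * 20, [1, 1, 1, 0, 0, 0, 0, 0] * 20),
}
for name, (yt, yp) in groups.items():
    acc, lo, hi = bootstrap_accuracy_ci(yt, yp)
    print(f"{name}: accuracy={acc:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The same pattern extends to precision, recall, or any domain-specific metric; the point is that each disclosed number carries an uncertainty estimate and a subgroup breakdown.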
5. Known or Foreseeable Circumstances That May Lead to Risks
Users must be warned about situations where the system is likely to fail or produce unsafe outputs.
What auditors look for:
- A documented list of edge cases and failure modes
- Risk mitigation guidance (e.g., "Do not use this system for medical diagnosis")
- Evidence that users are trained on these limitations
Common failure mode: Providing no failure mode documentation or assuming users will "figure it out."
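A failure-mode register can be as simple as a structured list that doubles as the source for user-facing warnings. A sketch with illustrative entries:

```python
# A failure-mode register; one entry per known edge case, with the
# mitigation users should apply. Fields and entries are illustrative.
FAILURE_MODES = [
    {
        "condition": "Handwritten input documents",
        "observed_effect": "Text extraction accuracy drops below usable threshold",
        "mitigation": "Route to manual data entry",
    },
    {
        "condition": "Applicant has fewer than 3 tradelines",
        "observed_effect": "Score unreliable (thin credit file)",
        "mitigation": "Require manual underwriter review",
    },
]

def user_warnings():
    """Render the register as the warnings shown in user documentation."""
    return [f"If {m['condition'].lower()}: {m['mitigation'].lower()}." for m in FAILURE_MODES]

print("\n".join(user_warnings()))
```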
6. Human Oversight Measures
Article 14 mandates human oversight for high-risk AI systems. Article 13 requires that users be informed about what oversight actions they are expected to take.
What auditors look for:
- Documentation of the human operator's role (e.g., "Review all flagged cases before final decision")
- Training materials for human operators
- Evidence that the system supports oversight (e.g., explainability features, override mechanisms)
Common failure mode: Deploying a fully automated system with no documented human oversight role.
Article 13 Compliance Checklist
| Information Category | Documentation Needed | Common Gap |
|---|---|---|
| Provider identity | Name, address, contact email in UI/docs | No provider identification |
| Characteristics, capabilities, limitations | Intended purpose, performance benchmarks, failure modes | Marketing claims without quantitative data |
| Changes and updates | Version history, release notes, user notifications | Silent updates with no notification |
| Accuracy, robustness, cybersecurity | Test set reports, robustness benchmarks, security audits | Aggregate accuracy only, no edge case breakdown |
| Known risks and failure modes | Edge case list, risk mitigation guidance | No failure mode documentation |
| Human oversight measures | Operator role, training materials, override mechanisms | No documented oversight role |
How Article 13 Interacts with Other Requirements
Article 13 does not exist in isolation. It intersects with:
- Article 9 (Risk Management) — the risks you identify in Article 9 must be disclosed to users under Article 13
- Article 10 (Data Governance) — the data quality metrics you document under Article 10 inform the accuracy disclosures required by Article 13
- Article 14 (Human Oversight) — the oversight measures you design under Article 14 must be explained to users under Article 13
- Article 50 (Transparency Obligations for Certain AI Systems) — numbered Article 52 in earlier drafts; if your system is also subject to Article 50 (e.g., chatbots, emotion recognition), you have additional transparency obligations
A complete compliance strategy addresses all of these together, not as isolated checklists.
Concrete Example: Credit Scoring System
Suppose you've built an AI-powered credit scoring system. Under Annex III.5(b), this is a high-risk system. Here's what Article 13 compliance looks like:
- Provider identity: The system UI displays "Provided by FinTech Corp, 123 Main St, Dublin, Ireland. Contact: compliance@fintechcorp.eu"
- Characteristics, capabilities, limitations: You document that the system is designed for consumer credit decisions up to €50,000, achieves 89% accuracy on validation data, and performs poorly for applicants with thin credit files (fewer than 3 tradelines).
- Changes and updates: When you update the model, you send an email to all users with a link to release notes showing the new accuracy (91%) and changes in false positive/false negative rates.
- Accuracy, robustness, cybersecurity: You provide a performance report showing precision, recall, and F1 by demographic group, plus robustness testing results showing performance under adversarial inputs (e.g., applicants who deliberately misreport income).
- Known risks: You document that the system may underestimate risk for self-employed applicants and overestimate risk for recent immigrants. You provide guidance: "Manually review all self-employed and recent immigrant applications."
- Human oversight: You document that loan officers must review all applications flagged as "borderline" (score 600–650) and have the authority to override the system's recommendation.
All of this is packaged into a User Information Document that is provided to every loan officer who uses the system. When an auditor asks for Article 13 evidence, you hand them this document plus training records showing that loan officers have been trained on it.
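The borderline-review workflow above translates directly into routing logic plus an override log, which is exactly the kind of oversight evidence an auditor will ask for. A sketch assuming the 600–650 thresholds from this example; the function names and log format are hypothetical:

```python
# Oversight routing for the credit scoring example: borderline scores are
# held for loan-officer review, and overrides are logged so there is
# evidence the oversight role is actually exercised.
from datetime import datetime, timezone

BORDERLINE = range(600, 651)  # thresholds taken from the example above
audit_log = []

def route_application(app_id: str, score: int) -> str:
    """Return the decision path for one application."""
    if score in BORDERLINE:
        return "human_review"  # loan officer decides
    return "approve" if score > 650 else "decline"

def record_override(app_id: str, system_decision: str, officer_decision: str, reason: str):
    """Log a loan officer overriding the system's recommendation."""
    audit_log.append({
        "app_id": app_id,
        "system": system_decision,
        "officer": officer_decision,
        "reason": reason,
        "at": datetime.now(timezone.utc).isoformat(),
    })

print(route_application("A-1001", 627))  # -> "human_review"
record_override("A-1001", "decline", "approve", "Stable self-employment income verified")
```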
What Happens If You Don't Comply
Non-compliance with Article 13 can trigger:
- Administrative fines up to €15 million or 3% of global annual turnover (whichever is higher)
- Market surveillance actions — national authorities can order you to withdraw your system from the market or suspend its use
- Liability exposure — if a user misuses your system because you failed to provide adequate information, you may be liable for resulting harms
The enforcement date is fixed: August 2, 2026. If you're deploying a high-risk AI system in the EU, you need Article 13 compliance documentation now.
Common Anti-Patterns Vigilia Detects
Vigilia's EU AI Act audit flags these Article 13 anti-patterns:
- No user-facing documentation — the system has no UI or documentation explaining its purpose, limitations, or performance
- Marketing claims without quantitative data — the system claims "high accuracy" but provides no test set metrics
- No failure mode documentation — users are not warned about edge cases or situations where the system is likely to fail
- No human oversight guidance — users are not told what oversight actions they are expected to take
- Silent updates — the system is updated without notifying users or documenting performance changes
- No provider identification — users do not know who built the system or how to contact them
Each anti-pattern is mapped to a fine exposure estimate and a remediation roadmap.
How to Get Compliant in 20 Minutes
Vigilia's EU AI Act audit generates an Article 13 gap analysis in 20 minutes. You answer questions about your transparency documentation, user information, and oversight measures. Vigilia maps your answers to Article 13 requirements and flags gaps.
The output is an audit-ready PDF covering:
- Article 13 compliance score (0–100)
- Specific gaps (e.g., "No documented failure modes")
- Remediation roadmap with estimated effort
- Fine exposure estimates for each gap
Traditional compliance audits cost €5,000–€40,000 and take 1–3 months. Vigilia costs €499 and takes 20 minutes.
Generate your Article 13 compliance report now: www.aivigilia.com
This article is for informational purposes only and does not constitute legal advice. Consult a qualified EU AI Act attorney for guidance on your specific situation.
Ready to check your own AI system against the EU AI Act?
Get your compliance report in 20 minutes, not 3 months.
Start free audit →