
Documentation Index

Fetch the complete documentation index at: https://docs.rootkey.ai/llms.txt

Use this file to discover all available pages before exploring further.

Overview

The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. It entered into force in August 2024 and applies a tiered, risk-based approach, imposing the most stringent requirements on AI systems used in high-stakes contexts. For providers and deployers of high-risk AI systems, the Act creates extensive documentation, logging, and audit obligations that must be met before deployment and maintained throughout the system's operational lifetime.

ROOTKey addresses the core evidentiary challenge of the AI Act: demonstrating, with verifiable evidence, that an AI system was developed, assessed, deployed, and monitored in accordance with the regulation, and that the records supporting that demonstration have not been altered after the fact.

Risk Classification

| Risk tier | Scope | ROOTKey relevance |
| --- | --- | --- |
| Unacceptable risk | Prohibited systems (social scoring, real-time biometrics in public spaces, etc.) | Prohibition compliance evidence |
| High risk | Biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice | Full logging, conformity, and audit obligations - primary ROOTKey target |
| Limited risk | Chatbots, deepfake generators (transparency obligations) | Interaction logs, disclosure records |
| Minimal risk | Spam filters, AI-enabled games | No mandatory obligations |
High-risk AI systems are listed in Annex III of the Act, including:
  • Biometric identification and categorisation systems
  • AI used in critical infrastructure management
  • AI in education (student assessment, admission)
  • AI in employment (recruitment, performance evaluation)
  • AI in access to essential services (credit scoring, insurance)
  • AI in law enforcement (crime prediction, evidence evaluation)
  • AI in border control and migration
  • AI in administration of justice

Article-Level Coverage

Article 9 - Risk Management System

Article 9 requires providers to establish and maintain a documented risk management system throughout the AI system’s lifecycle.
| Requirement | ROOTKey capability |
| --- | --- |
| Risk identification and analysis documented | Anchor risk assessment documents at each review - tamper-evident proof of what was assessed and concluded |
| Risk management measures recorded | Anchor mitigation decisions - blockchain timestamp proves when measures were adopted |
| Residual risk evaluation documented | Anchor residual risk acceptance records at approval |
| System updated post-deployment | Anchor post-deployment risk review records - continuity of the risk management lifecycle |

Article 10 - Data and Data Governance

Article 10 requires training, validation, and testing data to be subject to documented governance practices.
| Requirement | ROOTKey capability |
| --- | --- |
| Training data provenance documented | Anchor dataset manifests at approval - tamper-evident record of what data was used |
| Data quality assessment conducted | Anchor data quality assessment records and outcomes |
| Known biases documented | Anchor bias assessment records - independently timestamped evidence of governance diligence |
| Data governance practices maintained | Anchor data governance policy versions - verifiable history of which policy was in force at each training cycle |
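The dataset-manifest pattern above can be sketched with plain content hashing. The following Python sketch is illustrative only (the function names are hypothetical, not the ROOTKey SDK): each dataset file is hashed individually, and the per-file hashes are combined into a single manifest hash, which is the value that would be anchored on-chain.

```python
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    """Hex digest of raw file contents."""
    return hashlib.sha256(data).hexdigest()

def build_dataset_manifest(files: dict[str, bytes]) -> dict:
    """Map each dataset file to its content hash, then derive one
    manifest hash (the value that would be anchored on-chain)."""
    entries = {name: sha256_hex(data) for name, data in sorted(files.items())}
    canonical = json.dumps(entries, sort_keys=True).encode()
    return {"files": entries, "manifest_hash": sha256_hex(canonical)}

v1 = build_dataset_manifest({"train.csv": b"a,b\n1,2\n", "eval.csv": b"a,b\n3,4\n"})
v2 = build_dataset_manifest({"train.csv": b"a,b\n1,2\n", "eval.csv": b"a,b\n9,9\n"})
# Changing any file changes the manifest hash, so the anchored value
# detects post-hoc substitution of training data.
assert v1["manifest_hash"] != v2["manifest_hash"]
```

Because the anchored value covers every file hash, substituting, editing, or removing any dataset file after approval is detectable without re-anchoring each file individually.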

Article 11 - Technical Documentation

Article 11 requires providers to draw up technical documentation before placing a high-risk AI system on the market. That documentation must be kept up to date throughout the system’s lifetime.
| Requirement | ROOTKey capability |
| --- | --- |
| Technical documentation prepared before deployment | Anchor documentation at completion - blockchain timestamp proves documentation existed before market placement |
| Documentation updated for each significant change | Anchor each version - tamper-evident version history; each update provably post-dates the previous |
| Documentation provided to authorities on request | Vault ID and file IDs provide authorities with independently verifiable access - no ROOTKey cooperation required for verification |

Article 12 - Record-Keeping

Article 12 requires high-risk AI systems to automatically log events throughout their operation - to the extent necessary to ensure post-market monitoring and investigation of incidents.
| Requirement | ROOTKey capability |
| --- | --- |
| Automatic logging of relevant events | Anchor decision logs at emission - before they reach any mutable storage |
| Logs protected from modification | Blockchain anchoring - any modification after anchoring produces a detectable hash mismatch |
| Logs retained for appropriate period | On-chain anchors are permanent; off-chain log retention configured per regulatory obligation |
| Logs accessible to providers and authorities | Vault records queryable via API; verifiable by authorities via Polygonscan or EBSI explorer without system access |
Article 12 is the strongest ROOTKey alignment in the EU AI Act. The requirement for tamper-evident, automatically generated logs that cannot be modified - and that are accessible to authorities - is precisely what blockchain anchoring provides structurally, not by policy.
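The tamper-evidence mechanism can be sketched in a few lines. This is an illustrative Python example of hash-based verification in general (field names and the log entry are invented for the example, not ROOTKey's schema):

```python
import hashlib
import json

def log_entry_digest(entry: dict) -> str:
    """Canonical hash of one decision-log entry, computed at emission,
    before the entry reaches any mutable store."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def verify_entry(entry: dict, anchored_digest: str) -> bool:
    """An auditor recomputes the hash from the stored entry and compares
    it with the anchored digest; any edit produces a mismatch."""
    return log_entry_digest(entry) == anchored_digest

entry = {"model": "risk-scorer-v3", "input_id": "req-1041",
         "decision": "deny", "score": 0.91}
digest = log_entry_digest(entry)          # this digest is what gets anchored
tampered = dict(entry, decision="allow")  # a post-hoc modification
assert verify_entry(entry, digest) and not verify_entry(tampered, digest)
```

Note that verification needs only the stored entry and the anchored digest, which is why an authority can check logs via a public block explorer without access to the operator's systems.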

Article 13 - Transparency and Provision of Information

Article 13 requires providers to ensure high-risk AI systems are sufficiently transparent to allow deployers to interpret and use outputs correctly.
| Requirement | ROOTKey capability |
| --- | --- |
| System capabilities and limitations documented | Anchor model cards and system cards at each version - tamper-evident documentation of what the system can and cannot do |
| Performance metrics documented | Anchor evaluation results - independently timestamped evidence of performance at the time of assessment |
| Instructions for use provided | Anchor instructions for use - version-controlled, tamper-evident |

Article 14 - Human Oversight

Article 14 requires high-risk AI systems to be designed to allow effective human oversight, and requires deployers to implement oversight measures.
| Requirement | ROOTKey capability |
| --- | --- |
| Human oversight measures implemented | Anchor oversight configuration records - tamper-evident proof of what oversight was in place |
| Human oversight decisions logged | Anchor operator decisions (accept, override, escalate) at the time of each decision |
| Override and intervention records maintained | Blockchain timestamp on each human intervention - independently verifiable that oversight was actually exercised |

Article 17 - Quality Management System

Article 17 requires providers to implement a quality management system covering the full AI lifecycle.
| Requirement | ROOTKey capability |
| --- | --- |
| QMS documentation and records | Anchor QMS procedures at each approval - tamper-evident version history |
| Testing and validation records | Anchor test results and validation outcomes - independently timestamped |
| Corrective action records | Anchor corrective action plans and closure evidence |

Article 72 - Post-Market Monitoring

Article 72 of the final Act (Article 61 in earlier drafts) requires providers to implement post-market monitoring plans and collect data from deployed high-risk AI systems.
| Requirement | ROOTKey capability |
| --- | --- |
| Post-market monitoring plan documented | Anchor monitoring plan at approval - tamper-evident baseline |
| Monitoring data collected and retained | Anchor monitoring reports - tamper-evident record of what was observed and when |
| Serious incidents reported to authorities | Anchor incident records at detection and reporting - blockchain timestamp proves reporting timeline |
| Plan updated based on findings | Anchor each updated plan version - verifiable evolution of the monitoring approach |
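The "proves reporting timeline" row can be illustrated with two anchor records. This sketch fixes the timestamps for demonstration (in a real anchoring they come from the blockchain), and uses a 15-day reporting window as an assumed example deadline for serious incidents:

```python
import hashlib
from datetime import datetime, timedelta, timezone

def anchor(payload: bytes, at: datetime) -> dict:
    """Simplified anchor record: content hash plus a timestamp (issued by
    the blockchain in a real anchoring, fixed here for illustration)."""
    return {"hash": hashlib.sha256(payload).hexdigest(), "at": at}

detected = anchor(b"incident record: scoring drift detected",
                  datetime(2026, 9, 1, tzinfo=timezone.utc))
reported = anchor(b"incident report submitted to authority",
                  datetime(2026, 9, 4, tzinfo=timezone.utc))
# Two independently timestamped anchors establish the timeline: the
# report provably followed detection and fell within the example window.
assert detected["at"] < reported["at"] <= detected["at"] + timedelta(days=15)
```

Because each anchor is timestamped independently of the provider's own systems, the interval between detection and reporting can be demonstrated to an authority rather than merely asserted.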

Obligations by Role

| Role | Key obligations | ROOTKey role |
| --- | --- | --- |
| Provider (develops and places on market) | Art. 9–17: full documentation, conformity assessment, registration | Anchor all technical documentation, training data provenance, model artifacts, and QMS records |
| Deployer (uses in own operations) | Art. 26: implement human oversight, maintain logs, report incidents | Anchor decision logs, human oversight records, and incident reports |
| Importer | Art. 23: ensure provider compliance documentation is complete | Anchor supplier conformity documentation received from providers |
| Distributor | Art. 24: verify CE marking and documentation before distribution | Anchor verification records and distribution records |

Conformity Assessment and CE Marking

Before a high-risk AI system can be placed on the EU market, it must undergo a conformity assessment. ROOTKey supports the evidence layer:
| Assessment stage | ROOTKey role |
| --- | --- |
| Internal conformity assessment (Annex VI) | Anchor completed assessment and declaration of conformity at signing |
| Third-party assessment (notified body) | Anchor assessment report received from notified body - tamper-evident proof of third-party review |
| EU Declaration of Conformity | Anchor the declaration at signing - blockchain timestamp proves it predates market placement |
| CE marking application | Anchor the marking decision and its supporting evidence |

Compliance Timeline

| Date | Obligation |
| --- | --- |
| August 2024 | Act entered into force |
| February 2025 | Prohibited AI systems banned |
| August 2025 | GPAI model obligations and governance rules apply |
| August 2026 | High-risk AI system obligations fully apply |
| August 2027 | Obligations for additional high-risk systems (Annex I) apply |

Request an EU AI Act compliance review

We’ll classify your AI systems by risk tier, map the applicable obligations, and design a ROOTKey implementation for your logging, documentation, and conformity evidence architecture.

AI System Integrity use case

Full implementation guide for AI Act-compliant model provenance, decision logging, and human oversight records.