Documentation Index
Fetch the complete documentation index at: https://docs.rootkey.ai/llms.txt
Use this file to discover all available pages before exploring further.
The Problem
AI systems - particularly those used in high-stakes decisions - face a growing set of accountability obligations. Regulators, auditors, and affected parties increasingly ask questions that organisations cannot currently answer:
- Which version of the model produced this decision?
- What data was the model trained on, and when was that dataset approved?
- Was the human oversight step actually performed, or was it bypassed?
- Has the model been modified since its conformity assessment was completed?
- What did the system log at the time of a disputed decision - and can that log be trusted?
How ROOTKey Solves It
ROOTKey anchors AI system artifacts and events to the blockchain at the moment they are created - model releases, dataset manifests, conformity assessments, decision logs, human oversight records. Each anchor is:
- Timestamped by blockchain consensus - the timestamp cannot be backdated or altered
- Cryptographically bound to the artifact - any modification breaks the integrity link
- Independently verifiable - regulators, auditors, and affected parties can verify records without accessing your systems or trusting your assurance
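Conceptually, the integrity link is a cryptographic digest: the anchor records a hash of the artifact, so any later change produces a different hash and verification fails. A minimal sketch of that property - SHA-256 is an illustrative choice here, not a statement of the digest ROOTKey actually uses:

```python
import hashlib

def digest(data: bytes) -> str:
    """Return the SHA-256 digest of an artifact as hex."""
    return hashlib.sha256(data).hexdigest()

original = b"model-weights-v1"
anchored_hash = digest(original)          # stored on-chain at anchor time

# Any modification, however small, breaks the integrity link:
tampered = b"model-weights-v1 "           # one extra byte
assert digest(tampered) != anchored_hash  # verification fails
```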
Implementation
Create vaults per AI system and lifecycle stage
Organise vaults by AI system and stage: one vault for model artifacts, one for training data provenance, one for decision logs, one for conformity documentation. This enables scoped access for different auditors and regulators.
→ Create Vault
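A sketch of that vault layout as API calls. The base URL, endpoint path, auth header, field names, and the example system name are illustrative assumptions, not the documented contract - see → Create Vault for the real one:

```python
import requests

API = "https://api.rootkey.ai/v1"          # hypothetical base URL
HEADERS = {"Authorization": "Bearer <API_KEY>"}

# One vault per AI system and lifecycle stage, so each auditor or
# regulator can be granted access to exactly the scope they need.
for stage in ("model-artifacts", "training-data-provenance",
              "decision-logs", "conformity-documentation"):
    resp = requests.post(
        f"{API}/vaults",
        headers=HEADERS,
        json={"name": f"credit-scoring-ai/{stage}"},  # hypothetical fields
    )
    resp.raise_for_status()
```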
Anchor training data manifests at approval
When a training dataset is approved for use, anchor its manifest - a hash of the dataset or a structured record of its composition, sources, and governance approval. This creates a tamper-evident record of what the model was trained on.
→ Create File · Records API
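One way to build and anchor such a manifest, sketched in Python. The manifest fields, endpoint, and request shape are hypothetical; the canonical-JSON step matters, because an auditor must be able to reproduce the hash byte-for-byte:

```python
import hashlib
import json
import requests

manifest = {
    "dataset": "loan-applications-2024Q4",
    "sources": ["core-banking-export", "bureau-feed"],
    "approved_by": "data-governance-board",
    "approved_at": "2025-01-15T09:00:00Z",
}

# Canonical serialisation so the hash is reproducible by any verifier.
payload = json.dumps(manifest, sort_keys=True, separators=(",", ":")).encode()
manifest_hash = hashlib.sha256(payload).hexdigest()

# Hypothetical Create File call - path and field names are illustrative.
requests.post(
    "https://api.rootkey.ai/v1/vaults/<VAULT_ID>/files",
    headers={"Authorization": "Bearer <API_KEY>"},
    json={"name": "loan-applications-2024Q4.manifest.json",
          "hash": manifest_hash},
).raise_for_status()
```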
Anchor model artifacts at release
At the point the model artifact is produced - whether a weights file, a container image, or a packaged inference service - anchor its hash. Any modification to the model after this point is detectable before deployment.
→ Create File Version
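A sketch of release-time anchoring for a large weights file, hashed in chunks so the artifact never has to fit in memory. The endpoint and fields are again illustrative assumptions:

```python
import hashlib
import requests

def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a large model artifact without loading it into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

artifact_hash = file_sha256("model-v2.safetensors")

# Hypothetical Create File Version call - a new version is appended,
# never overwritten, so the full release history stays auditable.
requests.post(
    "https://api.rootkey.ai/v1/files/<FILE_ID>/versions",
    headers={"Authorization": "Bearer <API_KEY>"},
    json={"version": "2.0.0", "hash": artifact_hash},
).raise_for_status()
```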
Anchor conformity assessment documentation
For high-risk AI systems under the EU AI Act, conformity assessments must be documented before deployment. Anchor the assessment document and its outcome at approval - creating tamper-evident proof that the assessment was conducted and what it concluded.
→ Create File
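The same Create File pattern, sketched with the assessment outcome attached as metadata so the conclusion, not just the document bytes, is tamper-evident. All field names here are hypothetical:

```python
import hashlib
import requests

with open("conformity-assessment-2025.pdf", "rb") as f:
    doc_hash = hashlib.sha256(f.read()).hexdigest()

# Hypothetical fields: the outcome travels with the anchor, so the
# recorded conclusion is bound to the same tamper-evident timestamp.
requests.post(
    "https://api.rootkey.ai/v1/vaults/<VAULT_ID>/files",
    headers={"Authorization": "Bearer <API_KEY>"},
    json={"name": "conformity-assessment-2025.pdf",
          "hash": doc_hash,
          "metadata": {"outcome": "passed",
                       "assessed_at": "2025-03-01T00:00:00Z"}},
).raise_for_status()
```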
Anchor decision logs and human oversight records
For each AI decision event (or each batch), anchor the decision log. For high-risk systems requiring human oversight, anchor the oversight decision record - including whether the operator accepted, overrode, or escalated the AI output.
→ Records API · Tables API
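A sketch of one oversight-aware decision record pushed through a hypothetical Records API endpoint; the record schema is illustrative:

```python
import requests

record = {
    "decision_id": "dec-7f3a",
    "model_version": "2.0.0",
    "ai_output": "decline",
    "oversight": {                      # the human oversight trail:
        "operator": "analyst-42",       # who reviewed the AI output,
        "action": "overrode",           # whether they accepted,
        "final_decision": "approve",    # overrode, or escalated it
    },
    "timestamp": "2025-04-02T14:31:07Z",
}

# Hypothetical Records API call - per-record integrity means one
# disputed decision can be verified without disclosing the rest.
requests.post(
    "https://api.rootkey.ai/v1/vaults/<VAULT_ID>/records",
    headers={"Authorization": "Bearer <API_KEY>"},
    json=record,
).raise_for_status()
```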
Monitor and validate with Analytics
Use the Analytics API to verify that anchoring is continuous across the deployment lifecycle. Gaps in coverage indicate periods where AI decisions were made without a tamper-evident log - a potential compliance gap under EU AI Act Article 12.
→ Analytics - Files vs Validations
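A sketch of such a coverage check against a hypothetical Analytics endpoint - the path, parameters, and response shape are assumptions; see → Analytics - Files vs Validations for the real interface:

```python
import requests

# Hypothetical Analytics query for daily anchoring counts.
resp = requests.get(
    "https://api.rootkey.ai/v1/analytics/files-vs-validations",
    headers={"Authorization": "Bearer <API_KEY>"},
    params={"vault": "<VAULT_ID>", "granularity": "day"},
)
resp.raise_for_status()

for day in resp.json()["days"]:         # illustrative response shape
    if day["anchored"] == 0:
        # A day in the deployment window with no anchors means decisions
        # may have run without a tamper-evident log - an EU AI Act
        # Article 12 exposure worth investigating.
        print(f"coverage gap on {day['date']}")
```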
Recommended Configuration
| Parameter | Recommendation |
|---|---|
| Protocol | RKP-1 (Full On-Chain) for conformity assessments and model release artifacts; RKP-3 (Hybrid) for high-volume decision logging |
| Deployment | API Integration for cloud-native ML pipelines; Container for self-hosted inference infrastructure |
| Data sovereignty | High-risk AI systems processing EU personal data should use EU-sovereign deployment (EBSI + OVH) to satisfy GDPR and EU AI Act combined requirements |
| Anchor granularity | Anchor each model version separately - do not overwrite; retain full version history for post-market audit |
| Decision logs | For very high-frequency inference, anchor decision logs in batches under a Merkle root - each individual decision remains provable while on-chain anchoring cost stays at one transaction per batch (see the sketch after this table) |
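The Merkle batching pattern recommended above, sketched in Python: hash each serialised decision log, fold the hashes pairwise into a tree, and anchor only the root. One anchor covers the whole batch, and any single decision can later be proven against the root with a logarithmic-size sibling path. SHA-256 and the duplicate-last-node pairing rule are illustrative choices:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold each level of the tree pairwise until one root hash remains."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:              # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

batch = [b'{"decision_id":"dec-0001",...}',   # serialised decision logs
         b'{"decision_id":"dec-0002",...}',
         b'{"decision_id":"dec-0003",...}']
root = merkle_root(batch)
# Anchor root.hex() once (e.g. via the Records API); keep the leaf
# hashes and sibling paths off-chain to prove individual decisions.
```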
Key API Endpoints
| Endpoint | Purpose |
|---|---|
| Create Vault | Vaults per AI system, lifecycle stage, or regulatory scope |
| Create File | Anchor model artifacts, datasets, and conformity documents |
| Create File Version | Anchor new model versions - full version history retained |
| Records API | Structured decision logs with per-record integrity |
| Tables API | Queryable, schema-validated AI event records |
| Validate File | Verify a model artifact has not been modified since anchoring (see the verification sketch after this table) |
| Get File History | Full version and audit history for a model or dataset |
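A sketch of the verification flow behind Validate File: recompute the artifact's hash locally, then ask the service whether it matches the anchored value. The endpoint, request, and response fields are illustrative assumptions:

```python
import hashlib
import requests

with open("model-v2.safetensors", "rb") as f:
    local_hash = hashlib.sha256(f.read()).hexdigest()

# Hypothetical Validate File call: the service compares the locally
# computed hash against the anchored one - the artifact itself never
# leaves your infrastructure.
resp = requests.post(
    "https://api.rootkey.ai/v1/files/<FILE_ID>/validate",
    headers={"Authorization": "Bearer <API_KEY>"},
    json={"hash": local_hash},
)
resp.raise_for_status()
print("unmodified since anchoring" if resp.json().get("valid")
      else "integrity check FAILED: artifact differs from anchor")
```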
Compliance Alignment
| Framework | How this use case addresses it |
|---|---|
| EU AI Act | Article 9 (risk management records), Article 11 (technical documentation), Article 12 (automatic logging), Article 14 (human oversight records), Article 72 (post-market monitoring logs) |
| GDPR | Article 22 (automated decision-making) - tamper-evident record of which model version made each decision and whether human review was conducted |
| ISO 42001 | AI management system audit trail and records |
| NIST AI RMF | Govern and Measure functions - tamper-evident documentation of AI governance decisions |
| NIS2 | For AI used in critical infrastructure - integrity of the AI system itself as an ICT asset |
| DORA | For AI used in financial services - model risk management documentation and audit evidence |
Request an AI governance architecture review
We’ll map your AI systems’ risk classification and regulatory obligations to a concrete ROOTKey implementation - including EU AI Act conformity documentation architecture.
Get started - free account
Create a sandbox vault and anchor your first model artifact or decision log in minutes.

