The Problem

AI systems - particularly those used in high-stakes decisions - face a growing set of accountability obligations. Regulators, auditors, and affected parties increasingly ask questions that organisations cannot currently answer:
  • Which version of the model produced this decision?
  • What data was the model trained on, and when was that dataset approved?
  • Was the human oversight step actually performed, or was it bypassed?
  • Has the model been modified since its conformity assessment was completed?
  • What did the system log at the time of a disputed decision - and can that log be trusted?
The common thread is provenance and integrity: the ability to prove, after the fact and under adversarial scrutiny, that an AI system operated as documented. Most organisations have no mechanism to provide this proof. Logs can be altered. Model artifacts can be replaced. Audit records can be created retroactively. ROOTKey removes that possibility: once a record is anchored, alteration, substitution, or backdating cannot go undetected.

How ROOTKey Solves It

ROOTKey anchors AI system artifacts and events to the blockchain at the moment they are created - model releases, dataset manifests, conformity assessments, decision logs, human oversight records. Each anchor is:
  • Timestamped by blockchain consensus - the timestamp cannot be backdated or altered
  • Cryptographically bound to the artifact - any modification breaks the integrity link
  • Independently verifiable - regulators, auditors, and affected parties can verify records without accessing your systems or trusting your assurance
The result is a tamper-evident audit trail for the full lifecycle of an AI system - from training data approval through deployment, monitoring, and decommissioning.
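
The integrity binding is plain cryptographic hashing: what goes on-chain is a digest of the artifact, and re-hashing the artifact later either reproduces that digest or proves it was modified. A minimal sketch in Python - the file name is illustrative, and this shows only the hashing side; anchoring itself goes through the ROOTKey APIs described under Implementation:

```python
import hashlib

def artifact_digest(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large model artifacts hash in constant memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# The digest - not the artifact itself - is what gets anchored on-chain.
# Any later modification, even a single flipped bit, produces a different
# digest, which is exactly how "any modification breaks the integrity link".
print(artifact_digest("model-v2.3.1.safetensors"))  # illustrative file name
```
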

Architecture

AI System Lifecycle
        │
        ├──► Training data approval   ──► ROOTKey anchor (dataset hash + approval metadata)
        ├──► Model training complete  ──► ROOTKey anchor (model artifact hash + version ID)
        ├──► Conformity assessment    ──► ROOTKey anchor (assessment document + outcome)
        ├──► Model deployment         ──► ROOTKey anchor (deployment record + config hash)
        ├──► Decision events          ──► ROOTKey anchor (decision log per event or batch)
        ├──► Human oversight actions  ──► ROOTKey anchor (oversight decision + operator ID)
        └──► Post-market monitoring   ──► ROOTKey anchor (monitoring report + period)

                    All anchors independently verifiable
                    Regulators can verify without your cooperation

Implementation

1. Create vaults per AI system and lifecycle stage

Organise vaults by AI system and stage: one vault for model artifacts, one for training data provenance, one for decision logs, one for conformity documentation. This enables scoped access for different auditors and regulators.

Create Vault
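
A bootstrap sketch for the vault layout above. The base URL, endpoint path, and payload fields are assumptions for illustration, not ROOTKey's documented contract - the Create Vault reference is authoritative:

```python
import requests

BASE = "https://api.rootkey.example/v1"            # hypothetical base URL
HEADERS = {"Authorization": "Bearer <api-key>"}    # substitute a real credential

# One vault per AI system and lifecycle stage keeps audit scopes separable:
# a regulator can be granted the conformity vault without ever seeing raw
# decision logs, and vice versa.
STAGES = ["model-artifacts", "training-data", "decision-logs", "conformity-docs"]

for stage in STAGES:
    resp = requests.post(
        f"{BASE}/vaults",                          # hypothetical endpoint path
        headers=HEADERS,
        json={"name": f"credit-scoring-v2/{stage}"},
    )
    resp.raise_for_status()
```
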
2. Anchor training data manifests at approval

When a training dataset is approved for use, anchor its manifest - a hash of the dataset or a structured record of its composition, sources, and governance approval. This creates a tamper-evident record of what the model was trained on.

Create File · Records API
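
What such a manifest might contain, sketched below with illustrative field names. The important detail is canonical serialisation: the same manifest must always hash to the same digest, because the digest is what gets anchored:

```python
import hashlib
import json

# Illustrative manifest: composition, sources, and the governance approval
# that authorised this dataset for training.
manifest = {
    "dataset": "loan-applications-2024q3",
    "sources": ["core-banking-export", "bureau-feed"],
    "row_count": 1_482_310,
    "content_sha256": "<digest of the raw dataset files>",
    "approved_by": "data-governance-board",
    "approved_at": "2024-10-02T09:14:00Z",
}

# Canonical JSON (sorted keys, fixed separators) removes whitespace and
# key-order variance, so re-serialising the manifest reproduces the digest.
canonical = json.dumps(manifest, sort_keys=True, separators=(",", ":")).encode()
print(hashlib.sha256(canonical).hexdigest())
```
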
3. Anchor model artifacts at release

At the point the model artifact is produced - whether a weights file, a container image, or a packaged inference service - anchor its hash. Any modification to the model after this point is detectable before deployment.

Create File Version
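
A release-pipeline sketch, reusing `artifact_digest`, `BASE`, and `HEADERS` from the earlier sketches and assuming a hypothetical file-version endpoint; the Create File Version reference has the real contract:

```python
import requests

# artifact_digest, BASE, HEADERS: as defined in the earlier sketches.
digest = artifact_digest("dist/credit-model-v2.3.1.safetensors")

resp = requests.post(
    f"{BASE}/files/credit-model/versions",     # hypothetical endpoint path
    headers=HEADERS,
    json={
        "version": "2.3.1",                    # a new version, never an overwrite
        "sha256": digest,
        "metadata": {"framework": "pytorch", "commit": "a1b2c3d"},
    },
)
resp.raise_for_status()
```

Run this as the last step of the release pipeline, after the artifact is final; anything that touches the weights afterwards will fail validation before deployment.
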
4. Anchor conformity assessment documentation

For high-risk AI systems under the EU AI Act, conformity assessments must be documented before deployment. Anchor the assessment document and its outcome at approval - creating tamper-evident proof that the assessment was conducted and what it concluded.

Create File
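
The same pattern covers assessment documents. The point of the sketch below (same hypothetical endpoint conventions and field names as above) is pairing the document hash with the recorded outcome, so the conclusion itself is tamper-evident, not just the PDF:

```python
import requests

# BASE, HEADERS, artifact_digest: as in the earlier sketches.
resp = requests.post(
    f"{BASE}/files",                               # hypothetical endpoint path
    headers=HEADERS,
    json={
        "vault": "credit-scoring-v2/conformity-docs",
        "sha256": artifact_digest("assessments/2024-conformity.pdf"),
        "metadata": {
            "outcome": "conforms",                 # anchor the conclusion itself
            "assessed_model_version": "2.3.1",
            "assessment_date": "2024-10-20",
        },
    },
)
resp.raise_for_status()
```
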
5. Anchor decision logs and human oversight records

For each AI decision event (or each batch), anchor the decision log. For high-risk systems requiring human oversight, anchor the oversight decision record - including whether the operator accepted, overrode, or escalated the AI output.

Records API · Tables API
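
One possible shape for a per-event record, with illustrative field names; note the input is anchored as a hash, so no personal data leaves your systems:

```python
import requests
from datetime import datetime, timezone

# BASE, HEADERS: as in the earlier sketches; the endpoint path is hypothetical.
record = {
    "event_id": "dec-20241102-000193",
    "model_version": "2.3.1",
    "input_sha256": "<digest of the input payload>",   # hash only, no raw personal data
    "output": "declined",
    "oversight": {
        "operator_id": "op-4471",
        "action": "overrode",                          # accepted | overrode | escalated
        "at": datetime.now(timezone.utc).isoformat(),
    },
}

resp = requests.post(f"{BASE}/records", headers=HEADERS, json=record)
resp.raise_for_status()
```
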
6. Monitor and validate with Analytics

Use the Analytics API to verify that anchoring is continuous across the deployment lifecycle. Gaps in coverage indicate periods where AI decisions were made without a tamper-evident log - a potential compliance gap under EU AI Act Article 12.

Analytics - Files vs Validations
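
Gap detection can be as simple as comparing anchored counts against serving traffic per day. The analytics endpoint, query parameters, and response shape below are all assumptions; see the Analytics reference for the actual interface:

```python
import requests

# BASE, HEADERS: as in the earlier sketches.
resp = requests.get(
    f"{BASE}/analytics/records",                   # hypothetical endpoint
    headers=HEADERS,
    params={"vault": "credit-scoring-v2/decision-logs", "granularity": "day"},
)
resp.raise_for_status()
anchored = {row["day"]: row["count"] for row in resp.json()["series"]}  # assumed shape

# Serving counts come from your own metrics (load balancer, Prometheus, etc.).
served = {"2024-11-01": 5210, "2024-11-02": 4998}

for day, n in served.items():
    if anchored.get(day, 0) < n:
        print(f"coverage gap on {day}: {n} decisions served, {anchored.get(day, 0)} anchored")
```
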

Recommended configuration:
  • Protocol: RKP-1 (Full On-Chain) for conformity assessments and model release artifacts; RKP-3 (Hybrid) for high-volume decision logging
  • Deployment: API Integration for cloud-native ML pipelines; Container for self-hosted inference infrastructure
  • Data sovereignty: high-risk AI systems processing EU personal data should use EU-sovereign deployment (EBSI + OVH) to satisfy combined GDPR and EU AI Act requirements
  • Anchor granularity: anchor each model version separately - do not overwrite; retain full version history for post-market audit
  • Decision logs: for very high-frequency inference, anchor decision logs in batches with a Merkle root - individual decision integrity is preserved while anchoring overhead scales; see the sketch below
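
A minimal Merkle-root construction for a batched anchor, in plain Python with no ROOTKey specifics assumed. Only the root goes on-chain; the serialised decision logs (the leaves) stay off-chain, and any single decision can later be proven as a member of the anchored batch:

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Binary Merkle tree root; an unpaired node is promoted to the next level."""
    if not leaves:
        raise ValueError("empty batch")
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [
            _h(level[i] + level[i + 1]) if i + 1 < len(level) else level[i]
            for i in range(0, len(level), 2)
        ]
    return level[0]

# Each serialised decision log is one leaf; one anchor covers the batch.
batch = [
    b'{"event_id":"dec-1","output":"approved"}',
    b'{"event_id":"dec-2","output":"declined"}',
]
print(merkle_root(batch).hex())  # this digest is what gets anchored on-chain
```
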

Key API Endpoints

  • Create Vault: vaults per AI system, lifecycle stage, or regulatory scope
  • Create File: anchor model artifacts, datasets, and conformity documents
  • Create File Version: anchor new model versions - full version history retained
  • Records API: structured decision logs with per-record integrity
  • Tables API: queryable, schema-validated AI event records
  • Validate File: verify a model artifact has not been modified since anchoring (see the verification sketch below)
  • Get File History: full version and audit history for a model or dataset
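
Verification is the payoff of the whole scheme: whoever holds a copy of the artifact re-hashes it and checks the digest against the anchor. Sketched under the same hypothetical endpoint conventions as the earlier examples; the Validate File reference is authoritative:

```python
import requests

# BASE, HEADERS, artifact_digest: as in the earlier sketches.
local_digest = artifact_digest("delivered/credit-model-v2.3.1.safetensors")

resp = requests.post(
    f"{BASE}/files/credit-model/validate",     # hypothetical endpoint path
    headers=HEADERS,
    json={"sha256": local_digest},
)
resp.raise_for_status()
print("intact" if resp.json().get("valid") else "MODIFIED since anchoring")  # assumed response field
```
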

Compliance Alignment

  • EU AI Act: Article 9 (risk management records), Article 11 (technical documentation), Article 12 (automatic logging), Article 14 (human oversight records), Article 61 (post-market monitoring logs)
  • GDPR: Article 22 (automated decision-making) - tamper-evident record of which model version made each decision and whether human review was conducted
  • ISO 42001: AI management system audit trail and records
  • NIST AI RMF: Govern and Measure functions - tamper-evident documentation of AI governance decisions
  • NIS2: for AI used in critical infrastructure - integrity of the AI system itself as an ICT asset
  • DORA: for AI used in financial services - model risk management documentation and audit evidence

Request an AI governance architecture review

We’ll map your AI systems’ risk classification and regulatory obligations to a concrete ROOTKey implementation - including EU AI Act conformity documentation architecture.

Get started - free account

Create a sandbox vault and anchor your first model artifact or decision log in minutes.