EU AI Act Article 12: What Event Logging Actually Requires for AI Agents
The EU AI Act’s high-risk AI system obligations — including Article 12 — take effect in August 2026. For companies running AI agents in production, Article 12 is one of the most operationally demanding provisions: it requires automatic event logging for high-risk AI systems. Not optional logging. Not best-effort logging. Automatic, structured, tamper-evident event records that trace every meaningful operation.
Most companies plan to retrofit logging after the fact — intercept agent outputs, scrape tool calls, reconstruct timelines from disparate systems. For traditional software, that approach works. For AI agents, it does not. Here is why, and what actually satisfies Article 12.
What Article 12 actually says
Article 12 of Regulation (EU) 2024/1689 requires that high-risk AI systems be designed and developed with capabilities enabling the automatic recording of events (“logs”) over the system’s lifetime. These logging capabilities must conform to recognised standards or common specifications, and the logs must enable:
- Traceability — the ability to reconstruct who did what, when, and in response to what input
- Monitoring — ongoing observation of system behavior during operation
- Post-market surveillance — the ability to investigate events after they occur, with sufficient detail to understand what happened and why
The regulation also requires that logs be kept for an appropriate period, be accessible to deployers, and provide information proportionate to the system’s intended purpose.
In practical terms, Article 12 demands five things from your event logging infrastructure:
- Events are recorded automatically, not manually triggered
- Events are traceable — linked to specific inputs, decisions, and actors
- Events are ordered — sequential, with verifiable chronology
- Events are tamper-evident — modifications are detectable
- Events are exportable — accessible for audit, regulatory submission, and post-market analysis
Why retrofitting fails for agent operations
Traditional software has deterministic execution paths. You can instrument those paths and produce reliable logs. AI agents are different in three critical ways:
1. Agents lose context
An agent’s context window is ephemeral. When the window resets — after a timeout, a new session, or hitting the token limit — everything the agent “knew” about the task is gone. A new session can only reconstruct what happened by reading external records. If those records do not exist yet because you planned to add logging later, the reconstruction is a fabrication.
2. Agents reconstruct history
Ask an LLM to summarize what it did and you get a plausible narrative, not a factual record. The model generates text that looks like a log entry but is actually a post-hoc rationalization. This is the opposite of what Article 12 requires. Regulatory logs must reflect what actually happened — not what the system believes happened after the fact.
3. Delegation fractures the record
When Agent A delegates to Agent B, and Agent B delegates to Agent C, the original intent is separated from the final execution by multiple context boundaries. Each agent has its own context window, its own provider, and its own interpretation of the task. Retrofitted logging captures fragments from each agent but cannot reconstruct the chain of delegation, the original acceptance criteria, or whether the final output met the original ask.
The common thread: if the structured record does not exist before the work begins, there is nothing reliable to log against. You are not logging events — you are generating narratives.
How structured mandates address each requirement
AGLedger takes a different approach: the mandate lifecycle is the event log. Every unit of agent work is a mandate — a structured record of what needs to be done, by when, within what bounds. The mandate is locked before work starts. The agent delivers a receipt with evidence. The principal renders a verdict. Every state transition, attestation, and verification result is recorded automatically in an append-only audit vault as a byproduct of the protocol.
Logging is not a separate system bolted on after the fact. It is a natural consequence of how agents do their work through the protocol.
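As a rough illustration of "logging as a byproduct," here is a minimal sketch of a mandate state machine in which the only way to change state is through a method that also appends an audit entry. The state names follow the list in the table below; the transition table and field names are assumptions for illustration, not AGLedger's implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical transition table; state names mirror the article's list.
ALLOWED = {
    "created": {"active", "cancelled"},
    "active": {"processing", "cancelled"},
    "processing": {"fulfilled", "failed", "revision_requested"},
    "revision_requested": {"processing", "cancelled"},
}

@dataclass
class Mandate:
    mandate_id: str
    state: str = "created"
    audit_log: list = field(default_factory=list)

    def transition(self, new_state: str, actor: str) -> None:
        if new_state not in ALLOWED.get(self.state, set()):
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        # The audit entry is written inside the transition itself:
        # there is no code path that changes state without producing one.
        self.audit_log.append({
            "mandateId": self.mandate_id,
            "from": self.state,
            "to": new_state,
            "actor": actor,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        self.state = new_state
```

The point of the sketch is structural: because recording happens inside the transition, "automatic recording" is not a feature that can be forgotten or disabled per call site.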
Here is how each Article 12 requirement maps to what the mandate lifecycle produces:
| Art. 12 requirement | What the mandate lifecycle produces |
|---|---|
| Automatic recording | Every state transition (created, active, processing, fulfilled, failed, revision requested, cancelled) generates an audit entry automatically. No manual instrumentation required. Compliance attestations, receipt submissions, and verification outcomes are all recorded as they occur. |
| Traceability | Each audit entry identifies the acting entity, references the mandate it belongs to, and links to the specific input that triggered the event. Delegation chains preserve parentMandateId and chainDepth, so the full path from original intent to final execution is traceable. |
| Ordered chronology | Entries carry a chain_position and are SHA-256 hash-chained. Each entry’s previousHash links to the prior entry’s payloadHash. The genesis entry has a null previousHash. Sequence is cryptographically verifiable, not just timestamp-based. |
| Tamper evidence | Every entry is Ed25519-signed and hash-chained. Payloads are canonicalized using RFC 8785 (JCS) before hashing. Contiguous hash links mean any modification to a prior entry breaks the chain. Signatures are independently verifiable against the signing public key included in the export. |
| Exportability | Full audit chain exportable as structured JSON, CSV, or NDJSON. Exports include mandate metadata, all audit entries with timestamps, chain integrity verification, signing key references, and format versioning. Designed for regulatory submission and third-party audit. |
Tested end-to-end: 78 assertions, zero failures
This is not a roadmap item. The AGLedger testbed runs a dedicated EU AI Act compliance scenario that exercises every article we map to — including Article 12 specifically. The most recent run: 78 assertions, 78 passed, 0 failed.
The Article 12 assertions test:
- Creation, activation, and receipt events are logged automatically
- All entries carry payload hashes (tamper evidence)
- All entries carry chain position (ordering)
- Hash chain links are contiguous — each previousHash matches the prior payloadHash
- Genesis entry has null previousHash
- All entries are cryptographically signed (Ed25519)
- All signatures verify correctly
- All entries reference signing key ID
- Export includes canonicalization method (RFC 8785)
- Export includes signing public key and format version
- Audit trail cannot be deleted (append-only confirmed — DELETE returns 404)
The full test covers eleven articles in total, not just Article 12 — risk classification (Art. 9), transparency (Art. 13), human oversight (Art. 14), accuracy and robustness (Art. 15), quality management (Art. 17), technical documentation (Art. 18), corrective actions (Art. 20), deployer obligations (Art. 26), fundamental rights impact assessment (Art. 27), and registration (Art. 49).
What AGLedger provides vs. what your organization owns
AGLedger provides the accountability infrastructure. It does not replace your organization’s judgment, policies, or regulatory decisions. This distinction matters for Article 12 compliance:
| AGLedger provides | Your organization owns |
|---|---|
| Append-only audit vault with automatic event recording | Determining which AI systems are high-risk and in scope for Article 12 |
| Ed25519-signed, SHA-256 hash-chained entries with RFC 8785 canonicalization | Key management policies and signing key rotation schedules |
| Structured exports (JSON, CSV, NDJSON) formatted for regulatory submission | Deciding retention periods and access policies for log data |
| Delegation chain tracking with parentMandateId and chainDepth | Defining delegation authority boundaries and approval workflows |
| Self-hosted on your infrastructure — your vault, your keys, your data | Infrastructure security, backup procedures, and disaster recovery |
We are deliberate about this boundary. AGLedger does not determine whether your AI system is high-risk. It does not decide what constitutes adequate logging for your domain. It provides the infrastructure that makes producing Article 12-compliant event logs a byproduct of how your agents already work — not a separate compliance project.
The design principle that makes this work
The core insight is simple: if the structured record does not exist before the work begins, you cannot produce a reliable event log after the fact. AGLedger creates the mandate before the agent starts working. Every subsequent action — receipt submission, verification, verdict, corrective action — is recorded against that pre-existing structure. The audit trail is not assembled retrospectively. It accumulates as the work happens.
This is the difference between “we can show you what happened” and “we captured what happened as it occurred, and the record is cryptographically intact.” Article 12 requires the latter.
Getting started
AGLedger maps to 11 articles of the EU AI Act. The full article-by-article breakdown — including what AGLedger provides and what your organization still owns for each article — is available on the EU AI Act compliance mapping page.
If you are evaluating Article 12 readiness for your AI agent operations, the fastest path is to try the live demo or download and run it on your own infrastructure. The compliance scenario tested above runs against the same API your agents would use.
Sources & further reading
- Regulation (EU) 2024/1689 — full text (EUR-Lex) — Article 12: Automatic recording of events (logging)
- AGLedger EU AI Act compliance mapping — 11-article breakdown with provides/owns distinction
- European Commission — AI Act regulatory framework
- AI Act Explorer — Article 12
- The Artificial Intelligence Act (community resource)