The governance gateway for enterprise AI agents
A tamper-evident record of what your agents were authorized to do, how they delegated, and what they reported. Reconstructable on demand.
The AGLedger™ platform provides hash-chained audit trails, delegation chain reconstruction, agent reputation scoring, and incident resolution — built for standalone appliance, federated, or hosted service deployment.
How It Fits Together
Four parts. One accountability system.
Agents use the Spec to declare what they're doing. The Sidecar validates those declarations — and catches agents that skip the Spec entirely. The compound effect is what makes AGLedger™ work.
The Standard
AOAP™
Agents use it to declare authorizations, record activity, and report outcomes. The native integration path.
The Observer & Validator
Governance Sidecar™
Validates that Spec-using agents' actions match their declarations. Catches agents that skip the Spec entirely. Three modes: observe, advisory, and enforced.
The Engine
Permanent Record
Stores and chains every record. Hash-chained audit vault, chain reconstruction, reputation scoring.
The Output
Compliance Export
Delivers the audit trail. Formatted for auditors, regulators, and your GRC stack.
Agents declare through the Spec → the Sidecar validates and catches gaps → the Permanent Record chains it → the Compliance Export delivers the audit trail. Each layer feeds the next.
Deployment
Your infrastructure. Your audit data.
Most AI governance tools are SaaS-only — your sensitive operational data goes to someone else's cloud, and you pay per GB to access your own logs. AGLedger™ runs as a standalone service inside your perimeter. You own the database. Your audit trail never leaves your environment. No per-event fees, no ingestion costs, no data held hostage.
Standalone Appliance
Your perimeter, your data, your database
Deploy inside your infrastructure. You own the database — no per-event ingestion fees, no per-GB log costs. Full API access, full visibility, no third-party data exposure.
Federated
Cross-org accountability with privacy boundaries
Cross-org delegation chain tracing where each organization keeps its own data. Payloads are encrypted between parties. Each org controls its own keys.
Hosted Service
Zero infrastructure
Use our hosted service when data sovereignty isn’t a constraint. Full API access. No infrastructure to manage. We handle deployment and operations.
The Permanent Record
Agents are ephemeral. The record endures.
Hash-chained, append-only records of reported authorizations, delegations, and outcomes. When agents span multiple systems, the platform connects the chain. Crash-resilient: the local SQLite ledger survives process kills, power loss, and backend outages with zero data loss. Sync retries automatically when connectivity restores.
Designed to support EU AI Act Article 12 record-keeping requirements. See regulatory readiness below.
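The hash-chaining idea can be illustrated with a minimal sketch. The record fields and `"genesis"` sentinel below are hypothetical, not the platform's actual schema; the point is the mechanism: each entry stores the hash of its predecessor, so altering any earlier record invalidates every link that follows.

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    # Hash the record together with the previous link, so tampering with
    # any earlier record breaks every hash downstream of it.
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain: list, record: dict) -> None:
    prev = chain[-1]["hash"] if chain else "genesis"
    chain.append({"record": record, "prev": prev,
                  "hash": record_hash(record, prev)})

def verify(chain: list) -> bool:
    prev = "genesis"
    for link in chain:
        if link["prev"] != prev or link["hash"] != record_hash(link["record"], prev):
            return False
        prev = link["hash"]
    return True

chain: list = []
append(chain, {"type": "authorization", "agent": "proc-bot", "scope": "PROC"})
append(chain, {"type": "outcome", "agent": "proc-bot", "status": "accepted"})
assert verify(chain)

# Tampering with the first record breaks verification of the whole chain.
chain[0]["record"]["scope"] = "TXN"
assert not verify(chain)
```

This is why the record is append-only: edits are not forbidden by access control alone, they are detectable by construction.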
Dual Trail Cross-Validation™
What agents declared vs. what they actually did.
When agents use the Spec AND run through the Governance Sidecar™, two independent accountability trails are recorded simultaneously. The platform cross-references them automatically — discrepancies surface without manual review.
Signal
Aligned
Declared
Purchase 25 keyboards
Observed
PROC-v1 detected (search_suppliers, create_purchase_order)
Declared intent matches observed behavior.
Signal
Undeclared Action
Declared
Nothing declared
Observed
TXN-v1 detected (transfer_funds)
Agent acted without declaring intent. The Sidecar caught it.
Signal
Type Mismatch
Declared
Research vendors (DATA)
Observed
TXN-v1 detected (transfer_funds)
Agent declared data work but executed a financial transfer.
Signal
Scope Creep
Declared
Data analysis (DATA)
Observed
DATA-v1 + PROC-v1 detected
Agent completed declared work, then did undeclared procurement.
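The four signals above reduce to a simple set comparison between declared and observed contract types. This toy classifier is illustrative only (the function name and rules are assumptions, not the platform's detection logic):

```python
def divergence_signal(declared: set[str], observed: set[str]) -> str:
    """Classify declared vs. observed contract types into a divergence signal."""
    if not declared:
        return "undeclared_action" if observed else "aligned"
    extra = observed - declared
    if not extra:
        return "aligned"        # everything observed was declared
    if observed & declared:
        return "scope_creep"    # declared work done, plus undeclared extra
    return "type_mismatch"      # declared one kind of work, did another

# The four cards above, in order:
assert divergence_signal({"PROC"}, {"PROC"}) == "aligned"
assert divergence_signal(set(), {"TXN"}) == "undeclared_action"
assert divergence_signal({"DATA"}, {"TXN"}) == "type_mismatch"
assert divergence_signal({"DATA"}, {"DATA", "PROC"}) == "scope_creep"
```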
Tested across providers
Dual Trail Cross-Validation™ produces identical detection results across Claude, Gemini, and GPT — same contract types, same confidence levels, same divergence signals. The accountability layer is provider-agnostic.
Reconstruction
Any outcome. Full chain. On demand.
One query reconstructs the full delegation path: who authorized what, which agents were delegated, what constraints applied, what was reported back, and how the outcome was decided. Cross-team, cross-system, cross-agent.
Cross-Team
Trace delegations that span marketing, engineering, operations, and infrastructure agents.
Cross-System
Designed to work with LangChain, AutoGen, CrewAI, and custom agent frameworks.
On Demand
Full chain reconstruction in a single API call. No log aggregation. No guesswork.
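Conceptually, reconstruction walks parent links from an outcome back to the root authorization. The sketch below uses hypothetical record IDs and field names to show the shape of the traversal; the platform's actual API is not reproduced here.

```python
# Hypothetical delegation records keyed by id; "parent" links form the chain.
RECORDS = {
    "auth-3": {"agent": "infra-bot", "action": "provision_vm", "parent": "auth-2"},
    "auth-2": {"agent": "ops-bot", "action": "delegate_provisioning", "parent": "auth-1"},
    "auth-1": {"agent": "human:alice", "action": "approve_scaling", "parent": None},
}

def reconstruct(outcome_id: str) -> list[dict]:
    """Walk parent links from an outcome back to the root authorization."""
    chain = []
    cursor = outcome_id
    while cursor is not None:
        rec = RECORDS[cursor]
        chain.append({"id": cursor, **rec})
        cursor = rec["parent"]
    return list(reversed(chain))  # root authorization first

for step in reconstruct("auth-3"):
    print(step["agent"], "→", step["action"])
```

Because every record carries its parent link, no log aggregation is needed: the chain is a property of the data, not of a search query.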
Integration
Observe first. Integrate when you're ready.
Agents should use the Spec natively — declaring authorizations, recording activity, and reporting outcomes directly. The Governance Sidecar™ is an MCP proxy that validates those declarations against actual tool calls, and catches agents that skip the Spec entirely by detecting authorization-worthy patterns. Start with passive observation and formalize accountability at your own pace.
Step 1
Deploy the Governance Sidecar™
MCP proxy using a four-phase interceptor pipeline. Records every tool call (arguments, results, duration, errors) to a local SQLite ledger. For agents using the Spec, validates that actions match declarations. For agents that don't, detects authorization-worthy patterns across nine contract types (procurement, transactions, data, delivery, orchestration, communication, authorization, infrastructure, and destructive operations). No changes to your agent code.
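The recording step can be sketched with a local SQLite ledger. The table and column names below are assumptions for illustration; the Sidecar's actual schema is not shown in this document.

```python
import json
import sqlite3
import time

# Hypothetical local ledger schema, illustrating what gets captured per call.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE tool_calls (
        id INTEGER PRIMARY KEY,
        tool TEXT NOT NULL,
        arguments TEXT NOT NULL,   -- JSON-encoded call arguments
        result TEXT,               -- JSON-encoded result, NULL on error
        error TEXT,
        duration_ms REAL NOT NULL,
        recorded_at REAL NOT NULL
    )
""")

def record_tool_call(tool, arguments, result=None, error=None, duration_ms=0.0):
    # Record every call: arguments, result, duration, and any error.
    conn.execute(
        "INSERT INTO tool_calls (tool, arguments, result, error, duration_ms, recorded_at)"
        " VALUES (?, ?, ?, ?, ?, ?)",
        (tool, json.dumps(arguments),
         json.dumps(result) if result is not None else None,
         error, duration_ms, time.time()),
    )
    conn.commit()

record_tool_call("create_purchase_order", {"sku": "KB-104", "qty": 25},
                 result={"order_id": "PO-7"}, duration_ms=41.5)
```

Using a local file-backed database (rather than `:memory:`) is what makes the ledger survive process kills and sync later.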
Step 2
Add accountability annotations
Switch to advisory mode. Tool responses include structured accountability context — ~20 tokens overhead per annotated match; unmatched calls pass through untouched. Five companion MCP tools let agents query shadow authorizations, generate reports, formalize patterns, or dismiss false positives.
Step 3
Enforce authorization before execution
In enforced mode, tool calls without an active authorization are blocked. Agents must declare intent before acting. LLMs naturally self-correct when calls are rejected — tested across Claude, Gemini, and GPT providers. Configurable grace windows prevent bootstrap deadlock.
Step 4
Full authorization lifecycle via MCP
Nine operations exposed as standard MCP tools: create authorizations, transition states, submit activity records, report outcomes, query reputation, reconstruct delegation chains, and manage incidents. Direct lifecycle management. Designed to work with any MCP client.
What Gets Recorded
Every tool call is recorded — not just pattern matches. Arguments, results, duration, and errors for every call. Built-in detection rules across nine contract types: procurement, transactions, data queries, delivery, orchestration, communication, authorization, infrastructure, and destructive operations. Each detected pattern includes a confidence score. Custom rules extensible via JSON configuration.
Confidence Tuning
Enterprises tune detection sensitivity to their risk tolerance. High precision for financial transactions, maximum recall for data access. Three levels: low (maximum recall, includes argument-only matches), medium (balanced, drops ambiguous signals), high (maximum precision, strong tool-name matches only). Configurable per deployment.
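The three levels amount to a filter over detected signals. The thresholds below are illustrative placeholders, not the platform's actual values; they show how medium and high drop argument-only matches while low keeps them:

```python
# Illustrative sensitivity thresholds for the three levels described above.
LEVELS = {
    "low":    {"min_confidence": 0.3, "allow_argument_only": True},
    "medium": {"min_confidence": 0.5, "allow_argument_only": False},
    "high":   {"min_confidence": 0.8, "allow_argument_only": False},
}

def keep_signal(level: str, confidence: float, argument_only: bool) -> bool:
    """Decide whether a detected pattern survives the configured sensitivity."""
    cfg = LEVELS[level]
    if argument_only and not cfg["allow_argument_only"]:
        return False  # medium/high drop matches based on arguments alone
    return confidence >= cfg["min_confidence"]

assert keep_signal("low", 0.35, argument_only=True)       # maximum recall
assert not keep_signal("medium", 0.35, argument_only=False)
assert not keep_signal("high", 0.7, argument_only=False)  # maximum precision
```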
Agent Identity & Coverage
Each proxy instance tracks a per-proxy agent name and ID, discovers upstream tool catalogs, and maintains session boundaries. Coverage metrics show which tools are being called and how often. The offline-capable SQLite ledger is designed to sync to the backend when connected.
Agent Reputation
Every outcome shapes the reputation. The trend tells the story.
Every recorded outcome — scope compliance, delegation success, requester acceptance — contributes to a persistent reputation for each agent. Not a one-time benchmark. A living score built from real operational outcomes as they accumulate.
Sudden Drops
An agent at 95% satisfaction suddenly scoring 60%. You see the shift in the trend — before anyone files a ticket.
Outliers
One agent stuck at 40% when everything else is at 90%. The outlier is obvious. The question is why — and the chain tells you.
Gradual Drift
Slow degradation over weeks is invisible in logs. Reputation trends make it visible before it compounds.
Privacy
Your records, your access. Privacy for federation.
Internal deployments give you direct access to all recorded activity — it's your data. When you federate across organizations, payloads are encrypted between parties. Each org sees its own data. The platform sees the chain structure.
Internal deployment
- Direct access to all recorded activity
- Decision chain reconstruction across reported activity
- Your infrastructure, your data
- No encryption barriers for your own teams
Cross-org federation
- Payloads encrypted between organizations
- Chain structure visible, content private
- Each org controls its own keys
- Partner plugins for selective sharing
Regulatory Readiness
Designed for EU AI Act compliance
The EU AI Act entered into force in 2024, and its obligations for high-risk AI systems begin applying in August 2026. High-risk AI systems face specific logging, oversight, and risk management requirements. Here's how the platform maps to them.
Article 12
Record-keeping
High-risk systems must automatically record events for traceability. The permanent record provides hash-chained, append-only logs of every reported authorization, delegation, and outcome.
Article 14
Human oversight
Systems must support human review and intervention. Chain reconstruction traces any outcome back through its full delegation chain on demand.
Article 9
Risk management
Providers must establish ongoing risk management. Agent reputation and pattern detection surface operational trends across your agent fleet over time.
Article 26
Deployer obligations
Deployers must monitor system operation and report incidents. Structured activity records and recorded outcomes provide the evidence base for compliance reporting.
Not sure which obligations apply to your AI systems? The Future of Life Institute's EU AI Act Compliance Checker walks you through a step-by-step questionnaire for each system.
Check Your Obligations
Full EU AI Act compliance involves requirements beyond audit trails. Organizations should consult qualified legal counsel for their specific obligations.
Platform Features
What the platform adds
Built on the spec. Powered by operational data.
Incident Resolution
When something goes wrong, reconstruct the full chain. Evidence review and structured escalation. We record the outcome, not the deliberation.
Partner Plugins
Our architecture supports opt-in partner plugins for compliance auditors, risk assessors, and industry validators. Organizations choose which partners receive access. We do not request or require access to your encrypted payloads.
Agent Reputation
Reputation derived from outcome patterns. Reliability, accuracy, responsiveness — with confidence levels from real operational volume.
Want the full technical picture?
Delegation chains, activity recording, MCP integration, protocol complements, and infrastructure.
Architecture Deep Dive
What you get
- ✓ AOAP™
- ✓ Audit engine
- ✓ SDK & client libraries
- ✓ Schema definitions
Our competitive edge
- ⬤ Runs inside your perimeter — audit data never leaves
- ⬤ Assembled, tested accountability system
- ⬤ Independent — no framework lock-in, no platform bias
- ⬤ Cross-system chain reconstruction
- ⬤ Protected by multiple pending U.S. patent applications
Dashboards vs. the Ledger
Observability tools (Fiddler AI, Langfuse, AgentOps) watch what agents do. Security platforms (Zenity) govern what agents can do. GRC incumbents (OneTrust) manage policy across all AI. AGLedger™ records what agents committed to do and whether they did it. They're dashboards. We're the ledger.
AAIF: The opening
The Agentic AI Foundation (Linux Foundation) now houses MCP, A2A, AGENTS.md, and goose — with 146+ members including AWS, Anthropic, Google, Microsoft, and OpenAI. There is no accountability specification in the AAIF stack. That's our opening.
Ready to talk?
We're working with early partners on standalone, federated, and hosted service deployments. Tell us about your use case.