Frequently Asked Questions

How AOAP™ works, what the governance gateway does, and why enterprise AI accountability matters.

Can we deploy AGLedger™ in our own environment?

Yes. Three deployment models: (1) Standalone Appliance — deploy within your infrastructure, full control, your data stays in your environment, we see nothing. (2) Federated — connect your deployment to peer enterprises for cross-org delegation chain visibility with privacy boundaries. (3) Hosted Service — use our managed service, zero infrastructure.

What can AGLedger LLC see of our data?

For internal deployments: we see nothing, you see everything — it’s your infrastructure and your data. For cross-org federation: payloads are encrypted between organizations. We see chain structure (who delegated to whom, timestamps, state transitions) but do not have access to authorization details, activity evidence, or operational specifics. Each organization controls its own encryption keys — we do not hold them.

What about vendor lock-in?

We’re designed against it. Your audit history, agent identities, and decision records are designed to be portable. AOAP™ is a documented format — not a lock-in mechanism. Deploy as a standalone appliance, federate with our network, or leave entirely. We earn your business through platform quality, not by trapping your data.

How does AGLedger™ support EU AI Act compliance?

The EU AI Act (enforcement begins August 2026) sets specific requirements for high-risk AI systems.

Article 12 (Record-keeping): high-risk systems must automatically record events for traceability — the permanent record provides hash-chained, append-only logs of every reported authorization, delegation, and outcome.
Article 14 (Human oversight): systems must support human review — chain reconstruction traces any outcome to its root authorization.
Article 9 (Risk management): providers must establish risk management processes — agent reputation and pattern detection surface operational trends.
Article 26 (Deployer obligations): deployers must monitor operations — structured activity records provide the evidence base for compliance reporting.

Deploy as a standalone appliance for full regulatory control of your audit data. Full compliance involves requirements beyond audit trails — consult qualified legal counsel for your specific obligations.

What is an AI agent audit trail?

An AI agent audit trail is a structured, tamper-evident record of what autonomous AI agents were authorized to do, how they delegated work, and what outcomes they reported. Unlike application logs or observability traces, an audit trail preserves the full chain of custody: who authorized a task, which agents were involved, what constraints applied, and whether outcomes fell within scope. AGLedger™ produces hash-chained audit trails designed for internal review, regulatory compliance, and incident reconstruction.
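AGLedger’s on-disk format is not public, but the hash-chaining idea behind a tamper-evident log can be sketched in a few lines. Function names and record fields below are illustrative assumptions, not the AOAP™ schema:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def append_entry(chain, record):
    """Append a record to a hash-chained, append-only log.

    Each entry commits to the previous entry's hash, so tampering with
    any earlier record invalidates every hash that follows it.
    """
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    body = {"prev_hash": prev_hash, "record": record}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain):
    """Recompute every hash; return True only if the chain is intact."""
    for i, entry in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else GENESIS
        body = {"prev_hash": entry["prev_hash"], "record": entry["record"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev_hash"] != expected_prev or entry["hash"] != digest:
            return False
    return True
```

Editing any earlier record breaks verification, which is what makes the trail tamper-evident rather than merely tamper-resistant.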

What are AI agent governance tools?

AI agent governance tools help enterprises track, control, and audit autonomous AI agent activity. They answer questions like: what was each agent authorized to do? Who delegated what to whom? Did outcomes stay within scope? AGLedger™ is the governance gateway for agentic operations — providing structured audit trails, delegation chain reconstruction, agent reputation scoring, and incident resolution for CISOs, auditors, and platform teams managing fleets of autonomous agents across teams and systems.

What does AGLedger™ record?

Any autonomous decision, delegation, or outcome. When Agent A authorizes Agent B to perform a task, when B delegates a subtask to Agent C, when C records what it did — the Ledger preserves the full reported chain. This works across teams, systems, and agent frameworks. Marketing agents, infrastructure agents, analytics agents, customer service agents — if it makes decisions autonomously, its activity should be reported to the Ledger.

What is AOAP™, and how does it relate to the platform?

The Agentic Operations and Accountability Protocol™ (AOAP™) defines how to record agent authorizations, delegations, activity records, and outcomes. It’s the standardized highway for agentic accountability. The platform is what we build on top: the permanent record, chain reconstruction, agent reputation, incident resolution, and the partner plugin ecosystem.

Do we have to modify our agent code?

No. The Governance Sidecar™ sits between your agents and their tool servers — no code changes required. Start in observe mode for silent recording. Graduate to advisory mode when you want agents to receive accountability context. When you’re ready for full lifecycle management, the backend MCP Server provides direct authorization tools.

Can we reconstruct an incident after the fact?

Yes — this is the core use case. One API call reconstructs the reported decision chain: who authorized the original task, which agents were delegated subtasks, what constraints applied at each level, what evidence was recorded, and what was decided. Cross-team, cross-system, on demand. No log aggregation, no guesswork.

How does delegation tracking work?

When Agent A delegates a subtask to Agent B, the delegation is recorded with parent-child linkage, inherited constraints, and scope boundaries. Each subsequent delegation extends the chain. AGLedger™ preserves this full delegation chain so that any outcome can be traced back through every handoff to the original authorization. Constraints inherit down the chain — a child delegation cannot exceed its parent’s scope.
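A minimal in-memory sketch (with hypothetical field names, not the Permanent Record’s actual structure) illustrates the two invariants described above: a child’s scope can only narrow, and any delegation can be walked back to its root authorization:

```python
def delegate(ledger, child_id, agent, parent_id, scope):
    """Record a child delegation; scope may only narrow, never widen."""
    parent = ledger[parent_id]
    if not scope <= parent["scope"]:  # subset check on allowed actions
        raise ValueError("child scope exceeds parent scope")
    ledger[child_id] = {"agent": agent, "parent": parent_id, "scope": scope}

def trace_to_root(ledger, delegation_id):
    """Walk parent links from any outcome back to the root authorization."""
    chain = []
    node_id = delegation_id
    while node_id is not None:
        node = ledger[node_id]
        chain.append((node_id, node["agent"], node["scope"]))
        node_id = node["parent"]
    return list(reversed(chain))  # root authorization first
```

Seeding the store with a root authorization, then delegating downward, yields a chain whose first element is always the original grant, which is the property chain reconstruction relies on.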

Does AGLedger™ work with our agent framework?

Yes. AOAP™ is framework-agnostic. If your framework can make HTTP calls, it can record activity to the Ledger via our REST API. The protocol is designed so that different frameworks produce interoperable audit trails.
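The REST schema is not publicly documented, so the sketch below only illustrates the shape of the integration: a structured activity record serialized as JSON, with a hypothetical endpoint and field names, and the actual HTTP call left as a comment:

```python
import json

# Hypothetical AOAP(tm)-style activity record. Field names and the
# endpoint below are illustrative assumptions, not the documented schema.
activity = {
    "agent_id": "analytics-agent-7",
    "authorization_id": "auth-123",
    "contract_type": "data_queries",
    "outcome": "completed_within_scope",
    "timestamp": "2025-06-01T12:00:00Z",
}
payload = json.dumps(activity)

# Any framework that can make HTTP calls can then record the activity, e.g.:
#   import requests
#   requests.post("https://ledger.example.internal/v1/activities",
#                 data=payload,
#                 headers={"Content-Type": "application/json"})
```

The point is only that recording is a plain HTTP POST of structured JSON, so no framework-specific SDK is strictly required.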

How is AGLedger™ different from LLM observability tools?

Observability tools answer “how did my agent perform?” — they trace LLM calls, measure latency, and evaluate output quality. AGLedger™ answers a different question: “what were my agents authorized to do, and did they stay within scope?” We track authorizations, delegation chains, and outcomes over time — not token counts or prompt traces. Key differences: (1) AGLedger runs in your environment — no per-event ingestion fees, you own the database. (2) We’re framework-independent with no platform lock-in. (3) We provide Dual Trail Cross-Validation™ — comparing declared intent against observed behavior. (4) We’re built for CISOs and auditors, not developers debugging prompts. They’re dashboards. We’re the ledger. The tools are complementary — use both.

How does agent reputation scoring work?

Every recorded outcome — scope compliance, delegation success, requester acceptance — contributes to a persistent score for each agent. Unlike one-time benchmarks, reputation reflects continuous real-world performance. When an agent that’s been at 95% suddenly drops to 60%, you see the shift before anyone files a ticket. Scores are computed per contract type, include statistical confidence levels, and update automatically. AGLedger™ makes it possible to detect sudden drops, persistent outliers, and gradual drift before they compound into incidents.
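A toy version of the drop-detection idea (not AGLedger’s scoring model, which is per contract type and includes statistical confidence levels) can be sketched as a rolling success rate compared against a baseline:

```python
from collections import deque

class ReputationTracker:
    """Rolling success rate with a simple sudden-drop alarm (sketch only)."""

    def __init__(self, window=50, drop_threshold=0.2):
        self.outcomes = deque(maxlen=window)  # 1 = success, 0 = failure
        self.baseline = None
        self.drop_threshold = drop_threshold

    def record(self, success):
        self.outcomes.append(1 if success else 0)

    def score(self):
        if not self.outcomes:
            return None
        return sum(self.outcomes) / len(self.outcomes)

    def drifted(self):
        """True when the current score falls well below the baseline."""
        s = self.score()
        if s is None:
            return False
        if self.baseline is None:
            self.baseline = s  # first observation establishes the baseline
            return False
        if self.baseline - s > self.drop_threshold:
            return True
        self.baseline = max(self.baseline, s)  # track improvements
        return False
```

An agent holding a high score that suddenly starts failing trips the alarm as soon as the gap exceeds the threshold, which is the “see the shift before anyone files a ticket” behavior described above.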

What is the Governance Sidecar™?

The Governance Sidecar™ is an MCP proxy that sits between AI agents and their tool servers, recording every tool call without modifying agent code. It detects authorization-worthy patterns across nine contract types using configurable rules, and witnesses that Spec-using agents’ actions match their declarations. It also reveals The Silence — agents acting without AOAP™ mandates. Three modes: observe (silent recording, zero token overhead), advisory (adds accountability annotations), and enforced (blocks unauthorized tool calls). Sub-millisecond proxy overhead. Provider-agnostic — tested across Claude, Gemini, and GPT.

What is Dual Trail Cross-Validation™?

Dual Trail Cross-Validation™ is AGLedger’s method for automatically detecting discrepancies between what AI agents declared they would do (via AOAP™) and what they actually did (observed by the Governance Sidecar™). When both trails exist, the platform cross-references them to catch undeclared actions, type mismatches, scope creep, and unfulfilled authorizations. Tested across Claude, Gemini, and GPT providers with identical detection results.
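The cross-referencing step can be illustrated with plain set logic. The discrepancy classes below mirror the ones named above (scope creep is omitted for brevity), and the input shape, a mapping from action id to contract type, is an assumption:

```python
def cross_validate(declared, observed):
    """Compare declared authorizations against observed tool calls.

    Both inputs map an action id to its contract type. Returns the
    discrepancies between the two trails.
    """
    report = {
        "undeclared": [],     # observed, but never authorized
        "unfulfilled": [],    # authorized, but never observed
        "type_mismatch": [],  # present in both, contract types differ
    }
    for action, ctype in observed.items():
        if action not in declared:
            report["undeclared"].append(action)
        elif declared[action] != ctype:
            report["type_mismatch"].append(action)
    for action in declared:
        if action not in observed:
            report["unfulfilled"].append(action)
    return report
```

When both trails exist, every action falls into exactly one bucket (clean, undeclared, unfulfilled, or mismatched), which is what makes the comparison automatable.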

Can we define our own contract types?

Yes. Nine contract types ship out of the box — procurement, transactions, data queries, delivery, orchestration, communication, authorization, infrastructure, and destructive operations. The schema is fully customizable: modify existing types to fit your organization’s policies, or create entirely new ones. Adding a contract type is configuration, not code.
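Since adding a contract type is configuration rather than code, a custom type might look something like the fragment below. This YAML schema is hypothetical and only illustrates the idea; the actual configuration format is not public:

```yaml
# Hypothetical contract-type definition -- field names are illustrative,
# not the real AGLedger(tm) configuration schema.
contract_types:
  vendor_onboarding:          # a new, organization-specific type
    extends: procurement      # start from a built-in type
    required_fields:
      - requester_id
      - spend_limit
      - approval_chain
    constraints:
      spend_limit_max: 50000
    detection_rules:
      - match_tool: "erp.create_vendor"
        requires_authorization: true
```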

Can we test our policies before enforcing them?

Yes. A multi-agent testbed validates your contract types, detection rules, and enforcement policies against live LLM agents across multiple providers (Claude, Gemini, GPT). Run real agent scenarios — delegation chains, incident detection, adversarial evasion, concurrent operations — and confirm your policies catch what you expect and don’t block what they shouldn’t. Includes precision/recall metrics per contract type and per-call latency benchmarks.

What happens to the audit trail when agents are retired or replaced?

The audit trail persists independently of the agents that created it. When an agent is retired, replaced, or upgraded, its full history of authorizations, delegations, and outcomes remains in the permanent record. New agents inherit the organizational context. The record outlives every agent.

How does AGLedger™ relate to A2A and MCP?

A2A defines how agents communicate with each other. MCP defines how agents access tools and context. AGLedger™ provides the accountability layer on top: what agents were authorized to do, how work was delegated between them, and what they reported back. A2A lets agents talk. MCP lets them use tools. We track what they committed to — with minimal instrumentation required.

What is the Compliance Export?

The Compliance Export delivers the audit trail in formats designed for auditors, regulators, and GRC stacks. It pulls data from the Permanent Record — hash-chained authorizations, delegation chains, activity records, and recorded outcomes — and formats it for regulatory reporting. Designed to support EU AI Act Article 12 record-keeping requirements.

How is AGLedger™ priced?

AGLedger™ uses enterprise licensing. Pricing depends on deployment model (standalone appliance, federated, or hosted service), scale, and support requirements. Contact us for a quote tailored to your use case.

Is AGLedger™ open source?

No. AGLedger™ is proprietary, commercially licensed software. All repositories are private. The SDK is publicly visible on GitHub but licensed under a proprietary license — not an open-source license. There is no free tier, no open-source edition, and no community license.

How long does deployment take?

The Governance Sidecar™ can be deployed in minutes — it’s a single MCP proxy that sits between your agents and their tool servers. No agent code changes. Start in observe mode immediately. Full platform deployment depends on your infrastructure, but standalone appliance deployments are designed for rapid setup.

What is The Silence?

The Silence refers to agents acting without AOAP™ mandates — doing things nobody asked them to do. When agents make tool calls that have no corresponding authorization, those actions exist in silence: unreported, unrecorded, invisible to the audit trail. The Governance Sidecar™ reveals The Silence by observing every tool call and detecting which ones lack AOAP™ mandates. Eliminating The Silence is the goal of Phase 2 of the Trust Ladder.

What is the Trust Ladder?

The Trust Ladder is AGLedger’s three-phase adoption path. Phase 1 (Trust Baseline): add the AOAP™ compliance snippet to agent system instructions — zero infrastructure, instant visibility. Phase 2 (Observability Net): deploy the Governance Sidecar™ in your VPC for Dual Trail Cross-Validation™ and full visibility into The Silence. Phase 3 (Enforcement Kernel): switch to active enforcement — block unauthorized tool calls, trigger settlement signals, achieve 100% AOAP™ compliance. Each phase builds on the previous one. Move at your own pace.

How does AGLedger™ compare to Fiddler AI, Zenity, and Langfuse?

Fiddler AI monitors agent behavior in real-time (dashboards). Zenity governs what agents can do (permissions). Langfuse provides LLM observability for developers (tracing). AGLedger™ records what agents committed to do and whether they did it (the ledger). Key differences: we provide structured accountability records with settlement signals, cross-platform delegation chain tracking, air-gapped on-prem deployment, tolerance-band verification against pre-agreed criteria, and a hash-chained audit vault with Ed25519 signatures. Observability tools and security platforms are complementary — AGLedger™ is the accountability layer they feed into.

Is AOAP™ an open standard?

No. The Agentic Operations and Accountability Protocol™ (AOAP™) is proprietary to AGLedger LLC. It is not an open standard, not open source, and not freely available. The protocol may be made available under separate license terms in the future, but today it is commercially licensed as part of the AGLedger™ platform. The SDK is publicly visible on GitHub for integration purposes but carries a proprietary license.

How does AGLedger™ relate to the Agentic AI Foundation (AAIF)?

The Agentic AI Foundation (AAIF) is a Linux Foundation project that now houses MCP, A2A, AGENTS.md, and goose — with 146+ members including AWS, Anthropic, Google, Microsoft, OpenAI, JPMorgan Chase, and others. AAIF covers agent communication, tool access, and discovery. There is no accountability specification in the AAIF stack. AGLedger™ fills that gap: we provide the structured accountability records, delegation chain tracking, and audit trails that the AAIF protocols don’t address.

Still have questions?

Dive deeper into the spec, the platform, or the API.