1,000 Mandates per Minute: AGLedger Performance at Scale
Target: 1,000/min. Actual: 5,689/min sustained. Tier 2 benchmark results from the AGLedger testbed — 5,000 mandates, 5 minutes, full lifecycle completion measured end-to-end.
Summary
We ran 5,000 mandates through the full AGLedger lifecycle in 5 minutes — create, register, activate, receipt, verify — on a standard PostgreSQL-backed deployment. The sustained throughput was 5,689 mandates/min, with a 98.86% completion rate and 3.7s median end-to-end latency per full lifecycle. Individual operations run at 496–1,119ms p50, sub-second for every phase except create.
No exotic infrastructure. No in-memory databases. PostgreSQL, standard high-availability configuration, and the default AGLedger deployment.
Benchmark configuration
The Tier 2 benchmark exercises the full mandate lifecycle under sustained load. Each mandate goes through all five phases — creation, registration, activation, receipt submission, and verification — as a single atomic unit. Nothing is stubbed. Every phase hits the API, writes to PostgreSQL, updates the hash chain, and returns a signed response.
```
mode:              tier2-1000pm-5m
target rate:       1,000 mandates/min
duration:          5 minutes
total mandates:    5,000
completed:         4,943 of 5,000
errors:            57 (32 at ACTIVE, 25 at REGISTERED)
completion rate:   98.86%
actual throughput: 5,689 mandates/min
```
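The five phases behave as a strict linear state machine: a mandate can only advance to the next phase, never skip one or regress, which is what makes a failed call safe to retry. A minimal sketch of that invariant in Python (the CREATED, RECEIPTED, and VERIFIED state names are assumptions for illustration; only ACTIVE and REGISTERED appear in the benchmark output):

```python
# Sketch of the five-phase mandate lifecycle as a strict linear
# state machine. State names are illustrative, not AGLedger's actual
# identifiers.
PHASES = ["CREATED", "REGISTERED", "ACTIVE", "RECEIPTED", "VERIFIED"]

class Mandate:
    def __init__(self, mandate_id: str):
        self.mandate_id = mandate_id
        self.state = PHASES[0]  # every mandate begins in the first phase

    def advance(self, target: str) -> None:
        # Only the single next phase is a legal transition; anything
        # else raises before mutating, so a failed call can never
        # leave the mandate in a partially transitioned state.
        expected = PHASES[PHASES.index(self.state) + 1]
        if target != expected:
            raise ValueError(f"illegal transition {self.state} -> {target}")
        self.state = target

m = Mandate("m-001")
for phase in PHASES[1:]:
    m.advance(phase)
print(m.state)  # VERIFIED
```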
Latency by phase
Each mandate lifecycle consists of five API calls. Here is how latency is distributed across the phases.
| Phase | Min | p50 | p95 | p99 | Max |
|---|---|---|---|---|---|
| Create | 44 | 1,119 | 4,812 | 5,346 | 6,268 |
| Register | 90 | 691 | 2,584 | 3,081 | 3,609 |
| Activate | 91 | 693 | 2,605 | 3,216 | 5,997 |
| Receipt | 94 | 722 | 2,726 | 3,507 | 7,812 |
| Verify | 28 | 496 | 2,471 | 3,176 | 7,406 |
| Full lifecycle | 625 | 3,721 | 14,806 | 16,573 | 21,005 |
All values in milliseconds. Source: benchmark-tier2-1000pm-5m-2026-03-26.json
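The percentiles in the table can be recomputed from the raw per-mandate timings in the benchmark JSON. A sketch using the nearest-rank method, assuming each phase's latencies are available as a flat list of millisecond values (the actual file schema is not shown here):

```python
import math

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile, p in (0, 100]."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Illustrative sample only, not the real per-mandate data.
latencies_ms = [44, 120, 496, 691, 722, 1119, 2471, 3081, 4812, 6268]
print(percentile(latencies_ms, 50))  # 722
print(percentile(latencies_ms, 95))  # 6268
```

Nearest-rank is the simplest defensible choice for latency reporting; interpolating methods can emit values no client actually observed.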
Reading the numbers
5,689 mandates/min actual throughput. The target was 1,000/min. The system processed mandates faster than the benchmark fed them, running at nearly 6x the target rate — and it held that rate across the full 5-minute window rather than spiking to it briefly.
3.7s p50 for the full lifecycle. A complete mandate — create, register, activate, receipt, verify — takes under 4 seconds at median. That is five sequential API calls, five database writes, five hash-chain entries, and one or more Ed25519 signatures. Individually, four of the five operations run at a sub-second median; create, the heaviest, sits at 1,119ms.
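Because the five calls run sequentially, the phase medians should roughly account for the lifecycle median. Medians do not add in general, but in this run the sum happens to match the full-lifecycle p50 exactly:

```python
# Phase p50 values from the latency table, in milliseconds.
phase_p50 = {"create": 1119, "register": 691, "activate": 693,
             "receipt": 722, "verify": 496}
print(sum(phase_p50.values()))  # 3721, the full-lifecycle p50 from the table
```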
496–1,119ms p50 for individual operations. Across all five phases, verify is the fastest (496ms p50, 2,471ms p95) and create is the slowest (1,119ms p50, 4,812ms p95). Create carries the most overhead — schema validation, mandate locking, initial hash-chain entry — so its higher latency is expected.
98.86% completion. 57 of 5,000 mandates errored: 32 during activation and 25 during registration. These are transient failures under concurrency pressure, not structural issues. At the target rate (1,000/min) rather than the burst rate, we expect this number to improve further.
What this means for production
Most production deployments will not need 1,000 mandates per minute on day one. A team running 50 agents creating mandates every few seconds generates perhaps 200–500 mandates/min. The benchmark shows the system has headroom well past that.
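The 200–500 mandates/min figure is simple arithmetic. A quick check, assuming 50 agents each creating one mandate every 6 to 15 seconds:

```python
agents = 50
for interval_s in (15, 6):  # one mandate per agent every 6-15 seconds
    rate_per_min = agents * 60 / interval_s
    print(f"every {interval_s}s -> {rate_per_min:.0f} mandates/min")
```

At 200–500/min, the measured 5,689/min leaves better than 10x headroom.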
More importantly, the infrastructure story is simple. AGLedger runs on PostgreSQL 17+. No Redis cluster, no Kafka, no custom sharding layer. Standard high-availability — streaming replication, Aurora or RDS, connection pooling — is all that’s needed. The append-only audit chain means no update contention on hot rows. Writes are sequential appends.
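The append-only property can be illustrated with a minimal hash chain: each entry commits to its predecessor, so new audit rows are pure INSERTs with no hot-row updates, and tampering with any earlier entry breaks every later link. This is an illustrative sketch, not AGLedger's actual chain format:

```python
import hashlib
import json

def append_entry(chain: list[dict], payload: dict) -> None:
    # Each entry's hash covers the previous entry's hash plus the
    # payload, so the audit table only ever takes appends.
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps(payload, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    chain.append({"prev": prev, "payload": payload, "hash": digest})

def verify_chain(chain: list[dict]) -> bool:
    # Replay the chain from genesis; any edited payload or broken
    # link makes verification fail from that point on.
    prev = "0" * 64
    for entry in chain:
        body = json.dumps(entry["payload"], sort_keys=True)
        if entry["prev"] != prev:
            return False
        if hashlib.sha256((prev + body).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

chain: list[dict] = []
append_entry(chain, {"mandate": "m-001", "event": "CREATE"})
append_entry(chain, {"mandate": "m-001", "event": "REGISTER"})
print(verify_chain(chain))  # True
```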
For air-gapped or on-premise deployments, the same numbers apply. AGLedger has no external dependencies beyond the database. No phone-home licensing, no cloud-only features, no degraded mode when disconnected.
Error profile
The 57 errors break down as follows:
- ACTIVE phase: 32 errors
- REGISTERED phase: 25 errors
Both phases involve state transitions that require row-level locks in PostgreSQL. Under 5,689 mandates/min of sustained write pressure, some lock contention is expected. These are retryable errors — the mandate state machine prevents partial transitions, so no data corruption occurs. The append-only audit chain is unaffected.
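Because these failures are retryable, a client-side retry with exponential backoff recovers them without operator intervention. A minimal sketch, with a hypothetical RetryableError standing in for the lock-contention responses:

```python
import random
import time

class RetryableError(Exception):
    """Stand-in for a transient lock-contention failure."""

def with_retries(op, attempts: int = 5, base_delay: float = 0.05):
    # Exponential backoff with jitter: retry transient failures,
    # re-raise once the attempt budget is exhausted.
    for attempt in range(attempts):
        try:
            return op()
        except RetryableError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))

calls = {"n": 0}
def flaky_activate():
    calls["n"] += 1
    if calls["n"] < 3:  # fail twice, then succeed
        raise RetryableError
    return "ACTIVE"

print(with_retries(flaky_activate))  # ACTIVE
```

Jitter matters here: under contention, synchronized retries from many clients would simply recreate the lock pressure that caused the failure.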
Zero errors occurred during the create, receipt, or verify phases — the operations that do not contend for state-transition locks.
Benchmark tiers
The AGLedger testbed runs benchmarks at multiple concurrency levels. The Tier 2 suite includes four rate targets against the same 5-minute window; the 1,000/min run is reported here:
| Target rate | Total | Completed | Completion |
|---|---|---|---|
| 1,000/min | 5,000 | 4,943 | 98.86% |
Additional tier results available on request.
Methodology
The benchmark is fully automated and reproducible. The testbed client creates mandates at the target rate, then drives each through the full lifecycle: create → register → activate → receipt → verify. Every operation is timed individually. Latency measurements are wall-clock, client-side, inclusive of network round-trip.
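Per-operation timing follows the usual client-side pattern: a monotonic clock wrapped around each call, so the measurement includes network round-trip and server processing exactly as a caller experiences it. A sketch, with a placeholder standing in for the real HTTP call:

```python
import time

def timed(op):
    # Wall-clock, client-side measurement on a monotonic clock;
    # perf_counter is immune to system clock adjustments mid-run.
    start = time.perf_counter()
    result = op()
    elapsed_ms = (time.perf_counter() - start) * 1000
    return result, elapsed_ms

def fake_create():  # placeholder for the real create API call
    time.sleep(0.01)
    return {"state": "CREATED"}

result, ms = timed(fake_create)
print(result["state"], round(ms, 1))
```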
The test API endpoint runs the standard AGLedger deployment with no special tuning. PostgreSQL configuration is default for the instance class. No read replicas, no connection pooler tuning, no query-level optimizations beyond what ships in the product.
Raw benchmark data is JSON and stored in the AGLedger testbed repository. Timestamps, per-mandate latencies, and error breakdowns are all included.
Sources & further reading
- Benchmark raw data: agledger-testbed/results/benchmarks/benchmark-tier2-1000pm-5m-2026-03-26.json
- AGLedger API reference: agledger.ai/api (213 routes)
- Testbed methodology: agledger-testbed README
- PostgreSQL HA documentation: PostgreSQL 17 — High Availability
- pgbench reference: PostgreSQL 17 — pgbench
- Ed25519 signatures: Ed25519: high-speed high-security signatures
- EdDSA specification: RFC 8032 — EdDSA
- Pricing & deployment options: agledger.ai/pricing