Vault Signing Key Lifecycle and DR
Manage Ed25519 signing keys, rotate without downtime, verify audit trails offline, and recover from disasters.
Every audit vault entry is signed with an Ed25519 key. The signing key registry tracks all keys — active and retired — so historical entries remain verifiable after rotation. This guide covers viewing keys, rotating them, exporting audit trails for offline verification, running vault integrity scans, and disaster recovery procedures.
Prerequisites
- AGLedger running on your infrastructure (v0.13.3 or later)
- A platform key (`ach_pla_...`) with `admin:system` scope
- `VAULT_SIGNING_KEY` configured in your environment (set during initial deployment)
All endpoints in this guide require platform-level authentication. Enterprise and agent keys are rejected with 403.
How It Works
AGLedger stores public keys in a vault_signing_keys database table. Private keys remain in environment variables and never leave the server process.
On startup, AGLedger reads VAULT_SIGNING_KEY from the environment, derives the Ed25519 public key, and registers it in the database via initSigningKeyRegistry(). If the key already exists, it is a no-op.
Each key has two possible states:
| Status | Meaning |
|--------|---------|
| active | Currently used to sign new vault entries |
| retired | Previously active. Never deleted, never reactivated. Historical entries signed by this key remain valid. |
When AGLedger verifies a vault chain, it resolves each entry's signingKeyId through a three-layer lookup: in-memory cache, then the database registry, then env-var fallback (VAULT_SIGNING_KEY + VAULT_SIGNING_KEY_PREVIOUS). This means the system can verify entries signed by any key that was ever active, even after many rotations.
Cross-replica cache invalidation uses PostgreSQL LISTEN/NOTIFY, so all instances pick up key changes within seconds.
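The three-layer lookup can be sketched as follows. This is an illustrative Python sketch, not AGLedger's implementation: the function name and the shape of the cache, registry lookup, and env-derived key map are assumptions based on the description above.

```python
def resolve_public_key(signing_key_id, cache, db_lookup, env_keys):
    """Resolve a signingKeyId to its public key via the three-layer
    lookup: in-memory cache, then database registry, then keys
    derived at startup from VAULT_SIGNING_KEY / _PREVIOUS.

    db_lookup(key_id) returns a public key or None on miss;
    env_keys maps keyId -> public key for the env-var fallback.
    """
    # Layer 1: in-memory cache
    if signing_key_id in cache:
        return cache[signing_key_id]

    # Layer 2: database registry
    key = db_lookup(signing_key_id)
    if key is not None:
        cache[signing_key_id] = key  # warm the cache for next time
        return key

    # Layer 3: env-var fallback (current + previous key)
    return env_keys.get(signing_key_id)

# Demonstration with stand-in data:
cache = {}
db = {"a1b2c3d4e5f60718": "PUB_A"}
env_keys = {"f8e7d6c5b4a39201": "PUB_B"}
print(resolve_public_key("a1b2c3d4e5f60718", cache, db.get, env_keys))  # PUB_A
print(resolve_public_key("f8e7d6c5b4a39201", cache, db.get, env_keys))  # PUB_B
```

The cache-warming step in layer 2 is what makes the LISTEN/NOTIFY invalidation matter: without it, stale entries could outlive a rotation.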
Viewing Current Keys
List all signing keys in the registry:
PLATFORM_KEY="ach_pla_your_platform_key"
API_BASE="https://agledger.your-company.com"
curl -s "$API_BASE/v1/admin/vault/signing-keys" \
-H "Authorization: Bearer $PLATFORM_KEY"
Response:
{
"data": [
{
"keyId": "a1b2c3d4e5f60718",
"algorithm": "Ed25519",
"status": "active",
"activatedAt": "2026-03-15T10:30:00.000Z",
"retiredAt": null
}
],
"total": 1,
"nextCursor": null,
"hasMore": false
}
Each key has:
| Field | Description |
|-------|-------------|
| keyId | SHA-256 fingerprint of the public key (16 hex characters) |
| algorithm | Always Ed25519 |
| status | active or retired |
| activatedAt | When this key was first registered |
| retiredAt | When this key was replaced (null if active) |
There is always exactly one active key. After rotations, retired keys accumulate — they are never removed.
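The keyId fingerprint can be reproduced if you hold the public key material. The exact derivation below (SHA-256 over the base64-decoded public key, truncated to 16 hex characters) is an assumption consistent with the table above, not a documented contract; confirm it against your deployment before relying on it.

```python
import base64
import hashlib

def key_fingerprint(public_key_b64: str) -> str:
    """Assumed keyId derivation: SHA-256 of the decoded public key,
    truncated to the first 16 hex characters (8 bytes)."""
    decoded = base64.b64decode(public_key_b64)
    return hashlib.sha256(decoded).hexdigest()[:16]

# Example with an arbitrary byte string standing in for a real key:
fp = key_fingerprint(base64.b64encode(b"example-public-key").decode())
print(fp, len(fp))
```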
Key Rotation Runbook
Rotation replaces the active signing key. New vault entries are signed with the new key. Old entries keep their original signatures and remain verifiable.
Step 1: Generate a new Ed25519 keypair
Use the bundled key generation script:
docker compose exec agledger-api \
/nodejs/bin/node dist/scripts/generate-signing-key.js
This outputs a line like VAULT_SIGNING_KEY=<base64-encoded-key>. Save the new key value.
Alternatively, generate with OpenSSL:
openssl genpkey -algorithm ed25519 -outform DER 2>/dev/null | openssl base64 -A
Step 2: Update environment variables
Set the old key as VAULT_SIGNING_KEY_PREVIOUS and the new key as VAULT_SIGNING_KEY:
# In your .env file or secrets manager:
VAULT_SIGNING_KEY_PREVIOUS=<old-key-value>
VAULT_SIGNING_KEY=<new-key-value>
The VAULT_SIGNING_KEY_PREVIOUS env var acts as a two-key bridge during rotation. It ensures the system can verify entries signed by the immediately preceding key even before the database registry is consulted.
Step 3: Activate the new key
You have two options:
Option A: Restart (simpler). Restart the service. On startup, AGLedger auto-registers the new key and retires the old one:
# Docker Compose
docker compose restart agledger-api agledger-worker
# Kubernetes
kubectl rollout restart deployment/agledger-api deployment/agledger-worker
Option B: Zero-downtime API rotation (no restart). If the service is already running with the new VAULT_SIGNING_KEY env var injected (e.g., via Kubernetes secret refresh), call the rotation endpoint:
curl -s -X POST "$API_BASE/v1/admin/vault/signing-keys/rotate" \
-H "Authorization: Bearer $PLATFORM_KEY" \
-H "Content-Type: application/json" \
-d '{}'
Response when rotation succeeds:
{
"previousKeyId": "a1b2c3d4e5f60718",
"newKeyId": "f8e7d6c5b4a39201",
"status": "rotated",
"nextSteps": [
{
"action": "Verify signing keys",
"method": "GET",
"href": "/v1/admin/vault/signing-keys"
}
]
}
If the env var has not changed, the response indicates no action was taken:
{
"previousKeyId": null,
"newKeyId": "a1b2c3d4e5f60718",
"status": "already_active",
"nextSteps": [
{
"action": "Verify signing keys",
"method": "GET",
"href": "/v1/admin/vault/signing-keys"
}
]
}
Step 4: Verify the rotation
curl -s "$API_BASE/v1/admin/vault/signing-keys" \
-H "Authorization: Bearer $PLATFORM_KEY"
Confirm: old key shows status: "retired" with a retiredAt timestamp. New key shows status: "active".
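These post-rotation invariants can be asserted in a few lines against the listing response. A minimal sketch assuming the `data` array shape shown earlier; the HTTP call itself is left out.

```python
def check_rotation(keys, old_key_id, new_key_id):
    """Assert post-rotation invariants over the signing-keys listing:
    exactly one active key (the new one), and the old key retired
    with a retiredAt timestamp."""
    active = [k for k in keys if k["status"] == "active"]
    assert len(active) == 1, "expected exactly one active key"
    assert active[0]["keyId"] == new_key_id, "new key is not active"
    old = next(k for k in keys if k["keyId"] == old_key_id)
    assert old["status"] == "retired", "old key not retired"
    assert old["retiredAt"], "old key missing retiredAt timestamp"

# Sample data mirroring the rotation response above:
keys = [
    {"keyId": "a1b2c3d4e5f60718", "status": "retired",
     "retiredAt": "2026-04-09T14:00:00.000Z"},
    {"keyId": "f8e7d6c5b4a39201", "status": "active", "retiredAt": None},
]
check_rotation(keys, "a1b2c3d4e5f60718", "f8e7d6c5b4a39201")
print("rotation verified")
```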
Step 5: Clean up the previous key env var
After confirming rotation, VAULT_SIGNING_KEY_PREVIOUS can be removed from your environment. The retired key remains in the database registry and will be used for historical verification.
# Remove from .env or secrets manager:
# VAULT_SIGNING_KEY_PREVIOUS= (delete this line)
Offline Verification with Audit Exports
Audit exports include everything needed to verify the chain offline — entry hashes, signatures, and all public keys that signed any entry in the chain.
Export an audit trail:
ENTERPRISE_KEY="ach_ent_your_enterprise_key"
MANDATE_ID="019d3b11-60c4-..."
curl -s "$API_BASE/v1/mandates/$MANDATE_ID/audit-export" \
-H "Authorization: Bearer $ENTERPRISE_KEY"
Response (abbreviated):
{
"exportMetadata": {
"mandateId": "019d3b11-60c4-...",
"enterpriseId": "019d3b11-...",
"contractType": "ACH-PROC-v1",
"operatingMode": "cleartext",
"exportDate": "2026-04-09T14:00:00.000Z",
"totalEntries": 5,
"chainIntegrity": true,
"exportFormatVersion": "1.0",
"canonicalization": "RFC8785",
"signingPublicKey": "MCowBQYDK2Vw...",
"signingPublicKeys": {
"a1b2c3d4e5f60718": "MCowBQYDK2Vw...",
"f8e7d6c5b4a39201": "MCowBQYDK2Vw..."
}
},
"entries": [
{
"position": 1,
"timestamp": "2026-04-09T12:00:00.000Z",
"entryType": "STATE_TRANSITION",
"description": "Mandate created",
"payload": { "...": "..." },
"integrity": {
"payloadHash": "a3f2b1c4d5e6...",
"hashAlg": "SHA-256",
"previousHash": null,
"signature": "e4d3c2b1a0f9...",
"signatureAlg": "Ed25519",
"signingKeyId": "a1b2c3d4e5f60718",
"valid": true
}
},
{
"position": 2,
"timestamp": "2026-04-09T12:01:00.000Z",
"entryType": "STATE_TRANSITION",
"description": "Mandate registered",
"payload": { "...": "..." },
"integrity": {
"payloadHash": "b4c5d6e7f8a9...",
"hashAlg": "SHA-256",
"previousHash": "a3f2b1c4d5e6...",
"signature": "d3c2b1a0f9e8...",
"signatureAlg": "Ed25519",
"signingKeyId": "a1b2c3d4e5f60718",
"valid": true
}
}
]
}
Key fields for offline verification:
| Field | Purpose |
|-------|---------|
| exportMetadata.signingPublicKeys | Map of keyId to base64-encoded Ed25519 public key. Contains every key that signed any entry in this export. |
| integrity.payloadHash | SHA-256 hash of the RFC 8785 (JCS) canonicalized payload |
| integrity.previousHash | Hash of the preceding entry (null for position 1). Forms the hash chain. |
| integrity.signature | Ed25519 signature over the payload hash (hex-encoded, 128 chars) |
| integrity.signingKeyId | Which key signed this entry. Look up the public key in signingPublicKeys. |
| integrity.valid | Server-side verification result at export time |
| exportMetadata.chainIntegrity | true if all previousHash links are consistent |
To verify offline: for each entry, canonicalize the payload with RFC 8785, compute SHA-256, confirm it matches payloadHash, then verify the Ed25519 signature using the public key from signingPublicKeys[signingKeyId]. Confirm each entry's previousHash equals the prior entry's payloadHash.
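The hash-chain portion of that procedure can be sketched with only the Python standard library. Two caveats: `json.dumps` with sorted keys and compact separators approximates RFC 8785 canonicalization (it matches for simple payloads but is not a full JCS implementation), and the Ed25519 signature check is omitted here because it requires a third-party library such as `cryptography`.

```python
import hashlib
import json

def canonicalize(payload: dict) -> bytes:
    # Approximation of RFC 8785 (JCS): sorted keys, no whitespace.
    return json.dumps(payload, sort_keys=True,
                      separators=(",", ":"), ensure_ascii=False).encode()

def verify_hash_chain(export: dict) -> bool:
    """Check payload hashes and previousHash links for every entry.
    Signature verification (Ed25519 over payloadHash, using the key
    from signingPublicKeys[signingKeyId]) is intentionally omitted."""
    prev_hash = None
    for entry in export["entries"]:
        integrity = entry["integrity"]
        computed = hashlib.sha256(canonicalize(entry["payload"])).hexdigest()
        if computed != integrity["payloadHash"]:
            return False  # payload altered after signing
        if integrity["previousHash"] != prev_hash:
            return False  # chain link broken
        prev_hash = integrity["payloadHash"]
    return True

# Build a tiny two-entry export to demonstrate a passing check:
p1, p2 = {"event": "created"}, {"event": "registered"}
h1 = hashlib.sha256(canonicalize(p1)).hexdigest()
h2 = hashlib.sha256(canonicalize(p2)).hexdigest()
export = {"entries": [
    {"payload": p1, "integrity": {"payloadHash": h1, "previousHash": None}},
    {"payload": p2, "integrity": {"payloadHash": h2, "previousHash": h1}},
]}
print(verify_hash_chain(export))  # True
```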
Export formats: JSON (default), CSV, and NDJSON are available via the format query parameter.
# CSV export
curl -s "$API_BASE/v1/mandates/$MANDATE_ID/audit-export?format=csv" \
-H "Authorization: Bearer $ENTERPRISE_KEY"
# NDJSON export (one JSON object per line)
curl -s "$API_BASE/v1/mandates/$MANDATE_ID/audit-export?format=ndjson" \
-H "Authorization: Bearer $ENTERPRISE_KEY"
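For the NDJSON format, each non-empty line parses as one standalone JSON object, which keeps memory flat for large exports. A minimal reader, assuming only that one-object-per-line shape:

```python
import json

def read_ndjson(text: str) -> list:
    """Parse an NDJSON export: one JSON object per non-empty line."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]

sample = '{"position": 1}\n{"position": 2}\n'
print([entry["position"] for entry in read_ndjson(sample)])  # [1, 2]
```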
Retain audit exports as an offline backup. If the database is lost, these exports are your independent proof that the audit trail existed and was intact.
Vault Integrity Scanning
A vault scan walks every mandate's hash chain in the database and verifies signatures. Use it after a database restore, during incident response, or as a periodic health check.
Start a scan
curl -s -X POST "$API_BASE/v1/admin/vault/scan" \
-H "Authorization: Bearer $PLATFORM_KEY" \
-H "Content-Type: application/json" \
-d '{}'
Response (202):
{
"jobId": "019d4c22-7a8b-...",
"status": "queued",
"nextSteps": [
{
"action": "Check scan status",
"method": "GET",
"href": "/v1/admin/vault/scan/019d4c22-7a8b-..."
}
]
}
To scan specific mandates instead of the entire vault:
curl -s -X POST "$API_BASE/v1/admin/vault/scan" \
-H "Authorization: Bearer $PLATFORM_KEY" \
-H "Content-Type: application/json" \
-d '{
"mandateIds": [
"019d3b11-60c4-...",
"019d3b11-70d5-..."
]
}'
The mandateIds array accepts up to 1,000 UUIDs.
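If you have more than 1,000 mandates to scan, split the IDs into batches and submit one scan request per batch. A minimal helper, assuming only the documented 1,000-ID limit:

```python
def batch_mandate_ids(mandate_ids, max_per_scan=1000):
    """Split a list of mandate IDs into chunks that fit the
    documented per-scan limit, one scan request per chunk."""
    return [mandate_ids[i:i + max_per_scan]
            for i in range(0, len(mandate_ids), max_per_scan)]

batches = batch_mandate_ids([f"id-{n}" for n in range(2500)])
print([len(b) for b in batches])  # [1000, 1000, 500]
```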
Poll for results
JOB_ID="019d4c22-7a8b-..."
curl -s "$API_BASE/v1/admin/vault/scan/$JOB_ID" \
-H "Authorization: Bearer $PLATFORM_KEY"
Response when complete:
{
"jobId": "019d4c22-7a8b-...",
"state": "completed",
"startedAt": "2026-04-09T14:00:01.000Z",
"completedAt": "2026-04-09T14:00:12.000Z",
"result": {
"total": 1250,
"verified": 1250,
"broken": 0,
"signatureErrors": 0,
"brokenMandates": [],
"errors": []
}
}
Result fields:
| Field | Meaning |
|-------|---------|
| total | Number of vault chains scanned |
| verified | Chains where every hash link and signature is valid |
| broken | Chains with at least one broken hash link |
| signatureErrors | Entries with invalid Ed25519 signatures |
| brokenMandates | Array of { mandateId, brokenAt } indicating the chain position where the break was detected |
| errors | Any processing errors encountered during the scan |
Possible state values: created, active, completed, failed, expired.
A clean scan (broken: 0, signatureErrors: 0) confirms vault integrity across the entire database.
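Polling until the job reaches a terminal state is easy to wrap in a helper. A sketch with an injectable fetch function so it can be exercised without the API; the terminal states match the list above.

```python
import time

TERMINAL_STATES = {"completed", "failed", "expired"}

def wait_for_scan(fetch_job, interval=2.0, timeout=300.0, sleep=time.sleep):
    """Poll fetch_job() until the scan reaches a terminal state.
    fetch_job returns the job JSON as a dict, e.g. the body of
    GET /v1/admin/vault/scan/{jobId}."""
    deadline = time.monotonic() + timeout
    while True:
        job = fetch_job()
        if job["state"] in TERMINAL_STATES:
            return job
        if time.monotonic() >= deadline:
            raise TimeoutError(f"scan still {job['state']} after {timeout}s")
        sleep(interval)

# Demonstrate with a fake fetcher that completes on the third poll:
responses = iter([{"state": "created"}, {"state": "active"},
                  {"state": "completed", "result": {"broken": 0}}])
job = wait_for_scan(lambda: next(responses), sleep=lambda _: None)
print(job["state"])  # completed
```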
External Checkpoint Anchoring
When VAULT_ANCHOR_ENABLED=true and S3/MinIO credentials are configured, AGLedger uploads signed checkpoint digests to object storage after each 6-hour checkpoint cycle. These external anchors use COMPLIANCE mode object lock, making them immutable — even if the database is compromised, the anchors survive.
A daily verification job automatically compares database checkpoints against their S3 anchors and logs discrepancies.
List anchored checkpoints
curl -s "$API_BASE/v1/admin/vault/anchors?mandateId=$MANDATE_ID" \
-H "Authorization: Bearer $PLATFORM_KEY"
Response:
{
"data": [
{
"key": "checkpoints/019d3b11-60c4-.../pos-42.json",
"lastModified": "2026-04-09T06:00:00.000Z",
"size": 1024
}
],
"total": 1,
"nextCursor": null,
"hasMore": false
}
The mandateId query parameter is required.
Verify anchors against the database
curl -s -X POST "$API_BASE/v1/admin/vault/anchors/verify" \
-H "Authorization: Bearer $PLATFORM_KEY" \
-H "Content-Type: application/json" \
-d '{ "mandateId": "019d3b11-60c4-..." }'
Response:
{
"data": [
{
"mandateId": "019d3b11-60c4-...",
"chainPosition": 42,
"match": true,
"detail": "DB checkpoint matches S3 anchor"
}
],
"nextSteps": [
{
"action": "List all anchors",
"method": "GET",
"href": "/v1/admin/vault/anchors?mandateId=019d3b11-60c4-..."
}
]
}
To verify a specific chain position:
curl -s -X POST "$API_BASE/v1/admin/vault/anchors/verify" \
-H "Authorization: Bearer $PLATFORM_KEY" \
-H "Content-Type: application/json" \
-d '{
"mandateId": "019d3b11-60c4-...",
"chainPosition": 42
}'
If anchoring is not enabled, the detail field indicates "Anchoring not enabled" or "No checkpoint found".
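When verifying many checkpoints, it helps to collect only the mismatches for triage. A small helper over the verify response shape shown above; the sample mismatch row and its `detail` text are illustrative, not actual API output.

```python
def find_anchor_mismatches(verify_response: dict) -> list:
    """Return the checkpoints whose database state does not match
    the S3 anchor, keeping position and detail for triage."""
    return [
        {"mandateId": row["mandateId"],
         "chainPosition": row["chainPosition"],
         "detail": row["detail"]}
        for row in verify_response["data"]
        if not row["match"]
    ]

# Illustrative response with one matching and one mismatched anchor:
resp = {"data": [
    {"mandateId": "m-1", "chainPosition": 42, "match": True,
     "detail": "DB checkpoint matches S3 anchor"},
    {"mandateId": "m-2", "chainPosition": 7, "match": False,
     "detail": "digest mismatch (illustrative)"},
]}
print(find_anchor_mismatches(resp))
```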
Disaster Recovery
What to back up
| Asset | Location | Priority |
|-------|----------|----------|
| PostgreSQL database | Your DB host | Critical — contains all vault chains, key registry, mandate state |
| VAULT_SIGNING_KEY env var | Secrets manager / .env | Critical — the current private signing key |
| VAULT_SIGNING_KEY_PREVIOUS env var | Secrets manager / .env | Important during rotation windows |
| Audit exports | Your document store | Recommended — independent offline verification |
| S3 checkpoint anchors | Object storage | Optional but valuable — external trust root |
Use standard PostgreSQL backup tools: pg_dump for logical backups, RDS/Aurora automated snapshots for continuous protection, or WAL archiving for near-zero RPO. The deploy/scripts/backup.sh script wraps pg_dump with recommended flags.
If the database is lost without backup
The signing key registry lives in the database. If the database is lost:
- Current and previous keys are recoverable from the `VAULT_SIGNING_KEY` and `VAULT_SIGNING_KEY_PREVIOUS` env vars. On restart, they are re-registered automatically.
- Older keys are lost. If you rotated more than once, keys older than the previous one exist only in the database. Entries signed by those keys cannot have their signatures verified by the running instance.
- Audit exports remain verifiable. Exported audit trails include `signingPublicKeys`, a complete map of all public keys that signed entries in that export. These exports are self-contained proof, independent of the database.
This is why retaining audit exports is critical. They are your offline verification backup.
Post-restore procedure
After restoring from a database backup or performing point-in-time recovery:
1. Run a vault integrity scan.
curl -s -X POST "$API_BASE/v1/admin/vault/scan" \
-H "Authorization: Bearer $PLATFORM_KEY" \
-H "Content-Type: application/json" \
-d '{}'
# Poll for results
curl -s "$API_BASE/v1/admin/vault/scan/$JOB_ID" \
-H "Authorization: Bearer $PLATFORM_KEY"
After a clean point-in-time recovery, all chains should be intact up to the restore point. PostgreSQL's transactional consistency guarantees that partial vault entries cannot exist.
2. Verify external anchors (if enabled).
Anchors uploaded after the restore point may reference entries that no longer exist in the database. The daily verification job logs these as warnings — they do not block operations.
curl -s -X POST "$API_BASE/v1/admin/vault/anchors/verify" \
-H "Authorization: Bearer $PLATFORM_KEY" \
-H "Content-Type: application/json" \
-d '{ "mandateId": "019d3b11-60c4-..." }'
3. Restart all instances.
Clear in-memory caches and re-initialize the signing key registry:
# Docker Compose
docker compose restart agledger-api agledger-worker
# Kubernetes
kubectl rollout restart deployment/agledger-api deployment/agledger-worker
4. Confirm signing key state.
curl -s "$API_BASE/v1/admin/vault/signing-keys" \
-H "Authorization: Bearer $PLATFORM_KEY"
Verify there is exactly one active key and it matches your current VAULT_SIGNING_KEY.
What is preserved after restore
- Vault hash chain integrity (transactionally consistent with the database snapshot)
- Signing key registry (all keys registered before the restore point)
- Mandate state machine (all transitions are single-transaction)
- API keys and enterprise configuration
What may be lost after restore
- Vault entries written after the restore point
- Signing keys registered after the restore point (re-registered on restart from env vars)
- Webhook deliveries in flight at the time of failure
- pg-boss jobs that were active (reclaimed automatically on restart)
Validated: 48 assertions covering signing key registry, checkpoint anchoring, algorithm agility, and cross-feature validation.