
Why Audit Logs Aren't Proof

Alex Floyd · 7 min read

Every incident report starts the same way: someone exports the audit log. The rows are there. The timestamps are there. Action performed, user ID, IP address. But when the security team asks “can you prove a specific person authorized this?” — the honest answer is usually no. Not really. Not in a way that survives a legal challenge or a determined internal investigation.

Audit logs are essential. Nobody is arguing you shouldn’t have them. The problem is that “audit log” and “proof” are not synonyms, and treating them as such creates a false sense of security that collapses at exactly the moment it matters most.

The database problem

An audit log is a database table. Every row in that table was written by software your organization controls. Which means anyone who controls that software — or the database directly — can modify, backdate, or delete those rows. Your DBA can do it. A compromised admin account can do it. A sufficiently sophisticated attacker can do it.

This is not theoretical. Database tampering is one of the oldest forms of financial fraud. “We have logs showing the approval happened” is a weak assertion if the logs live in the same system that performed the action. The log writer and the log reader are the same party. There is no independent witness.

Append-only log storage helps. Immutable audit trails in separate infrastructure help more. But none of these approaches address the deeper question: even if the log entry is genuine, does it prove that a specific authorized human made a conscious decision? Or does it just prove that a row was inserted at a certain timestamp?
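To make the "append-only helps, but isn't enough" point concrete, here is a minimal sketch of a hash-chained log: each row's hash covers the previous row's hash, so editing or deleting any row breaks every link after it. This is an illustrative toy, not SignedApproval's implementation, and note what it still cannot do: it proves the log wasn't altered after the fact, not that any particular human made the decision a row describes.

```python
import hashlib, json

def append_entry(chain, entry):
    """Append an entry whose hash covers the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps(entry, sort_keys=True)
    h = hashlib.sha256((prev + body).encode()).hexdigest()
    chain.append({"entry": entry, "prev": prev, "hash": h})

def verify_chain(chain):
    """Recompute every link; any edited or deleted row breaks the chain."""
    prev = "0" * 64
    for row in chain:
        body = json.dumps(row["entry"], sort_keys=True)
        if row["prev"] != prev:
            return False
        if hashlib.sha256((prev + body).encode()).hexdigest() != row["hash"]:
            return False
        prev = row["hash"]
    return True

log = []
append_entry(log, {"user": 4821, "action": "approve_wire", "ts": 1709000000})
append_entry(log, {"user": 4821, "action": "approve_wire", "ts": 1709000300})
assert verify_chain(log)

log[0]["entry"]["user"] = 9999   # tamper with the first row
assert not verify_chain(log)     # every later link now fails to verify
```

Tamper evidence is a real improvement over a bare table, but the chain is still written and verified by the same party, which is exactly the gap the next section addresses.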

What non-repudiation actually means

In cryptography, non-repudiation is the property that makes it impossible for someone to deny having performed an action. The mechanism is a digital signature: a mathematical artifact that can only be produced by someone who possesses a specific private key, and that anyone can verify using the corresponding public key.

If you sign a document with your private key, you cannot later claim you didn’t sign it — because only your key could have produced that signature, and the math is publicly verifiable. If someone else claims you signed something you didn’t, you can prove they’re lying by showing the signature doesn’t verify against your public key.

An audit log has no such property. A row saying “user_id: 4821 approved wire transfer at 14:32:07” is a claim made by the system. A digital signature over that same data is a claim made by the user, one that the system cannot forge even if it wanted to.

How Ed25519 works, in plain language

Ed25519 is an elliptic-curve digital signature algorithm. The name sounds intimidating but the concept is straightforward: you have two keys, a private key you keep secret and a public key you share freely. These keys are mathematically linked in a one-way relationship.

Signing

Take a message (in our case, the canonical approval payload). Run it through the Ed25519 algorithm with your private key. The output is a 64-byte signature. Signing is computationally cheap, taking microseconds, and the private key never leaves the signing system.

Verifying

Take the same message, the signature, and the public key. Run them through the verification algorithm. You get a boolean: valid or invalid. If valid, the signature was produced by whoever holds the corresponding private key. If invalid, either the message was tampered with or the signature is a forgery.

The math guarantee

Deriving the private key from the public key is computationally infeasible with current hardware — it would take longer than the age of the universe. Forging a valid signature without the private key is equally infeasible. This is not a policy decision. It is mathematics.
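The sign-then-verify cycle described above fits in a few lines using the `cryptography` library (the same one the verification example later in this post uses). The message text here is just a stand-in:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Generate a key pair; in practice the private key never leaves the signing system
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"approve wire transfer #4821"
signature = private_key.sign(message)   # a 64-byte signature, produced in microseconds
assert len(signature) == 64

public_key.verify(signature, message)   # silent success: the signature is valid

# Change the message by one character and verification fails
try:
    public_key.verify(signature, b"approve wire transfer #4822")
except InvalidSignature:
    print("tampered message detected")
```

Anyone holding only the public key can run the `verify` call; only the holder of the private key could have produced a signature that passes it.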

What SignedApproval’s canonical payload proves

When a human approves an action through SignedApproval, the system constructs a canonical payload and signs it with an Ed25519 private key. The payload is deterministic — given the same inputs, you always get the same bytes, which means the signature is exactly reproducible.

Canonical payload (signed verbatim)
{
  "v": 1,
  "rid": "request-uuid",       // the approval request ID
  "did": "decision-uuid",      // the decision record ID
  "approver": "sha256(user_id)[:16]",  // hashed approver identity
  "action": "sha256(action_text)[:16]", // hashed action description
  "decision": "approved",
  "method": "passkey",         // authentication method used
  "ts": 1709000000,            // Unix timestamp of the decision
  "exp": 1709086400,           // expiry — signature is not valid after this
  "nonce": "random-hex-32"     // prevents replay attacks
}

Each field is load-bearing. The action hash means you cannot claim a different action was approved — change one character in the original action text and verification fails. The method field records whether authentication was via passkey (hardware-bound, phishing-resistant), TOTP (possession-based), or biometric (Face ID). The nonce prevents an attacker from replaying an old approval against a new action.
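As a sketch of how such a payload could be assembled, here is one plausible construction in Python. The field names come from the payload above; the UUIDs are placeholders, and the reading of `sha256(...)[:16]` as the first 16 hex characters, like the serialization rules, is an assumption for illustration:

```python
import hashlib, json, secrets, time

def h16(text):
    """First 16 hex characters of SHA-256 (one reading of sha256(...)[:16])."""
    return hashlib.sha256(text.encode()).hexdigest()[:16]

now = int(time.time())
payload = {
    "v": 1,
    "rid": "7c9e6679-7425-40de-944b-e07fc1f90ae7",   # example request UUID
    "did": "f47ac10b-58cc-4372-a567-0e02b2c3d479",   # example decision UUID
    "approver": h16("user_4821"),
    "action": h16("Wire $180,000 to Acme Corp, account ending 0042"),
    "decision": "approved",
    "method": "passkey",
    "ts": now,
    "exp": now + 86400,                # valid for 24 hours
    "nonce": secrets.token_hex(16),    # 32 hex characters of randomness
}

# Deterministic serialization: the same inputs always produce the same
# bytes, so the Ed25519 signature over them is exactly reproducible.
canonical = json.dumps(payload, sort_keys=True).encode()

# Change one character of the action text and the hash no longer matches:
assert h16("Wire $180,000 to Acme Corp, account ending 0042") != \
       h16("Wire $180,001 to Acme Corp, account ending 0042")
```

Hashing the approver and action rather than embedding them keeps personal data and action text out of the credential while still binding the signature to both.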

Three scenarios where logs fail and signatures succeed

Scenario 1: The disgruntled employee

A financial controller at a mid-size company approves a $180K wire transfer to a vendor. Two months later, after being terminated, she claims she never approved it — that someone else used her credentials. The company has an audit log entry. She says it was forged. Without a cryptographic signature that only her authentication device could have produced, the dispute becomes a “he said / she said” requiring forensic investigation of server logs, session tokens, and IP addresses — all of which can be spoofed.

With a SignedApproval credential, the signature verifies against her passkey — a hardware-bound credential that cannot be copied or used remotely. The dispute is closed in seconds, not weeks.

Scenario 2: The vendor goes dark

You used a SaaS approval tool for three years. Their servers hosted your approval records. The company shuts down, is acquired, or suffers a breach. Your audit trail is gone, corrupted, or compromised. Two years later, an auditor asks for proof that a specific data access was authorized.

A SignedApproval credential is a self-contained cryptographic artifact. You store it alongside the action record. Verification requires only the public key — which you can obtain from our API or cache permanently. The credential remains valid whether or not our servers are running, forever.

Scenario 3: The AI Act audit

Under Article 14 of the EU AI Act, high-risk AI systems require meaningful human oversight. “Meaningful” means an informed, authenticated human made a real decision — not that a checkbox was ticked in a workflow tool. Your regulator asks: for each high-stakes AI decision in the past 12 months, can you prove a qualified human reviewed and authorized it?

An audit log says “approved_by: user_4821.” A SignedApproval credential says “this specific action was reviewed and approved by an authenticated human using a phishing-resistant passkey at this exact timestamp, and the signature is mathematically verifiable.” One of these satisfies a regulator.

How to verify a SignedApproval credential offline

Offline verification is one of the design requirements we were most deliberate about. You should not need to call our API to prove an approval was valid. Here is the complete verification flow in a few lines of Python:

Python — offline verification
import json, base64
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# Public key from GET /api/v1/approvals/{id}/verify (cache this permanently)
pub = Ed25519PublicKey.from_public_bytes(base64.b64decode(PUBLIC_KEY_B64))

# The credential you received from SignedApproval and stored with your record
payload_bytes = json.dumps(credential["payload"], sort_keys=True).encode()
sig_bytes = base64.b64decode(credential["signature"])

# Raises InvalidSignature if tampered with — otherwise silent success
pub.verify(sig_bytes, payload_bytes)

That is the entire verification. No network call. No API key. No dependency on our infrastructure. The only thing you need is the public key, which is a 32-byte value you can store anywhere. You can burn it into your application, put it in your compliance documentation, engrave it on a stone tablet if you prefer — it is mathematically linked to every approval credential ever produced by this key pair.

Logs are necessary. Signatures are sufficient.

This is not an argument to get rid of audit logs. Logs are extremely useful for operational visibility, debugging, and compliance monitoring. You should absolutely keep them.

The argument is that logs alone are not proof of authorization. A log entry is a claim made by a system. A digital signature is a claim made by a human, one that the system cannot fabricate even if compromised. The distinction matters when the stakes are high: regulatory investigations, legal disputes, incident post-mortems, and the increasingly common question of whether a specific AI agent action was genuinely authorized by a qualified human.

Audit logs tell the story. Ed25519 signatures prove it happened. For the actions that matter most — fund transfers, data access decisions, infrastructure changes, AI-initiated actions with real-world consequences — you want both. And when someone challenges the record, you want the signature. Start adding signatures to your approvals today.

Security · Compliance · Ed25519


Ready to add cryptographic approvals to your agents?

200 free approvals per month. No credit card required.

Get your API key