Version: 0.1.0 · Status: Review Candidate
10-Minute Review Path

Validate the system in under ten minutes without reading full docs. Each button calls the real API route and renders the actual result inline.

  1. Read the definition (30 seconds): go to definition →
  2. Run "AI attempts action" → expect REJECT
  3. Run "Missing approval" → expect HOLD
  4. Run "Approved action" → expect ALLOW
  5. Confirm:
    • execution is blocked when required
    • execution is allowed only with approval
    • outcomes are deterministic
Open the one-page System Brief →

CerbaSeal — External Reviewer Portal

A structured review surface for technical evaluators, security reviewers, and pilot decision-makers.

CerbaSeal is a deterministic execution enforcement spine for AI-assisted workflows. It sits between a decision system and an execution system, returning ALLOW, HOLD, or REJECT with a hash-linked evidence trail.
What problem it solves

AI systems can propose consequential actions. They should not be able to authorize them.

As AI systems move from generating text to proposing actions — in workflows, transactions, accounts, and operations — most governance tools observe and log after the fact. CerbaSeal enforces the execution boundary before the action occurs.

Every proposed action passes through a deterministic gate. The gate checks policy reference, provenance, logging readiness, authority class, approval requirements, control state, and trust state. The result is a structured decision envelope — ALLOW, HOLD, or REJECT — with a hash-linked evidence trail that can be replayed, exported, and audited.
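The decision flow above can be sketched in a few lines of TypeScript. This is an illustrative model only, not the actual CerbaSeal API: the names `GateRequest`, `evaluateGate`, and the reason code `POLICY_REFERENCE_MISSING` are assumptions, while `AI_CANNOT_AUTHORIZE`, `REQUIRED_APPROVAL_MISSING`, and `DECISION_ALLOWED` are the reason codes named elsewhere in this portal.

```typescript
// Illustrative sketch of the gate's decision order. Real CerbaSeal runs
// 12 named invariant checks; this model keeps only three to show the
// ALLOW / HOLD / REJECT split. All names here are hypothetical.

type Outcome = "ALLOW" | "HOLD" | "REJECT";

interface GateRequest {
  actor: "human" | "ai";
  proposalSource: "human" | "ai";
  policyRef: string | null;
  approvalRequired: boolean;
  approvalPresent: boolean;
}

interface DecisionEnvelope {
  outcome: Outcome;
  reasonCode: string;
}

function evaluateGate(req: GateRequest): DecisionEnvelope {
  // Fail closed: an AI actor can never authorize an AI-sourced proposal.
  if (req.actor === "ai" && req.proposalSource === "ai") {
    return { outcome: "REJECT", reasonCode: "AI_CANNOT_AUTHORIZE" };
  }
  // Structural defects reject outright (hypothetical reason code).
  if (!req.policyRef) {
    return { outcome: "REJECT", reasonCode: "POLICY_REFERENCE_MISSING" };
  }
  // A missing required approval pauses execution rather than rejecting it.
  if (req.approvalRequired && !req.approvalPresent) {
    return { outcome: "HOLD", reasonCode: "REQUIRED_APPROVAL_MISSING" };
  }
  return { outcome: "ALLOW", reasonCode: "DECISION_ALLOWED" };
}
```

Because the function is a pure mapping from request fields to an envelope, the same input always yields the same outcome, which is what "deterministic" means in the review path above.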

CerbaSeal enforces authority. It does not determine whether an action is correct. Contextual correctness remains the responsibility of upstream decision systems and human reviewers.

What CerbaSeal is not
Not these things
  • a full AI governance platform
  • a model training or evaluation system
  • a legal compliance certification tool
  • a SaaS product
  • a replacement for security review
  • production-certified for client deployment
  • a substitute for legal or regulatory advice
  • a generic monitoring dashboard
  • a complete enterprise deployment package
Accurately described as
  • a deterministic enforcement gate
  • a structural authority boundary layer
  • a proof-of-enforcement system
  • a self-explaining, self-verifying core
  • a pilot-ready enforcement primitive
  • domain-agnostic and workflow-agnostic
  • fail-closed by design
  • reviewable without founder involvement
Current maturity
Currently implemented
  • deterministic execution gate
  • ALLOW / HOLD / REJECT outcomes
  • 12 named invariant checks
  • 17 reason codes
  • evidence bundle service
  • append-only hash-linked audit log
  • replay service
  • export manifest
  • diagnostic report service
  • operator action reports
  • system health verification
  • browser demo (live)
  • 323 passing tests (15 test files)
Not yet implemented / requires pilot
  • client-specific workflow binding
  • client infrastructure deployment
  • third-party security review
  • production monitoring
  • multi-client support model
  • formal SLA
  • external audit certification
  • persistent audit storage
  • cryptographic signing
  • identity provider integration
  • real client data integration
Core enforcement scenarios — run live
→ REJECT
AI tries to act without authority
An AI actor with an AI-sourced proposal attempts to authorize execution. CerbaSeal blocks unconditionally.
Invariant: INV-05 AI_NON_AUTHORITATIVE
Reason code: AI_CANNOT_AUTHORIZE
→ HOLD
Human submits action without required approval
The request is structurally valid but the required approval is absent. Execution is paused, not rejected.
Invariant: INV-03 NO_REQUIRED_APPROVAL_NO_RELEASE
Reason code: REQUIRED_APPROVAL_MISSING
→ ALLOW
Approved action with valid provenance and approval
All 12 invariants pass. A release authorization is issued. This is the only path to execution.
Invariant: all 12 pass
Reason code: DECISION_ALLOWED
Why this matters
Decision control

Every proposed action must pass a complete invariant check before execution. There is no path around the gate.

Auditability

Every outcome — including REJECT and HOLD — produces a hash-linked evidence bundle. Nothing is silently discarded.

Replayability

Stored evidence can be replayed through the gate. Replay must match the original outcome deterministically.
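The hash-linked, append-only log and the chain check behind auditability and replayability can be sketched as follows. This is a minimal model assuming SHA-256 chaining; the actual CerbaSeal evidence and replay services may structure events differently.

```typescript
import { createHash } from "node:crypto";

// Minimal sketch of an append-only, hash-linked audit log: each event's
// hash covers the previous event's hash, so any tampering breaks the chain.

interface AuditEvent {
  payload: string;
  prevHash: string;
  hash: string;
}

function appendEvent(log: AuditEvent[], payload: string): void {
  const prevHash = log.length > 0 ? log[log.length - 1].hash : "GENESIS";
  const hash = createHash("sha256").update(prevHash + payload).digest("hex");
  log.push({ payload, prevHash, hash });
}

// Recompute every link from the start; a single edited event fails the check.
function verifyChain(log: AuditEvent[]): boolean {
  let prev = "GENESIS";
  for (const ev of log) {
    const expected = createHash("sha256").update(prev + ev.payload).digest("hex");
    if (ev.prevHash !== prev || ev.hash !== expected) return false;
    prev = ev.hash;
  }
  return true;
}
```

Replay then amounts to re-running the stored inputs through the same deterministic gate and confirming both the outcome and the chain verification match the original.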

Non-bypassability

Gate results are registered in a module-private WeakSet. Externally constructed results are rejected by the evidence layer.
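The WeakSet pattern described above can be shown in a short sketch. The function names are illustrative, but the mechanism is the one stated: only result objects created inside the module are ever registered, so an externally constructed object, however well-formed, fails the membership check.

```typescript
// Sketch of the non-bypassability pattern: a module-private WeakSet of
// issued results. Because the set is never exported, outside code cannot
// register its own objects. Names here are illustrative, not the real API.

interface GateResult {
  outcome: "ALLOW" | "HOLD" | "REJECT";
}

const issuedResults = new WeakSet<GateResult>(); // module-private, not exported

function issueResult(outcome: GateResult["outcome"]): GateResult {
  const result: GateResult = { outcome };
  issuedResults.add(result); // only this code path can register a result
  return result;
}

function acceptIntoEvidence(result: GateResult): boolean {
  // A forged result was never registered, so it is rejected here.
  return issuedResults.has(result);
}
```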

Human authority preservation

AI actors cannot authorize their own AI-sourced proposals under any condition. Human approval is a structural requirement, not a configuration flag.

External reviewability

Every claim is backed by code, tests, or visible demo output. Nothing in this portal is aspirational without a label.

Current proof — claims and backing
Claim · Backed by
  • AI cannot produce authority-bearing actions · Code, Tests, Demo
  • Required human approval cannot be bypassed · Code, Tests, Demo
  • Approval artifacts must be bound to the specific request · Code, Tests
  • Forged gate results cannot enter the evidence layer · Code, Tests
  • Unexpected runtime errors fail closed · Code, Tests
  • All outcomes produce hash-linked evidence · Code, Tests, Demo
  • Replay of evidence produces identical outcomes · Code, Tests, Demo
  • 323 tests passing, 0 failing (15 test files) · Tests
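One of the claims above, that approval artifacts must be bound to the specific request, can be illustrated with a tiny sketch. The field and function names are assumptions, not the actual CerbaSeal schema; the point is only that an approval carries the id of the request it approves and cannot be replayed against another.

```typescript
// Hypothetical shape of a request-bound approval artifact. An approval
// minted for one request id does not validate against any other request.

interface ApprovalArtifact {
  approvedRequestId: string;
  approver: string;
}

function approvalBindsTo(approval: ApprovalArtifact, requestId: string): boolean {
  return approval.approvedRequestId === requestId;
}
```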
Reviewer quick path
  1. Read the one-page system definition: docs/one-page.md
  2. Run the REJECT scenario above — confirm AI self-authorization is blocked
  3. Run the HOLD scenario above — confirm missing approval pauses execution
  4. Run the ALLOW scenario above — confirm approved action produces release authorization
  5. Inspect evidence output: check evidenceBundleId, auditEventCount, chainVerified, replayMatchedOriginal
  6. Review the Security page — implemented controls and review questions
  7. Review the Pilot page — what is ready and what requires client definition
  8. Review the Deployment page — deployment posture and options
  9. Run the full test suite: pnpm test
  10. Run support readiness validation: pnpm demo:support:validate
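Step 5 of the path above names four evidence fields. A hedged sketch of what "healthy" looks like for that output, using the field names from the checklist but assumed types, since the real response schema is defined by the demo's API routes:

```typescript
// Hypothetical shape of the evidence summary inspected in step 5.
// Field names come from the checklist above; the types are assumptions.

interface EvidenceSummary {
  evidenceBundleId: string;
  auditEventCount: number;
  chainVerified: boolean;
  replayMatchedOriginal: boolean;
}

function evidenceLooksHealthy(e: EvidenceSummary): boolean {
  return (
    e.evidenceBundleId.length > 0 &&
    e.auditEventCount > 0 &&
    e.chainVerified &&
    e.replayMatchedOriginal
  );
}
```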
Running CerbaSeal Without the Author

Everything required to verify CerbaSeal is in this repository. No external services, accounts, or founder involvement needed.

Commands
  • Start demo: pnpm demo:web
  • Run tests: pnpm test
  • Validate demo: pnpm demo:web:validate
  • Validate review portal: pnpm review:validate
Where to start
  • docs/00-external-reviewer-brief.md
  • docs/demo/enforcement-loop.md
  • examples/browser-demo/
What to verify
  • scenarios produce expected outcomes
  • test suite passes
  • audit chain verifies
  • replay matches

Note: No external dependencies or services are required to run the core demo.

CerbaSeal System Identity
Mark and Name

The CerbaSeal mark is a three-headed guardian representing the three enforcement outcomes: ALLOW, HOLD, and REJECT. The keyhole in the shield body represents controlled execution — nothing passes this boundary without authorization. The name combines Cerberus (the guardian of a threshold) with Seal (the enforcement act of sealing an outcome with evidence).

All claims in this portal are backed by code, tests, or visible demo output.

Current limitation notice: This is a review-ready core demo, not a production client deployment. It does not provide production monitoring, SLA, managed hosting, cryptographic signing, persistent storage, or legal certification.