What is ready now. What requires a client. What requires agreement.
Pilot thesis
"First pilot should be one client, one workflow, one decision path,
one approval model, one enforcement promise, and one verifiable outcome."
CerbaSeal's enforcement core is ready. What is not yet defined is the client-specific layer:
which workflow, which action classes, which approval model, which deployment environment,
and what the pilot success criteria are. Those are defined in collaboration with a client —
not in this repository.
Most likely first pilot category: AI-assisted decision governance in a regulated or
regulation-exposed workflow where an AI system proposes actions that require structured
human approval before execution.
What is ready and what is not
Ready now
deterministic enforcement gate
ALLOW / HOLD / REJECT outcomes
evidence bundle generation
hash-linked audit log
replay validation
diagnostic reports
operator action guidance
system health verification
integrity verification
browser demo
323 passing tests (15 test files)
pilot-safe mode documentation
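The ready-now list centers on a deterministic gate that maps a proposal to ALLOW, HOLD, or REJECT and records evidence. The following is a minimal illustrative sketch of that shape only; the function name, reason codes, and record fields are hypothetical and are not taken from the CerbaSeal codebase.

```python
import hashlib
import json

# Hypothetical sketch of a deterministic enforcement gate.
# Outcome names match the document; reason codes and field
# names are illustrative, not CerbaSeal's actual API.
ALLOW, HOLD, REJECT = "ALLOW", "HOLD", "REJECT"

def evaluate(proposal: dict, policy: dict) -> dict:
    """Return a deterministic outcome plus a small evidence record."""
    if proposal.get("action_class") not in policy["permitted_action_classes"]:
        outcome, reason = REJECT, "ACTION_CLASS_NOT_PERMITTED"
    elif policy["approval_required"] and not proposal.get("approval_id"):
        outcome, reason = HOLD, "APPROVAL_MISSING"
    else:
        outcome, reason = ALLOW, "OK"
    # sort_keys makes the input hash stable across repeated runs,
    # which is what makes replay comparison possible.
    input_hash = hashlib.sha256(
        json.dumps(proposal, sort_keys=True).encode()
    ).hexdigest()
    return {"input_hash": input_hash, "outcome": outcome, "reason_code": reason}

policy = {"permitted_action_classes": ["refund_small"], "approval_required": True}
print(evaluate({"action_class": "refund_small"}, policy)["outcome"])  # HOLD
```

The point of the sketch is the property the pilot criteria rely on: the same input always yields the same outcome and the same evidence hash, so a second run can be compared byte-for-byte against the first.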
Requires client definition
client workflow name and scope
upstream proposal source
downstream action system
action taxonomy
policy pack reference
provenance source
approval model and authority classes
logging / evidence destination
deployment environment
sensitive data flag scope
prohibited-use conditions
pilot success criteria
Pilot intake checklist — items to define before a pilot begins
workflow name and description
decision action being governed
permitted action classes
approval required: yes / no
prohibited-use conditions
sensitive data flag scope
provenance fields (model, ruleset, hash)
policy pack version
control validity requirements
trust state source
evidence retention expectations
who reviews HOLD outcomes
who receives diagnostic reports
who owns deployment
who owns support
who signs off on success
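The intake checklist above could be captured as a structured record so that gaps are visible before a pilot starts. This is a sketch of one possible shape; every field name here is hypothetical and mirrors the checklist, not an actual CerbaSeal schema.

```python
from dataclasses import dataclass, field

# Illustrative intake record; field names are hypothetical and
# simply mirror the checklist items, not a real CerbaSeal schema.
@dataclass
class PilotIntake:
    workflow_name: str
    governed_action: str
    permitted_action_classes: list
    approval_required: bool
    prohibited_use_conditions: list
    policy_pack_version: str
    hold_reviewer: str
    success_signoff: str
    sensitive_data_flags: list = field(default_factory=list)

    def missing_fields(self) -> list:
        """Names of required text fields still left blank."""
        required = ("workflow_name", "governed_action", "policy_pack_version",
                    "hold_reviewer", "success_signoff")
        return [name for name in required if not getattr(self, name).strip()]

intake = PilotIntake(
    workflow_name="claims-triage",          # example values only
    governed_action="payout release",
    permitted_action_classes=["payout_small"],
    approval_required=True,
    prohibited_use_conditions=["no PHI in proposals"],
    policy_pack_version="",                 # not yet agreed with client
    hold_reviewer="compliance lead",
    success_signoff="",                     # not yet agreed with client
)
print(intake.missing_fields())  # ['policy_pack_version', 'success_signoff']
```

A record like this makes "requires client definition" checkable: the pilot does not start while `missing_fields()` is non-empty.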
Proposed pilot success criteria
These are non-binding sample criteria. Actual criteria would be agreed with the pilot client.
CerbaSeal processes defined workflow inputs deterministically
All scenario outputs are stable across repeated runs
AI cannot produce authority-bearing action in any test case
Required approval paths HOLD until approval is present
Approved action produces release authorization
Evidence bundle is created for every evaluation
Replay matches original outcome
Audit chain verifies
Client can understand diagnostic output without founder involvement
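Two of the criteria above, "replay matches original outcome" and "audit chain verifies", rest on a hash-linked log: each entry commits to the previous one, so any tampering breaks verification. A minimal sketch of that mechanism, with a record shape that is purely illustrative:

```python
import hashlib
import json

# Minimal sketch of a hash-linked audit chain and its verifier.
# Record shape and field names are illustrative, not CerbaSeal's.
GENESIS = "0" * 64

def append(chain: list, record: dict) -> None:
    """Link a record to the previous entry's hash and append it."""
    prev = chain[-1]["entry_hash"] if chain else GENESIS
    body = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev + body).encode()).hexdigest()
    chain.append({"prev_hash": prev, "record": record, "entry_hash": entry_hash})

def verify(chain: list) -> bool:
    """Recompute every link; any edited record breaks the chain."""
    prev = GENESIS
    for entry in chain:
        body = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if entry["prev_hash"] != prev or entry["entry_hash"] != expected:
            return False
        prev = entry["entry_hash"]
    return True

chain = []
append(chain, {"outcome": "HOLD", "reason": "APPROVAL_MISSING"})
append(chain, {"outcome": "ALLOW", "reason": "OK"})
print(verify(chain))  # True
chain[0]["record"]["outcome"] = "ALLOW"  # tamper with history
print(verify(chain))  # False
```

The same determinism that makes the chain verifiable also makes replay checkable: re-running the original inputs must reproduce the same record bodies, and therefore the same entry hashes.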
Pilot scope boundaries
In scope for pilot
one defined workflow
fixed action classes
enforcement evaluation
evidence generation
operator guidance from reason codes
bug fixes within scope
scenario clarification
configuration adjustment
deployment assistance
Out of scope for pilot
new workflow classes
new feature development
new integration layer
multi-client support
production monitoring
indefinite support
open-ended custom development
legal or compliance certification
Items requiring a working agreement
commercial terms
ownership of evidence records
liability boundary
support period and scope
payment and billing
data processing agreement (if applicable)
version freeze and update process
Commercial terms, ownership, liability, support, and payment would need to be defined
in a working agreement before pilot execution. This portal does not include pricing,
revenue terms, or commercial commitments.
Current limitation notice:
This is a review-ready core demo, not a production client deployment.
No pilot client currently exists. All pilot shapes described here are proposed, not contracted.