Evidence-Based Decision Operating System
Status: Conceptual design. No reference implementation.
This repository documents structure, not software.
EBDOS is a decision-centric operating model and governance framework for human–AI automation. It treats decisions and fallbacks as first-class objects so that systems can detect misalignment before incidents occur, rather than explaining failures after the fact.
Instead of optimizing model performance or automation rate, EBDOS focuses on how much automation humans can confidently allow, and how that confidence changes as reality shifts.
Modern automation and AI systems rarely fail suddenly.
They usually fail like this:
- rules are still “correct,” but reality has changed
- human approvals and overrides quietly increase
- exceptions accumulate without a clear pattern
- incidents are analyzed only after damage occurs
Most systems are designed to execute fast and explain later.
Very few are designed to notice when automation is starting to feel unsafe.
The missing capability is not execution speed, but definition, confidence, and alignment.
In practice, automation degrades before it breaks.
Early signals appear as:
- increasing approvals
- manual overrides
- ad-hoc exceptions
- latency-driven workarounds
These are usually treated as noise or operational inconvenience.
EBDOS treats them differently:
They are signals that definitions and instructions no longer fit reality.
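EBDOS has no reference implementation, so the following is only an illustrative sketch of the idea: treat rising human intervention (approvals, overrides, exceptions) as a measurable misalignment signal rather than noise. All names here (`DriftMonitor`, the window and threshold values) are hypothetical.

```python
from collections import deque

class DriftMonitor:
    """Tracks the share of recent requests that required human
    intervention (approval, override, or exception)."""

    def __init__(self, window: int = 100, threshold: float = 0.2):
        self.events = deque(maxlen=window)  # True = a human intervened
        self.threshold = threshold

    def record(self, human_intervened: bool) -> None:
        self.events.append(human_intervened)

    def intervention_rate(self) -> float:
        return sum(self.events) / len(self.events) if self.events else 0.0

    def misaligned(self) -> bool:
        # A rising intervention rate is read as a signal that
        # definitions no longer fit reality, not as noise.
        return self.intervention_rate() > self.threshold

monitor = DriftMonitor(window=10, threshold=0.2)
for intervened in [False] * 7 + [True] * 3:
    monitor.record(intervened)
print(monitor.intervention_rate())  # 0.3
print(monitor.misaligned())         # True
```

A real deployment would feed this from approval and override logs; the point is only that the signal is computed continuously, before any incident.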
EBDOS is built on two core shifts.
Every request is anchored to a decision that binds:
- evidence
- policy version
- approval mode (automatic / human)
- execution attempts
- outcomes and post-action review
This makes every action replayable, auditable, and explainable without reconstructing logs after the fact.
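As a sketch only (the repository defines structure, not software), a decision-as-first-class-object might look like the following. Every field name here is illustrative, not part of any specified schema.

```python
from dataclasses import dataclass, field
from enum import Enum

class ApprovalMode(Enum):
    AUTOMATIC = "automatic"
    HUMAN = "human"

@dataclass
class Decision:
    """A decision as a first-class object: everything needed to
    replay and audit the action is bound at decision time."""
    request_id: str
    evidence: list              # references to the evidence used
    policy_version: str         # the exact policy the decision ran under
    approval_mode: ApprovalMode
    attempts: list = field(default_factory=list)  # execution attempts
    outcome: str = ""           # filled in by post-action review

d = Decision(
    request_id="req-42",
    evidence=["doc://invoice/991"],
    policy_version="refunds-v3",
    approval_mode=ApprovalMode.AUTOMATIC,
)
d.attempts.append({"ts": "2024-01-01T00:00:00Z", "status": "ok"})
d.outcome = "refund issued"
```

Because the policy version and evidence are captured at decision time, the record can be replayed later without reconstructing state from logs.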
A fallback does not mean “the system failed.”
It means:
- automation became uncomfortable
- constraints were reached
- reality resisted the policy
Fallbacks are therefore used to measure the boundary of safe automation, not to hide it.
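One hypothetical way to make that measurement concrete (again, purely illustrative; `FallbackLedger` and the reason names are invented for this sketch) is to record every fallback with an explicit reason and let the distribution of reasons describe the boundary:

```python
from collections import Counter
from enum import Enum

class FallbackReason(Enum):
    DISCOMFORT = "automation became uncomfortable"
    CONSTRAINT = "constraints were reached"
    POLICY_MISMATCH = "reality resisted the policy"

class FallbackLedger:
    """Records fallbacks explicitly so the boundary of safe
    automation can be measured instead of hidden."""

    def __init__(self):
        self.reasons = Counter()

    def record(self, reason: FallbackReason) -> None:
        self.reasons[reason] += 1

    def boundary_report(self) -> dict:
        # Where automation gives up most often marks the boundary.
        return {r.name: n for r, n in self.reasons.most_common()}

ledger = FallbackLedger()
ledger.record(FallbackReason.POLICY_MISMATCH)
ledger.record(FallbackReason.POLICY_MISMATCH)
ledger.record(FallbackReason.CONSTRAINT)
print(ledger.boundary_report())
# {'POLICY_MISMATCH': 2, 'CONSTRAINT': 1}
```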
1. Requests are evaluated and anchored as decisions
2. Automatic execution is the default path
3. Fallbacks and overrides are recorded explicitly
4. When thresholds are exceeded, human review is triggered
5. Policies are adjusted and re-compiled into automation
The system keeps moving fast — but becomes cautious before harm occurs.
These steps describe control flow, not implementation details.
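The control flow above can be sketched in a few lines. This is one possible shape under invented assumptions (a toy amount-limit policy and a simple intervention-rate threshold), not a prescribed design:

```python
class Policy:
    """Minimal stand-in policy: automatic below an amount limit."""
    def __init__(self, limit: float):
        self.limit = limit
        self.needs_review = False

    def evaluate(self, amount: float) -> bool:
        # True = safe to execute automatically under the current policy.
        return amount <= self.limit

def handle_request(amount: float, policy: Policy,
                   interventions: list, threshold: float = 0.5) -> str:
    """One pass of the loop: anchor the request as a decision,
    execute or fall back, record, and escalate past the threshold."""
    if policy.evaluate(amount):
        interventions.append(False)
        outcome = "executed"        # fast, automatic path
    else:
        interventions.append(True)  # explicit, recorded fallback
        outcome = "escalated"
    rate = sum(interventions) / len(interventions)
    if rate > threshold:
        policy.needs_review = True  # human review is triggered
    return outcome

policy = Policy(limit=100.0)
log = []
results = [handle_request(a, policy, log) for a in [50, 80, 150, 200, 300]]
print(results)              # ['executed', 'executed', 'escalated', 'escalated', 'escalated']
print(policy.needs_review)  # True (3/5 fallbacks exceeds 0.5)
```

Step 5 (adjusting and re-compiling policy) is deliberately absent here; it is the human-governed part of the loop.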
- System-1: fast, automatic execution under well-defined conditions
- System-2: slow, human-governed review when confidence degrades
Fallbacks and SLO breaches are the bridge between them.
System-2 intervenes only when needed, updates policy, then steps back.
Detailed diagrams and scenarios are documented separately:
- Runtime Decision Flow (execution & operation)
- Meaning → Policy Compile boundary
- Counterexample → Change Review loop
- “Before-incident” timeline with System-1/System-2 overlay
See /docs for full diagrams and explanations.
EBDOS is not:

- ❌ a finished product
- ❌ a new AI model
- ❌ a workflow automation tool
- ❌ a claim of optimal decisions
EBDOS does not try to define “the correct answer.”
It provides a structure for tracking when previous answers stop working.
This repository documents a design exploration, not a production system.
Some components are intentionally left abstract or unimplemented:
- policy compilers
- approval interfaces
- domain-specific meaning trees
These depend on organizational context and should not be standardized prematurely. The abstractions here are intended to survive multiple implementations, not prescribe one.
This work is shared not because it is complete,
but because the failure patterns are already clear.
EBDOS captures those patterns and proposes a way to respond before incidents force the issue.
Future directions include:

- domain-specific applications
- empirical validation in live systems
- tooling for policy compilation and review
If automation is going to fail anyway,
it should fail early, visibly, and usefully.