AI agents are getting remarkably capable. They read your codebase, write production code, call APIs, deploy services, and manage infrastructure. That’s the good news.
The bad news? Nobody is governing them. There’s no audit trail, no enforceable policy, and no scope boundary that an agent can’t accidentally wander past. When something goes wrong — and it will — you have no proof of what happened and no mechanism to prevent it next time.
LumenFlow changes this. It inserts a governance kernel between your agents and everything they touch.
Today’s AI agent workflows rely on one of three approaches. All three break down as you scale.
Hope-based governance
Agents run with full access. You hope they follow instructions. There’s no audit trail, no rollback, and no proof of what happened when things go sideways.
Prompt-based trust
Frameworks like LangChain and CrewAI define agent roles in prompts. But a prompt is a suggestion, not a wall — agents can (and do) ignore them when they hallucinate or over-optimize.
After-the-fact enforcement
GitHub Actions and CI pipelines catch problems after they’ve already happened. By the time a check fails, the agent has written files, made commits, and potentially pushed to production.
The common thread? None of these provide real-time governance — enforcement that happens before each action, not after the damage is done.
LumenFlow’s kernel provides four guarantees that can’t be bypassed, turned off, or worked around:
Every agent action is checked against a 4-level permission model. Workspace, lane, task, and tool scopes must all agree before anything happens. One “no” at any level and the action is blocked — no exceptions.
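As a sketch, the four-scope check amounts to a conjunction: every level must say yes. The names below (`check_action`, `LEVELS`) are illustrative, not LumenFlow's actual API.

```python
# Illustrative 4-level, deny-wins permission check.
# Names here are invented for this example, not LumenFlow's real API.

LEVELS = ("workspace", "lane", "task", "tool")

def check_action(action: str, scopes: dict[str, set[str]]) -> bool:
    """Allow an action only if every level explicitly permits it.
    One "no" at any level blocks the action, no exceptions."""
    return all(action in scopes.get(level, set()) for level in LEVELS)

scopes = {
    "workspace": {"read_file", "write_file", "git_commit"},
    "lane":      {"read_file", "write_file", "git_commit"},
    "task":      {"read_file", "write_file"},   # task scope omits git_commit
    "tool":      {"read_file", "write_file", "git_commit"},
}

check_action("write_file", scopes)  # every level agrees -> allowed
check_action("git_commit", scopes)  # task level says no -> blocked
```

The point of the conjunction is that no single layer can grant more than the others allow; a permission exists only where all four scopes overlap.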
Policies are evaluated in a layered cascade where strictness only goes one direction — up.
- A workspace rule denying `.env` access can’t be overridden by a task
- A lane rule blocking `git push --force` applies to every agent in that lane

Think of it as the principle of least privilege, enforced structurally rather than by convention.
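A minimal sketch of deny-wins evaluation, with invented names and policies: each layer may add denies, and no lower layer can lift a deny set above it.

```python
# Illustrative deny-wins policy cascade. A layer can only tighten
# policy; it can never re-allow something a higher layer denied.
# Layer names and rules are examples, not LumenFlow's real schema.

def evaluate(action: str, layers: list[dict]) -> str:
    """Walk the cascade from workspace down to tool.
    The first explicit deny wins; otherwise the action is allowed."""
    for layer in layers:  # ordered: workspace, lane, task, tool
        if action in layer.get("deny", set()):
            return f"denied by {layer['name']}"
    return "allowed"

layers = [
    {"name": "workspace", "deny": {"read .env"}},
    {"name": "lane",      "deny": {"git push --force"}},
    {"name": "task",      "deny": set()},  # cannot re-allow the denies above
    {"name": "tool",      "deny": set()},
]

evaluate("read .env", layers)         # -> "denied by workspace"
evaluate("git push --force", layers)  # -> "denied by lane"
evaluate("git commit", layers)        # -> "allowed"
```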
Every tool call produces an immutable, content-addressed audit record — what was requested, what scopes were checked, what policies were evaluated, and what the outcome was. This happens whether the action succeeds or fails.
This isn’t logging. It’s cryptographic proof of execution that can be independently verified months or years later.
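To see why such a receipt is verifiable rather than merely logged, here is a hedged sketch using a SHA-256 content address; the field names are assumptions, not LumenFlow's real receipt schema.

```python
# Illustrative content-addressed audit receipt: the receipt's ID is a
# hash of its contents, so any later tampering is detectable by anyone.
import hashlib
import json

def make_receipt(request: str, scopes_checked: list[str],
                 policies: list[str], outcome: str) -> dict:
    body = {
        "request": request,
        "scopes_checked": scopes_checked,
        "policies_evaluated": policies,
        "outcome": outcome,
    }
    canonical = json.dumps(body, sort_keys=True).encode()
    body["receipt_id"] = hashlib.sha256(canonical).hexdigest()
    return body

def verify(receipt: dict) -> bool:
    """Re-hash the body and compare -- works months or years later."""
    body = {k: v for k, v in receipt.items() if k != "receipt_id"}
    canonical = json.dumps(body, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest() == receipt["receipt_id"]

r = make_receipt("git commit", ["workspace", "lane", "task", "tool"],
                 ["deny-wins cascade"], "allowed")
verify(r)                  # -> True
r["outcome"] = "denied"
verify(r)                  # -> False: tampering breaks the hash
```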
Agents run inside a bwrap sandbox with write confinement and deny overlays on secrets. This is real isolation — enforced by the Linux kernel, not by the agent runtime. Agents can’t escape the sandbox even if they try, because the operating system won’t let them.
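For a concrete picture, the sketch below assembles a bubblewrap command line of the kind described. The flags are standard `bwrap` options, but the specific mount layout is an assumption for illustration, not LumenFlow's actual configuration.

```python
# Builds (but does not run) an illustrative bwrap invocation: the
# filesystem is read-only except the agent's worktree, a secrets
# directory is masked with a tmpfs overlay, and networking is cut off.

def sandbox_argv(worktree: str, command: list[str]) -> list[str]:
    return [
        "bwrap",
        "--ro-bind", "/", "/",            # whole filesystem read-only
        "--bind", worktree, worktree,     # write confinement: only the worktree
        "--tmpfs", f"{worktree}/.env.d",  # deny overlay: mask a secrets dir
        "--unshare-net",                  # no network unless granted
        "--die-with-parent",              # sandbox dies with the supervisor
        *command,
    ]

argv = sandbox_argv("/work/agent-1", ["python", "task.py"])
```

Because the confinement is a kernel-level mount namespace, nothing the agent does inside `command` can widen it.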
| Aspect | No Governance | Prompt-Only (CrewAI) | CI-Only (GitHub Actions) | LumenFlow |
|---|---|---|---|---|
| Governance | None | Role descriptions | Post-hoc checks | 4-level scope, deny-wins |
| Evidence | None | Conversation logs | CI logs | Immutable receipts |
| Isolation | None | None | Container-level | OS-level bwrap sandboxes |
| Policy | None | Prompt suggestions | Pipeline rules | Real-time deny-wins cascade |
| Extensibility | N/A | Custom tools | Marketplace actions | Pack system with scoped policies |
The kernel is domain-agnostic on purpose. It doesn’t know about software delivery, infrastructure provisioning, or compliance auditing. Domain knowledge comes from packs — pluggable extensions that teach the kernel how to work in your world.
Software Delivery Pack
Ships built-in. Work Units, lanes, gates, worktrees, memory, and 100+ CLI commands. This is what we use to build LumenFlow itself — we eat our own cooking.
Build Your Own
Customer support, data pipelines, infrastructure provisioning, compliance auditing — if you can define the rules, you can build a pack.
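As a hedged illustration of what “define the rules” might look like, here is a toy pack interface; the `Pack` protocol and everything inside it are invented for this example and are not LumenFlow's real extension API.

```python
# Invented-for-illustration pack interface: a pack contributes domain
# commands and scoped policies, while enforcement stays in the kernel.
from typing import Protocol

class Pack(Protocol):
    name: str
    def commands(self) -> dict[str, callable]: ...
    def policies(self) -> list[dict]: ...  # denies the kernel enforces

class CompliancePack:
    """Toy pack for a compliance-auditing domain."""
    name = "compliance-audit"

    def commands(self):
        return {"audit:run": lambda: "collecting evidence..."}

    def policies(self):
        # Pack-scoped policy: the kernel's deny-wins cascade picks this up.
        return [{"level": "lane", "deny": ["delete audit records"]}]

pack = CompliancePack()
```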
It’s essential when:
You might not need it when:
Four principles guide everything we build:
For Humans
Step-by-step setup guide for developers.
For AI Agents
Canonical onboarding for AI coding assistants.