
Why LumenFlow?

AI agents are getting remarkably capable. They read your codebase, write production code, call APIs, deploy services, and manage infrastructure. That’s the good news.

The bad news? Nobody is governing them. There’s no audit trail, no enforceable policy, and no scope boundary that an agent can’t accidentally wander past. When something goes wrong — and it will — you have no proof of what happened and no mechanism to prevent it next time.

LumenFlow changes this. It inserts a governance kernel between your agents and everything they touch.

Today’s AI agent workflows rely on one of three approaches. All three break down as you scale.

Hope-based governance

Agents run with full access. You hope they follow instructions. There’s no audit trail, no rollback, and no proof of what happened when things go sideways.

Prompt-based trust

Frameworks like LangChain and CrewAI define agent roles in prompts. But a prompt is a suggestion, not a wall — agents can (and do) ignore it when they hallucinate or over-optimize.

After-the-fact enforcement

GitHub Actions and CI pipelines catch problems only after they have happened. By the time a check fails, the agent has already written files, made commits, and potentially pushed to production. LumenFlow enforces policy before the tool call executes, not in CI minutes later.

The common thread? None of these provide real-time governance — enforcement that happens before each action, not after the damage is done.

LumenFlow’s kernel provides four guarantees that can’t be bypassed, turned off, or worked around:

Every agent action is checked against a 4-level permission model. Workspace, lane, task, and tool scopes must all agree before anything happens. One “no” at any level and the action is blocked — no exceptions.

Agent requests: file:write("src/auth/login.ts")
  ✓ Workspace scope: src/** allowed
  ✓ Lane scope: src/auth/** allowed (Framework: Core lane)
  ✓ Task scope: src/auth/** allowed (WU code_paths)
  ✓ Tool scope: file:write allowed
  → PERMITTED
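The trace above can be sketched in a few lines of Python. This is an illustrative model, not LumenFlow's actual API: the function name, the glob-style patterns, and the dict layout are all assumptions, and the tool-scope check is elided.

```python
# Illustrative deny-wins scope check: every permission level must say yes.
# Note: fnmatch's "*" crosses "/", so "src/**" behaves like match-anything-under-src here.
from fnmatch import fnmatch

def check_scopes(path: str, scopes: dict) -> bool:
    """Permit only if every level's allow-list matches the path; one miss blocks."""
    return all(
        any(fnmatch(path, pattern) for pattern in patterns)
        for patterns in scopes.values()
    )

scopes = {
    "workspace": ["src/**"],
    "lane":      ["src/auth/**"],   # Framework: Core lane
    "task":      ["src/auth/**"],   # WU code_paths
}

check_scopes("src/auth/login.ts", scopes)       # True  -> PERMITTED
check_scopes("src/billing/invoice.ts", scopes)  # False -> blocked by lane and task scopes
```

The point of the `all(...)`/`any(...)` shape is that there is no override path: a match at three levels does nothing if the fourth says no.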

Policies are evaluated in a layered cascade where strictness only goes one direction — up.

  • A workspace policy that denies .env access can’t be overridden by a task
  • A lane policy that blocks git push --force applies to every agent in that lane
  • Pack policies (like quality gates) are enforced by the kernel, not left to the agent’s discretion

Think of it as the principle of least privilege, enforced structurally rather than by convention.
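A minimal sketch of that cascade, assuming glob-style deny patterns and hypothetical layer names (this is not LumenFlow's policy schema):

```python
# Illustrative deny-wins cascade: lower layers may add restrictions,
# never remove them. A task has no way to undo a workspace or lane deny.
from fnmatch import fnmatch

def evaluate(action: str, layers: list) -> str:
    """Walk workspace -> lane -> task; the first matching deny wins."""
    for layer in layers:
        for pattern in layer["deny"]:
            if fnmatch(action, pattern):
                return f"DENIED by {layer['name']}: {pattern}"
    return "PERMITTED"

layers = [
    {"name": "workspace", "deny": ["file:read:*.env", "file:write:*.env"]},
    {"name": "lane",      "deny": ["git:push --force*"]},
    {"name": "task",      "deny": []},  # can only get stricter, never looser
]

evaluate("file:read:.env", layers)           # denied at the workspace layer
evaluate("git:push --force origin", layers)  # denied at the lane layer
evaluate("file:write:src/app.ts", layers)    # permitted: no layer objects
```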

Every tool call produces an immutable, content-addressed audit record — what was requested, what scopes were checked, what policies were evaluated, and what the outcome was. This happens whether the action succeeds or fails.

This isn’t logging. It’s cryptographic proof of execution that can be independently verified months or years later.
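One way such a record can be made content-addressed, sketched with an assumed receipt schema (the field names are illustrative, not LumenFlow's): hash the canonical JSON form of the record, and that hash becomes the record's identity. Verification later is just recomputing it.

```python
# Sketch of a content-addressed receipt: the ID is the SHA-256 of the
# record's canonical JSON, so any tampering with the record changes the ID.
import hashlib
import json

def make_receipt(record: dict) -> dict:
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode()).hexdigest()
    return {"id": f"sha256:{digest}", **record}

receipt = make_receipt({
    "tool": "file:write",
    "target": "src/auth/login.ts",
    "scopes_checked": ["workspace", "lane", "task", "tool"],
    "outcome": "PERMITTED",
})
# Independent verification months later: strip the ID, re-serialize the
# remaining fields the same way, rehash, and compare against receipt["id"].
```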

Agents run inside a bwrap (Bubblewrap) sandbox with write confinement and deny overlays on secrets. This is real isolation — enforced by the Linux kernel, not by the agent runtime. Agents can’t escape the sandbox even if they try, because the operating system won’t let them.
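As a rough sketch of what that confinement looks like, here is a hedged example of launching an agent under bwrap from Python. The flags are real Bubblewrap options; the paths and the agent-runtime command line are placeholders, not LumenFlow's actual invocation.

```python
# Hypothetical bwrap launch: read-only system, one writable scope,
# a tmpfs "deny overlay" hiding secrets, and fresh kernel namespaces.
import subprocess

cmd = [
    "bwrap",
    "--ro-bind", "/usr", "/usr",          # system tree visible, read-only
    "--ro-bind", "/repo", "/repo",        # whole repo readable...
    "--bind", "/repo/src", "/repo/src",   # ...but writable only in scope
    "--tmpfs", "/repo/secrets",           # deny overlay: secrets masked by empty tmpfs
    "--unshare-all",                      # new pid, net, mount, ... namespaces
    "--die-with-parent",                  # sandbox dies if the kernel process does
    "agent-runtime", "--task", "WU-123",  # placeholder agent entrypoint
]
# Requires Linux with bwrap installed:
# subprocess.run(cmd, check=True)
```

Because the later `--bind` and `--tmpfs` mounts are applied in order on top of the read-only repo bind, the agent sees the repository but can write only where the bind allows, and the secrets directory appears empty.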

| Aspect | No Governance | Prompt-Only (CrewAI) | CI-Only (GitHub Actions) | LumenFlow |
|---|---|---|---|---|
| Governance | None | Role descriptions | Post-hoc checks | 4-level scope, deny-wins |
| Evidence | None | Conversation logs | CI logs | Immutable receipts |
| Isolation | None | None | Container-level | OS-level bwrap sandboxes |
| Policy | None | Prompt suggestions | Pipeline rules | Real-time deny-wins cascade |
| Extensibility | N/A | Custom tools | Marketplace actions | Pack system with scoped policies |

The kernel is domain-agnostic on purpose. It doesn’t know about software delivery, infrastructure provisioning, or compliance auditing. Domain knowledge comes from packs — pluggable extensions that teach the kernel how to work in your world.

Software Delivery Pack

Ships built-in. Work Units, lanes, gates, worktrees, memory, and 100+ CLI commands. This is what we use to build LumenFlow itself — we eat our own cooking.

Explore the Pack →

Build Your Own

Customer support, data pipelines, infrastructure provisioning, compliance auditing — if you can define the rules, you can build a pack.

Create a Pack →

LumenFlow is essential when:

  • AI agents touch your codebase, infrastructure, or production services
  • You need an audit trail of what agents did and why
  • Policy enforcement must be real — not just “please don’t”
  • You need to prove compliance to regulators, customers, or your future self
  • Multiple agents work in parallel and need coordination boundaries

You might not need it when:

  • You only use AI for chat-based Q&A with no tool access
  • Agents operate in fully disposable, throwaway environments
  • Your existing CI/CD pipeline gives you enough governance for your risk tolerance

Four principles guide everything we build:

  1. Policy as code — Governance rules are declarative configurations, not verbal agreements or wiki pages
  2. Evidence over claims — Immutable receipts prove what happened. Status updates and ticket comments don’t.
  3. Continuous governance — Every action is governed in real-time, not reviewed after the sprint ends
  4. Agents as governed participants — Agents get the same accountability mechanisms as humans. First-class team members, first-class oversight.

Keep reading: