Kernel Runtime
The KernelRuntime is the central component of LumenFlow. Every tool call — whether from an AI agent via MCP, a CLI command, or a programmatic API call — passes through the kernel before anything executes.
The kernel is domain-agnostic. It does not know about Work Units, lanes, or gates. Those concepts come from packs. The kernel provides four capabilities that any pack can build on:
- Scope intersection — permission enforcement
- Policy evaluation — deny-wins rule cascade
- Evidence recording — immutable audit trail
- Tool dispatch — routing calls to pack-provided handlers
What Happens When a Tool Is Called
When an agent calls a tool (e.g. `file:write`), the kernel executes a strict pipeline:
1. Tool lookup. The kernel checks the tool registry. If the tool name is not registered by any loaded pack, the call fails with `TOOL_NOT_FOUND`.
2. Scope resolution. The kernel reads four sets of allowed scopes — workspace, lane, task, and tool — and computes their intersection. If the intersection is empty, the call fails with `SCOPE_DENIED`.
3. Reserved path check. Any attempt to write to `.lumenflow/**` is denied unconditionally. This protects the kernel’s own state from modification by tools.
4. Input persistence. The tool’s input is serialized, hashed with SHA-256, and written to the evidence store as a content-addressed blob. This happens before execution — even if the tool fails, the input is recorded.
5. Policy evaluation. The kernel’s policy engine evaluates all matching rules with the `on_tool_request` trigger. A single deny from any layer makes the decision final, and `approval_required` pauses execution before the tool handler runs.
6. Input validation. If the tool declares a Zod input schema, the input is validated. Invalid inputs are rejected with `INVALID_INPUT`.
7. Dispatch. The kernel routes the call to the tool’s handler — either an in-process function or a sandboxed subprocess. For details on what happens inside dispatch — including sandbox execution, the tool-runner worker, and the runtime CLI adapter pattern — see Tool Execution.
8. Evidence recording. The result (success, failure, denied, or crashed) is appended to the evidence store with timing, scope enforcement details, and policy decisions.
Steps 2–5 are the authorization gate. If any check fails, the tool never executes, but the denial is still recorded as evidence.
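The scope-resolution step can be sketched as a plain set intersection. This is an illustrative sketch, not the actual LumenFlow implementation; the function name and scope representation are assumptions:

```typescript
// Illustrative sketch of scope resolution (step 2). Scopes are modeled
// as plain strings; the real kernel's types may differ.
type Scope = string;

// Intersect the four scope sets. An empty result means the call is
// rejected with SCOPE_DENIED before the tool handler ever runs.
function intersectScopes(
  workspace: Scope[],
  lane: Scope[],
  task: Scope[],
  tool: Scope[],
): Scope[] {
  const layers = [new Set(lane), new Set(task), new Set(tool)];
  return workspace.filter((s) => layers.every((layer) => layer.has(s)));
}
```

Because every layer must grant a scope for it to survive, narrowing any single layer (for example, a lane) is enough to withhold a permission from every tool call in that lane.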
Initialization
The kernel is created by calling `initializeKernelRuntime()` with a workspace root path. During initialization:
- The `workspace.yaml` spec is read and hashed (SHA-256). This hash is checked on every subsequent `executeTool` call — if the spec changes while the kernel is running, all tool calls return `SPEC_TAMPERED` until the kernel is restarted.
- All packs referenced in the workspace spec are loaded, validated, and their tools registered.
- Pack-specific workspace config is validated against each pack’s declared `config_schema` and made available to runtime capability and policy factories.
- The policy engine is constructed with a four-layer stack built from pack-declared policies plus any runtime-authored rules returned by pack `policy_factory` hooks.
- The evidence store is initialized and any orphaned traces from a previous crash are reconciled.
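The spec-tamper check amounts to hashing `workspace.yaml` once at startup and comparing on every call. A minimal sketch, assuming Node's `crypto` module; the function names here are illustrative, not the real kernel internals:

```typescript
import { createHash } from "node:crypto";

// Hash the spec contents the same way on startup and on each call.
function sha256(data: string): string {
  return createHash("sha256").update(data).digest("hex");
}

// Compare the current spec against the hash recorded at startup.
// In the real kernel a mismatch surfaces as the SPEC_TAMPERED error code.
function checkSpecUnchanged(startupHash: string, currentSpec: string): void {
  if (sha256(currentSpec) !== startupHash) {
    throw new Error("SPEC_TAMPERED");
  }
}
```

Restarting the kernel re-reads and re-hashes the spec, which is why a restart is the only way to clear the tampered state.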
Task Lifecycle
The kernel manages tasks through an event-sourced state machine:
| Method | Transition | Policy trigger |
|---|---|---|
| `createTask` | — (writes spec to disk) | None |
| `claimTask` | ready → active | `on_claim` |
| `blockTask` | active → blocked | None |
| `unblockTask` | blocked → active | None |
| `completeTask` | active → done | `on_completion` |
Task specs are immutable YAML files stored at `.lumenflow/kernel/tasks/<id>.yaml`. State is projected from an append-only event log — there is no mutable state file.
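Projecting state from an append-only log can be sketched as a reducer over a transition table. The event names below mirror the lifecycle table; the types and function names are assumptions for illustration:

```typescript
// States and events from the lifecycle table above.
type TaskState = "ready" | "active" | "blocked" | "done";
type TaskEvent = "claimed" | "blocked" | "unblocked" | "completed";

// Legal transitions only; anything else is rejected.
const transitions: Record<TaskState, Partial<Record<TaskEvent, TaskState>>> = {
  ready: { claimed: "active" },
  active: { blocked: "blocked", completed: "done" },
  blocked: { unblocked: "active" },
  done: {},
};

// Replay the event log from the initial state. There is no mutable
// state file to read — the log is the source of truth.
function projectState(log: TaskEvent[]): TaskState {
  return log.reduce<TaskState>((state, event) => {
    const next = transitions[state][event];
    if (!next) throw new Error(`invalid transition: ${state} + ${event}`);
    return next;
  }, "ready");
}
```

A reducer over an immutable log also makes crash recovery straightforward: replaying the same events always yields the same state.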
Host-Driven Agent Loops
Agent orchestration can stay above the kernel while still using kernel-enforced governance. A host can:
- call `agent:execute-turn`
- inspect the governed turn output for `status: tool_request`
- call the requested tool through `executeTool()` with `execution_metadata.agent_intent`
- feed the tool result back into the next `agent:execute-turn`
This keeps the model-turn loop in host code, but the actual tool request still passes through scope enforcement, pack policies, approvals, and evidence recording. Hosts can also pass `agent_turn_index` and `agent_tool_call_count` in `execution_metadata` to enforce loop limits consistently across turns.
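The host-side loop described above can be sketched as follows. The turn-output shape, runtime interface, and metadata keys here are assumptions; only the loop structure mirrors the documented steps:

```typescript
// Hypothetical turn output: either a governed tool request or a final answer.
interface TurnOutput {
  status: "tool_request" | "final";
  tool?: string;
  input?: unknown;
  text?: string;
}

// Minimal stand-in for the kernel runtime's tool entry point.
interface Runtime {
  executeTool(name: string, input: unknown, meta?: Record<string, unknown>): unknown;
}

// Drive turns in host code; every tool request still goes through the
// kernel's scope enforcement, policies, approvals, and evidence recording.
function runAgentLoop(
  runtime: Runtime,
  executeTurn: (feedback?: unknown) => TurnOutput,
  maxTurns = 10, // loop limit enforced host-side
): string {
  let feedback: unknown;
  for (let turn = 0; turn < maxTurns; turn++) {
    const out = executeTurn(feedback);
    if (out.status !== "tool_request") return out.text ?? "";
    feedback = runtime.executeTool(out.tool!, out.input, {
      agent_intent: "host-loop", // hypothetical intent label
      agent_turn_index: turn,
    });
  }
  throw new Error("turn limit exceeded");
}
```

Keeping the loop in host code means the host decides when to stop, while the kernel decides what each tool call is allowed to do.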
Runtime-Visible Pack Config
Packs can declare a `config_key` and `config_schema` in their manifest. During runtime startup, the kernel validates the workspace config payload for that pack and threads the resolved value into:
- capability factories — for config-aware `required_scopes` and `required_env`
- policy factories — for runtime-authored conditional rules

This is how the agent-runtime pack derives provider host allowlists, validates credential env references, and turns configured intents into kernel-enforced policy decisions.
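The validate-then-thread flow can be sketched without committing to a schema library. The manifest shape, `validate` hook, and factory signature below are assumptions standing in for the real `config_schema` mechanism:

```typescript
// Hypothetical pack manifest: a config key plus a validator and a
// config-aware capability factory.
interface PackManifest<C> {
  config_key: string;
  validate(raw: unknown): C; // stands in for the pack's config_schema
  capability_factory(config: C): { required_scopes: string[] };
}

// Validate the pack's slice of the workspace config, then hand the
// resolved value to the capability factory.
function resolvePackCapabilities<C>(
  manifest: PackManifest<C>,
  workspaceConfig: Record<string, unknown>,
): { required_scopes: string[] } {
  const config = manifest.validate(workspaceConfig[manifest.config_key]);
  return manifest.capability_factory(config);
}
```

The point of the indirection is that invalid config fails at startup, before any tool with config-derived scopes can be called.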
Policy-Aware Tool Discovery
Hosts do not need to reconstruct tool gating in application code. They can ask the runtime for the already-filtered tool set:
`listGovernedTools()` applies the same scope intersection and policy evaluation used during real execution. The result includes `allow` and `approval_required` entries but excludes denied and out-of-scope tools.
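The filtering rule reduces to a simple predicate. This sketch assumes simplified decision values and entry shapes; the real `listGovernedTools()` internals may differ:

```typescript
// Simplified decision values mirroring the documented behavior.
type Decision = "allow" | "approval_required" | "deny";

interface ToolEntry {
  name: string;
  decision: Decision;
  inScope: boolean;
}

// Keep allow and approval_required entries; drop denied and
// out-of-scope tools, as listGovernedTools() is documented to do.
function filterGovernedTools(tools: ToolEntry[]): ToolEntry[] {
  return tools.filter((t) => t.inScope && t.decision !== "deny");
}
```

A host presenting this list to an agent therefore never has to re-derive gating logic, and cannot accidentally advertise a tool the kernel would refuse.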
Pack-Owned Agent-Session Orchestration
Framework orchestration can remain above the kernel, but packs can still own their domain runtime state. The agent-runtime pack uses this boundary to persist:
- linear `agent-session` resume state
- workflow branch and join readiness
- scheduled wakeups
- continuation records explaining why a session resumed
Those orchestration records live in pack state under `.agent-runtime/workflow/`, while the kernel continues to own approval state, tool execution, and immutable evidence receipts.
Filesystem Layout
The kernel stores all state under `.lumenflow/kernel/` in the workspace root.
All files are append-only or write-once. The kernel never modifies existing records — it only appends new entries or creates new files.
Tool Error Codes
When a tool call fails at any stage in the pipeline, the kernel returns a structured error:
| Code | Stage | Meaning |
|---|---|---|
| `TOOL_NOT_FOUND` | Lookup | Tool name not registered by any pack |
| `SCOPE_DENIED` | Authorization | Scope intersection is empty or reserved path violation |
| `POLICY_DENIED` | Authorization | Policy engine returned a deny decision |
| `APPROVAL_REQUIRED` | Authorization | Policy engine paused execution pending human approval |
| `SPEC_TAMPERED` | Pre-check | Workspace spec changed since kernel startup |
| `INVALID_INPUT` | Validation | Input failed the tool’s schema |
| `INVALID_OUTPUT` | Post-execution | Output failed schema normalization |
| `TOOL_EXECUTION_FAILED` | Execution | Handler threw an exception |
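One way for host code to consume these codes is a string-literal union, so handling is exhaustive at compile time. The error shape below is an assumption; only the codes come from the table:

```typescript
// The documented error codes as a closed union.
type ToolErrorCode =
  | "TOOL_NOT_FOUND"
  | "SCOPE_DENIED"
  | "POLICY_DENIED"
  | "APPROVAL_REQUIRED"
  | "SPEC_TAMPERED"
  | "INVALID_INPUT"
  | "INVALID_OUTPUT"
  | "TOOL_EXECUTION_FAILED";

// Hypothetical structured-error shape for host-side handling.
interface ToolError {
  code: ToolErrorCode;
  stage: string;
  message: string;
}

// Approval pauses are resumable once a human approves; the other
// codes are terminal for this particular call.
function isResumable(err: ToolError): boolean {
  return err.code === "APPROVAL_REQUIRED";
}
```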
Next Steps
- Scope Intersection — How the 4-level permission model works
- Policy Engine — How deny-wins evaluation works
- Evidence Store — How the audit trail is built
- Packs — How tools are declared and loaded
- Create a Pack — Build your own domain pack