
Kernel Runtime

The KernelRuntime is the central component of LumenFlow. Every tool call — whether from an AI agent via MCP, a CLI command, or a programmatic API call — passes through the kernel before anything executes.

The kernel is domain-agnostic. It does not know about Work Units, lanes, or gates. Those concepts come from packs. The kernel provides four capabilities that any pack can build on:

  1. Scope intersection — permission enforcement
  2. Policy evaluation — deny-wins rule cascade
  3. Evidence recording — immutable audit trail
  4. Tool dispatch — routing calls to pack-provided handlers

When an agent calls a tool (e.g., file:write), the kernel executes a strict pipeline:

  1. Tool lookup. The kernel checks the tool registry. If the tool name is not registered by any loaded pack, the call fails with TOOL_NOT_FOUND.

  2. Scope resolution. The kernel reads four sets of allowed scopes — workspace, lane, task, and tool — and computes their intersection. If the intersection is empty, the call fails with SCOPE_DENIED.

  3. Reserved path check. Any attempt to write to .lumenflow/** is denied unconditionally. This protects the kernel’s own state from modification by tools.

  4. Input persistence. The tool’s input is serialized, hashed with SHA-256, and written to the evidence store as a content-addressed blob. This happens before execution — even if the tool fails, the input is recorded.

  5. Policy evaluation. The kernel’s policy engine evaluates all matching rules with the on_tool_request trigger. A single deny from any layer makes the decision final, and approval_required pauses execution before the tool handler runs.

  6. Input validation. If the tool declares a Zod input schema, the input is validated. Invalid inputs are rejected with INVALID_INPUT.

  7. Dispatch. The kernel routes the call to the tool’s handler — either an in-process function or a sandboxed subprocess. For details on what happens inside dispatch — including sandbox execution, the tool-runner worker, and the runtime CLI adapter pattern — see Tool Execution.

  8. Evidence recording. The result (success, failure, denied, or crashed) is appended to the evidence store with timing, scope enforcement details, and policy decisions.

Steps 2–5 are the authorization gate. If any check fails, the tool never executes, but the denial is still recorded as evidence.
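The scope-resolution check in step 2 can be sketched as a pure function. This is an illustrative sketch only; the function and type names here are hypothetical, not the kernel's actual API:

```typescript
// Sketch of step 2: a scope is effective only if every layer grants it.
// An empty intersection denies the call before the handler ever runs.
type ScopeDecision =
  | { ok: true; effectiveScopes: string[] }
  | { ok: false; code: "SCOPE_DENIED" };

function resolveScopes(
  workspace: string[],
  lane: string[],
  task: string[],
  tool: string[],
): ScopeDecision {
  const effectiveScopes = workspace.filter(
    (s) => lane.includes(s) && task.includes(s) && tool.includes(s),
  );
  if (effectiveScopes.length === 0) {
    return { ok: false, code: "SCOPE_DENIED" };
  }
  return { ok: true, effectiveScopes };
}
```

Because the intersection is computed fresh on every call, narrowing any one layer (for example, a task's scopes) immediately narrows what every tool can do.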

The kernel is created by calling initializeKernelRuntime() with a workspace root path. During initialization:

  1. The workspace.yaml spec is read and hashed (SHA-256). This hash is checked on every subsequent executeTool call — if the spec changes while the kernel is running, all tool calls return SPEC_TAMPERED until the kernel is restarted.
  2. All packs referenced in the workspace spec are loaded, validated, and their tools registered.
  3. Pack-specific workspace config is validated against each pack’s declared config_schema and made available to runtime capability and policy factories.
  4. The policy engine is constructed with a four-layer stack built from pack-declared policies plus any runtime-authored rules returned by pack policy_factory hooks.
  5. The evidence store is initialized and any orphaned traces from a previous crash are reconciled.

The kernel manages tasks through an event-sourced state machine:

ready → active → done
          ↕
       blocked

Method        | Transition               | Policy trigger
createTask    | — (writes spec to disk)  | None
claimTask     | ready → active           | on_claim
blockTask     | active → blocked         | None
unblockTask   | blocked → active         | None
completeTask  | active → done            | on_completion

Task specs are immutable YAML files stored at .lumenflow/kernel/tasks/<id>.yaml. State is projected from an append-only event log — there is no mutable state file.
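Projecting state from an append-only log is a pure fold over the events. The event and transition names below are hypothetical, chosen to mirror the method table above:

```typescript
// Sketch of event-sourced state projection: no mutable state file,
// just a replay of the append-only log for one task.
type TaskState = "ready" | "active" | "blocked" | "done";
type TaskEvent = {
  taskId: string;
  type: "claimed" | "blocked" | "unblocked" | "completed";
};

const TRANSITIONS: Record<TaskEvent["type"], { from: TaskState; to: TaskState }> = {
  claimed:   { from: "ready",   to: "active" },
  blocked:   { from: "active",  to: "blocked" },
  unblocked: { from: "blocked", to: "active" },
  completed: { from: "active",  to: "done" },
};

function projectState(events: TaskEvent[], taskId: string): TaskState {
  let state: TaskState = "ready"; // every task starts ready
  for (const e of events) {
    if (e.taskId !== taskId) continue;
    const t = TRANSITIONS[e.type];
    if (t.from !== state) throw new Error(`illegal ${e.type} from ${state}`);
    state = t.to;
  }
  return state;
}
```

A corrupted or out-of-order log surfaces as an illegal transition at replay time rather than silently producing a wrong state.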

Agent orchestration can stay above the kernel while still using kernel-enforced governance. A host can:

  1. call agent:execute-turn
  2. inspect the governed turn output for status: tool_request
  3. call the requested tool through executeTool() with execution_metadata.agent_intent
  4. feed the tool result back into the next agent:execute-turn

This keeps the model-turn loop in host code, but the actual tool request still passes through scope enforcement, pack policies, approvals, and evidence recording. Hosts can also pass agent_turn_index and agent_tool_call_count in execution_metadata to enforce loop limits consistently across turns.
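A minimal host-side loop over this protocol might look like the following. The turn-result shape and both function signatures are simplifying assumptions, not the runtime's real API:

```typescript
// Sketch of a host orchestration loop: the model-turn cycle lives in
// host code, but every tool request is routed back through the kernel.
type TurnResult =
  | { status: "tool_request"; tool: string; input: unknown }
  | { status: "final"; output: string };

async function runGovernedLoop(
  executeTurn: (feedback?: unknown) => Promise<TurnResult>,
  executeTool: (
    tool: string,
    input: unknown,
    meta: { agent_intent: string; agent_turn_index: number },
  ) => Promise<unknown>,
  maxTurns = 8,
): Promise<string> {
  let feedback: unknown;
  for (let turn = 0; turn < maxTurns; turn++) {
    const result = await executeTurn(feedback);
    if (result.status === "final") return result.output;
    // Still subject to scope enforcement, policies, and evidence.
    feedback = await executeTool(result.tool, result.input, {
      agent_intent: "scheduling",
      agent_turn_index: turn,
    });
  }
  throw new Error("turn limit exceeded");
}
```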

Packs can declare a config_key and config_schema in their manifest. During runtime startup, the kernel validates the workspace config payload for that pack and threads the resolved value into:

  • capability factories — for config-aware required_scopes and required_env
  • policy factories — for runtime-authored conditional rules

This is how the agent-runtime pack derives provider host allowlists, validates credential env references, and turns configured intents into kernel-enforced policy decisions.
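The config-threading pattern can be sketched like this; the config shape, validator, and factory names are hypothetical stand-ins for a pack's declared config_schema and its factory hooks:

```typescript
// Sketch of validating pack config at startup and threading it into a
// config-aware capability factory.
type AgentRuntimeConfig = { allowedHosts: string[]; credentialEnv: string };

function validateConfig(raw: unknown): AgentRuntimeConfig {
  const c = raw as Partial<AgentRuntimeConfig>;
  if (!Array.isArray(c.allowedHosts) || typeof c.credentialEnv !== "string") {
    throw new Error("invalid agent-runtime config");
  }
  return { allowedHosts: c.allowedHosts, credentialEnv: c.credentialEnv };
}

function capabilityFactory(config: AgentRuntimeConfig) {
  // Scopes and env requirements derived from workspace config, so the
  // same pack yields different capabilities per workspace.
  return {
    required_scopes: config.allowedHosts.map((h) => `net:${h}`),
    required_env: [config.credentialEnv],
  };
}
```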

Hosts do not need to reconstruct tool gating in application code. They can ask the runtime for the already-filtered tool set:

const governedTools = await runtime.getToolHost().listGovernedTools({
  ...ctx,
  metadata: {
    agent_intent: 'scheduling',
  },
});

listGovernedTools() applies the same scope intersection and policy evaluation used during real execution. The result includes allow and approval_required entries but excludes denied and out-of-scope tools.
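On the host side, the remaining work is just splitting the pre-filtered entries by decision, for example to render an approval badge in a UI. The entry shape below is a simplifying assumption:

```typescript
// Sketch of consuming a governed tool listing: denied and out-of-scope
// tools are already absent, so the host only partitions what remains.
type GovernedTool = { name: string; decision: "allow" | "approval_required" };

function partitionTools(tools: GovernedTool[]) {
  return {
    ready: tools.filter((t) => t.decision === "allow").map((t) => t.name),
    needsApproval: tools
      .filter((t) => t.decision === "approval_required")
      .map((t) => t.name),
  };
}
```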

Framework orchestration can remain above the kernel, but packs can still own their domain runtime state. The agent-runtime pack uses this boundary to persist:

  • linear agent-session resume state
  • workflow branch and join readiness
  • scheduled wakeups
  • continuation records explaining why a session resumed

Those orchestration records live in pack state under .agent-runtime/workflow/, while the kernel continues to own approval state, tool execution, and immutable evidence receipts.

The kernel stores all state under .lumenflow/kernel/ in the workspace root:

.lumenflow/kernel/
  tasks/              ← immutable task specs (one YAML per task)
  events/
    events.jsonl      ← append-only event log
    events.lock       ← file-based mutex
  evidence/
    traces/
      tool-traces.jsonl           ← active trace file
      tool-traces.00000001.jsonl  ← compacted segment
    inputs/
      <sha256-hex>    ← content-addressed input blobs

All files are append-only or write-once. The kernel never modifies existing records — it only appends new entries or creates new files.

When a tool call fails at any stage in the pipeline, the kernel returns a structured error:

Code                   | Stage          | Meaning
TOOL_NOT_FOUND         | Lookup         | Tool name not registered by any pack
SCOPE_DENIED           | Authorization  | Scope intersection is empty or reserved path violation
POLICY_DENIED          | Authorization  | Policy engine returned a deny decision
APPROVAL_REQUIRED      | Authorization  | Policy engine paused execution pending human approval
SPEC_TAMPERED          | Pre-check      | Workspace spec changed since kernel startup
INVALID_INPUT          | Validation     | Input failed the tool's schema
INVALID_OUTPUT         | Post-execution | Output failed schema normalization
TOOL_EXECUTION_FAILED  | Execution      | Handler threw an exception
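A host can key its recovery strategy off these codes. The classification below is one plausible policy, not part of the kernel:

```typescript
// Sketch of host-side error handling keyed on the kernel's error codes.
// Which codes are worth retrying is a host policy decision.
type KernelErrorCode =
  | "TOOL_NOT_FOUND"
  | "SCOPE_DENIED"
  | "POLICY_DENIED"
  | "APPROVAL_REQUIRED"
  | "SPEC_TAMPERED"
  | "INVALID_INPUT"
  | "INVALID_OUTPUT"
  | "TOOL_EXECUTION_FAILED";

function isRetryable(code: KernelErrorCode): boolean {
  switch (code) {
    case "TOOL_EXECUTION_FAILED":
      // Handler crashes may be transient.
      return true;
    default:
      // Authorization, validation, and tampering failures are final for
      // this kernel instance; APPROVAL_REQUIRED resumes via approval,
      // not via retry.
      return false;
  }
}
```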