Tool Execution

The Kernel Runtime page explains the 8-step pipeline that every tool call passes through. This page explains what happens inside step 7 (dispatch) — how the kernel routes a tool call into a sandboxed subprocess, how the pack’s tool implementation runs, and how the runtime CLI adapter bridges ~80 tools to existing CLI command modules.

Tool execution flows through three layers. The kernel dispatches into a sandboxed subprocess, where the pack’s tool-runner worker loads the tool implementation, which either calls the runtime CLI adapter or executes directly.

Diagram
| Layer | Package | Responsibility |
| --- | --- | --- |
| Kernel | `@lumenflow/kernel` | Domain-agnostic orchestration: scope enforcement, policy evaluation, evidence recording, sandbox dispatch |
| Pack | `software-delivery` | Tool implementations: 90+ exported functions grouped by domain (WU lifecycle, memory, git, etc.) |
| CLI | `@lumenflow/cli` | Business logic: each command module exports a `main()` function that performs the actual work |

Here is the full path a tool call takes, using wu:create as an example.

Diagram

The Software Delivery Pack uses two strategies for implementing its 90+ tools.

Most tools delegate to existing CLI command modules via the runtime CLI adapter. The tool implementation function builds an argument array and calls runtimeCliAdapter.run(command, args), which dynamically imports the corresponding CLI module and calls its main() function.
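As an illustration, a delegating tool implementation might look like the following sketch. The `ToolOutput` shape and `runtimeCliAdapter.run` call are named in this page; the exact input fields, flag names, and the adapter's return shape are assumptions for the example.

```typescript
// Hypothetical sketch of an adapter-delegating tool implementation.
// Input fields, CLI flags, and the adapter's return shape are assumptions.
interface ToolOutput {
  success: boolean;
  data?: unknown;
  error?: string;
}

interface RuntimeCliAdapter {
  run(
    command: string,
    args: string[],
  ): Promise<{ exitCode: number; stdout: string; stderr: string }>;
}

async function wuCreateTool(
  input: { title: string; lane?: string },
  adapter: RuntimeCliAdapter,
): Promise<ToolOutput> {
  // Build the argument array the CLI module expects to parse from process.argv.
  const args = ["--title", input.title];
  if (input.lane) args.push("--lane", input.lane);

  const result = await adapter.run("wu-create", args);
  return result.exitCode === 0
    ? { success: true, data: { stdout: result.stdout } }
    : { success: false, error: result.stderr };
}
```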

Tool implementation files that use this path:

| File | Domain | Tools |
| --- | --- | --- |
| `wu-lifecycle-tools.ts` | WU create, claim, prep, done, block, edit, etc. | ~21 |
| `runtime-native-tools.ts` | File read/write/edit/delete, config, validation, etc. | ~20 |
| `memory-tools.ts` | Checkpoint, inbox, signal, recover, etc. | ~14 |
| `initiative-orchestration-tools.ts` | Initiative create, plan, status, orchestrate, etc. | ~11 |
| `agent-tools.ts` | Agent session management, delegation | ~8 |
| `flow-metrics-tools.ts` | Flow reports, bottleneck analysis, metrics | ~6 |

A smaller set of tools is self-contained and does not use the CLI adapter; these implementations use simple-git wrappers or Node builtins directly.

| File | Domain | Why direct |
| --- | --- | --- |
| `git-tools.ts` | `git:add`, `git:status`, `git:commit` | Needs fine-grained control over git binary invocation and output parsing |
| `worktree-tools.ts` | Worktree creation and management | Uses `spawnSync` for worktree operations |
| `lane-lock.ts` | Lane lock acquisition and release | Lightweight file-based locking; no CLI command equivalent |

Direct implementations follow the same ToolOutput interface and receive the same ToolRunnerWorkerContext — the kernel treats both paths identically.

The runtime CLI adapter (runtime-cli-adapter.ts) is the bridge that lets pack tools reuse CLI command modules without spawning a child process for each command. It loads CLI modules in-process and captures their output.

All adapter invocations are serialized through runExclusive() — a promise-based queue that ensures only one CLI module runs at a time. This prevents concurrent mutations to process.argv and other patched globals.

```typescript
// Simplified from runtime-cli-adapter.ts
let executionQueue: Promise<void> = Promise.resolve();

async function runExclusive<T>(operation: () => Promise<T>): Promise<T> {
  const scheduled = executionQueue.then(operation, operation);
  executionQueue = scheduled.then(
    () => undefined,
    () => undefined,
  );
  return scheduled;
}
```
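To make the serialization behavior concrete, here is a self-contained demo of the same queue pattern. Two overlapping calls are submitted together, and the second operation does not start until the first has fully settled; the demo scenario itself is illustrative, not taken from the adapter.

```typescript
// Self-contained demo of promise-queue serialization.
let executionQueue: Promise<void> = Promise.resolve();

async function runExclusive<T>(operation: () => Promise<T>): Promise<T> {
  const scheduled = executionQueue.then(operation, operation);
  executionQueue = scheduled.then(
    () => undefined,
    () => undefined,
  );
  return scheduled;
}

const order: string[] = [];
await Promise.all([
  runExclusive(async () => {
    order.push("first:start");
    await new Promise((resolve) => setTimeout(resolve, 10));
    order.push("first:end");
  }),
  runExclusive(async () => {
    order.push("second:start");
  }),
]);
// "second:start" is appended only after "first:end": no interleaving.
```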

Before calling a CLI module’s main(), the adapter patches six global surfaces:

| Global | Patch | Why |
| --- | --- | --- |
| `process.argv` | Set to `[execPath, command, ...args]` | CLI modules parse arguments from `process.argv` |
| `process.exit` | Replaced with a function that throws `RuntimeCliExitSignal` | Prevents the subprocess from actually exiting; captures the exit code instead |
| `process.stdout.write` | Redirected to a capture buffer | Captures stdout output as a string |
| `process.stderr.write` | Redirected to a capture buffer | Captures stderr output as a string |
| `console.log/info/debug` | Redirected to the stdout capture buffer | Captures console output that would otherwise go to the real stdout |
| `console.error/warn` | Redirected to the stderr capture buffer | Captures console errors that would otherwise go to the real stderr |

All patches are restored in a finally block, guaranteeing cleanup even when the CLI module throws.
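The patch-and-restore pattern can be sketched for a single surface. This hypothetical helper shows only `process.stdout.write`; the real adapter patches all six surfaces listed above, and `captureStdout` is an illustrative name, not the adapter's API.

```typescript
// Hypothetical sketch of the patch-and-restore pattern for one surface.
async function captureStdout(fn: () => Promise<void>): Promise<string> {
  const chunks: string[] = [];
  const originalWrite = process.stdout.write.bind(process.stdout);

  // Redirect writes into a buffer instead of the real stdout.
  process.stdout.write = ((chunk: string | Uint8Array): boolean => {
    chunks.push(
      typeof chunk === "string" ? chunk : Buffer.from(chunk).toString("utf8"),
    );
    return true;
  }) as typeof process.stdout.write;

  try {
    await fn();
  } finally {
    // Restored even if fn throws, as the adapter guarantees.
    process.stdout.write = originalWrite;
  }
  return chunks.join("");
}
```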

CLI modules call process.exit(code) to signal success or failure. In a normal Node process, this terminates the process. The adapter replaces process.exit with a function that throws a RuntimeCliExitSignal error:

```typescript
// Simplified from runtime-cli-adapter.ts
class RuntimeCliExitSignal extends Error {
  readonly exitCode: number;
  constructor(exitCode: number) {
    super(`CLI requested process exit (${exitCode})`);
    this.exitCode = exitCode;
  }
}
```

The adapter’s try/catch block recognizes this error and captures the exit code without terminating the worker process.
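The catch path can be sketched as follows. The `RuntimeCliExitSignal` class matches the simplified version shown above; the `invokeCliMain` wrapper and its return convention are assumptions for illustration.

```typescript
// Hypothetical sketch of distinguishing an exit signal from a genuine error.
class RuntimeCliExitSignal extends Error {
  readonly exitCode: number;
  constructor(exitCode: number) {
    super(`CLI requested process exit (${exitCode})`);
    this.exitCode = exitCode;
  }
}

async function invokeCliMain(main: () => Promise<void>): Promise<number> {
  try {
    await main();
    return 0; // main returned without calling process.exit
  } catch (err) {
    if (err instanceof RuntimeCliExitSignal) {
      return err.exitCode; // exit code captured; the worker keeps running
    }
    throw err; // genuine failure propagates to the caller
  }
}
```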

CLI modules are resolved relative to the workspace root:

```
packages/@lumenflow/cli/src/<command>.js
```

For example, wu-create resolves to packages/@lumenflow/cli/src/wu-create.js. The adapter converts this to a file:// URL and uses dynamic import() to load the module.
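The resolution step might be sketched like this. The path template comes from this page; the helper name and its exact signature are assumptions.

```typescript
// Hypothetical sketch of CLI module resolution. The path template matches
// the documented layout; the helper itself is illustrative.
import { join } from "node:path";
import { pathToFileURL } from "node:url";

function resolveCliModuleUrl(workspaceRoot: string, command: string): string {
  const modulePath = join(
    workspaceRoot,
    "packages/@lumenflow/cli/src",
    `${command}.js`,
  );
  // dynamic import() of an absolute path requires a file:// URL to work
  // portably, so the adapter converts before importing.
  return pathToFileURL(modulePath).href;
}

// Usage (illustrative):
//   const mod = await import(resolveCliModuleUrl(root, "wu-create"));
//   await mod.main();
```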

The kernel uses bwrap (bubblewrap) to create a sandboxed subprocess for each tool execution. This provides OS-level isolation.

Diagram

The invocation payload sent on stdin contains:

| Field | Purpose |
| --- | --- |
| `tool_name` | Which tool to execute |
| `handler_entry` | Module path and export name (e.g., `tool-impl/wu-lifecycle-tools.ts#wuCreateTool`) |
| `input` | The tool's input data |
| `scope_enforced` | The computed scope intersection (for the tool to reference) |
| `receipt_id` | UUID linking this execution to the evidence trace |

The response on stdout is a JSON object with an output field containing the standard ToolOutput structure (success, data, error, metadata).
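The wire format can be sketched as TypeScript shapes. The field names come from the tables above; the precise types (especially of `scope_enforced`) are assumptions, and the example values are illustrative.

```typescript
// Hypothetical shapes for the stdin payload and stdout response,
// reconstructed from the documented fields; exact types are assumptions.
interface InvocationPayload {
  tool_name: string;
  handler_entry: string; // e.g. "tool-impl/wu-lifecycle-tools.ts#wuCreateTool"
  input: unknown;
  scope_enforced: unknown; // computed scope intersection; shape assumed
  receipt_id: string; // UUID linking to the evidence trace
}

interface ToolOutput {
  success: boolean;
  data?: unknown;
  error?: string;
  metadata?: Record<string, unknown>;
}

interface InvocationResponse {
  output: ToolOutput;
}

// The round trip over stdin/stdout is plain JSON:
const payload: InvocationPayload = {
  tool_name: "wu:create",
  handler_entry: "tool-impl/wu-lifecycle-tools.ts#wuCreateTool",
  input: { title: "demo" },
  scope_enforced: [],
  receipt_id: "00000000-0000-0000-0000-000000000000",
};
const wire = JSON.stringify(payload);
```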

The bwrap sandbox profile is constructed from the enforced scopes. Write-scoped paths get read-write bind mounts; read-scoped paths get read-only bind mounts. Paths outside the scope intersection are not mounted — the tool literally cannot see them.
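A minimal sketch of that mapping, assuming a simple scope shape: `--bind` and `--ro-bind` are real bwrap options, but the `Scope` type and builder function here are illustrative, not the kernel's actual code.

```typescript
// Hypothetical sketch of turning enforced scopes into bwrap arguments.
// --bind SRC DEST and --ro-bind SRC DEST are standard bwrap flags;
// the Scope shape is an assumption.
interface Scope {
  path: string;
  mode: "read" | "write";
}

function buildBwrapArgs(scopes: Scope[]): string[] {
  const args: string[] = [];
  for (const scope of scopes) {
    // Write scopes get read-write binds; read scopes get read-only binds.
    // Paths outside the scope list are never mounted at all.
    const flag = scope.mode === "write" ? "--bind" : "--ro-bind";
    args.push(flag, scope.path, scope.path);
  }
  return args;
}
```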

This architecture is an intentional incremental migration from CLI-first to pack-first, not accidental complexity.

  1. LumenFlow started as a CLI tool. The original 80+ commands were standalone CLI scripts, each parsing process.argv and writing to stdout. This was the fastest way to build and validate the workflow.

  2. The kernel introduced governance. When the kernel was added to provide scope enforcement, policy evaluation, and evidence recording, every tool call needed to pass through the kernel pipeline. But rewriting 80+ CLI commands as kernel-native tool implementations would have been a multi-month effort.

  3. The runtime CLI adapter bridged the gap. Instead of rewriting everything, the adapter pattern lets pack tools call existing CLI modules in-process. The CLI modules do not know they are running inside a sandbox — they just call main() and write to stdout as usual.

  4. New tools are written pack-native. The direct implementation path (git-tools, worktree-tools, lane-lock) shows the target architecture: self-contained functions that return ToolOutput directly. Over time, more tools will migrate from the adapter path to direct implementations.

This design means:

  • No big rewrite required — the system works today with full governance
  • External contributors can add tools either way — via CLI commands (adapter path) or direct implementations
  • Migration is incremental — each tool can be converted independently