Tool Execution
The Kernel Runtime page explains the 8-step pipeline that every tool call passes through. This page covers what happens inside step 7 (dispatch): how the kernel routes a tool call into a sandboxed subprocess, how the pack’s tool implementation runs, and how the runtime CLI adapter bridges ~80 tools to existing CLI command modules.
Three-Layer Execution Stack
Tool execution flows through three layers. The kernel dispatches into a sandboxed subprocess, where the pack’s tool-runner worker loads the tool implementation, which either calls the runtime CLI adapter or executes directly.
| Layer | Package | Responsibility |
|---|---|---|
| Kernel | @lumenflow/kernel | Domain-agnostic orchestration: scope enforcement, policy evaluation, evidence recording, sandbox dispatch |
| Pack | software-delivery | Tool implementations: 90+ exported functions grouped by domain (WU lifecycle, memory, git, etc.) |
| CLI | @lumenflow/cli | Business logic: each command module exports a main() function that performs the actual work |
Complete Execution Trace
Here is the full path a tool call takes, using wu:create as an example.
Two Implementation Paths
The Software Delivery Pack uses two strategies for implementing its 90+ tools.
CLI Adapter Path (~80 tools)
Most tools delegate to existing CLI command modules via the runtime CLI adapter. The tool implementation function builds an argument array and calls runtimeCliAdapter.run(command, args), which dynamically imports the corresponding CLI module and calls its main() function.
Tool implementation files that use this path:
| File | Domain | Tools |
|---|---|---|
| wu-lifecycle-tools.ts | WU create, claim, prep, done, block, edit, etc. | ~21 |
| runtime-native-tools.ts | File read/write/edit/delete, config, validation, etc. | ~20 |
| memory-tools.ts | Checkpoint, inbox, signal, recover, etc. | ~14 |
| initiative-orchestration-tools.ts | Initiative create, plan, status, orchestrate, etc. | ~11 |
| agent-tools.ts | Agent session management, delegation | ~8 |
| flow-metrics-tools.ts | Flow reports, bottleneck analysis, metrics | ~6 |
Direct Implementation Path (~10 tools)
A smaller set of tools consists of self-contained implementations that do not use the CLI adapter. These use simple-git wrappers or Node builtins directly.
| File | Domain | Why direct |
|---|---|---|
| git-tools.ts | git:add, git:status, git:commit | Needs fine-grained control over git binary invocation and output parsing |
| worktree-tools.ts | Worktree creation and management | Uses spawnSync for worktree operations |
| lane-lock.ts | Lane lock acquisition and release | Lightweight file-based locking, no CLI command equivalent |
Direct implementations follow the same ToolOutput interface and receive the same ToolRunnerWorkerContext — the kernel treats both paths identically.
Runtime CLI Adapter Deep Dive
The runtime CLI adapter (runtime-cli-adapter.ts) is the bridge that lets pack tools reuse CLI command modules without spawning a child process for each command. It loads CLI modules in-process and captures their output.
Serialized Execution
All adapter invocations are serialized through runExclusive() — a promise-based queue that ensures only one CLI module runs at a time. This prevents concurrent mutations to process.argv and other patched globals.
What Gets Patched
Before calling a CLI module’s main(), the adapter patches six global surfaces:
| Global | Patch | Why |
|---|---|---|
| process.argv | Set to [execPath, command, ...args] | CLI modules parse arguments from process.argv |
| process.exit | Replaced with a function that throws RuntimeCliExitSignal | Prevents the subprocess from actually exiting; captures the exit code instead |
| process.stdout.write | Redirected to a capture buffer | Captures stdout output as a string |
| process.stderr.write | Redirected to a capture buffer | Captures stderr output as a string |
| console.log/info/debug | Redirected to stdout capture buffer | Captures console output that would otherwise go to the real stdout |
| console.error/warn | Redirected to stderr capture buffer | Captures console errors that would otherwise go to the real stderr |
All patches are restored in a finally block, guaranteeing cleanup even when the CLI module throws.
The RuntimeCliExitSignal Trick
CLI modules call process.exit(code) to signal success or failure. In a normal Node process, this terminates the process. The adapter replaces process.exit with a function that throws a RuntimeCliExitSignal error.
The adapter’s try/catch block recognizes this error and captures the exit code without terminating the worker process.
Module Resolution
CLI modules are resolved relative to the workspace root:
For example, wu-create resolves to packages/@lumenflow/cli/src/wu-create.js. The adapter converts this to a file:// URL and uses dynamic import() to load the module.
The Sandbox Boundary
The kernel uses bwrap (bubblewrap) to create a sandboxed subprocess for each tool execution. This provides OS-level isolation.
The invocation payload sent on stdin contains:
| Field | Purpose |
|---|---|
| tool_name | Which tool to execute |
| handler_entry | Module path and export name (e.g., tool-impl/wu-lifecycle-tools.ts#wuCreateTool) |
| input | The tool’s input data |
| scope_enforced | The computed scope intersection (for the tool to reference) |
| receipt_id | UUID linking this execution to the evidence trace |
The response on stdout is a JSON object with an output field containing the standard ToolOutput structure (success, data, error, metadata).
Sandbox Profile
The bwrap sandbox profile is constructed from the enforced scopes. Write-scoped paths get read-write bind mounts; read-scoped paths get read-only bind mounts. Paths outside the scope intersection are not mounted — the tool literally cannot see them.
Platform-Specific Sandbox Backends
The kernel dispatches tool execution through platform-specific sandbox backends. Each backend enforces the same SandboxProfile contract — scoped filesystem mounts, network posture, and deny overlays — but uses the native isolation primitive available on that OS.
Linux: bwrap (bubblewrap)
On Linux, the kernel uses bwrap (bubblewrap) to create a user-namespace sandbox. The SandboxProfile is translated into bwrap flags:
| Profile field | bwrap behavior |
|---|---|
| readonly_bind_mounts | --ro-bind source target per mount |
| writable_bind_mounts | --bind source target per mount |
| deny_overlays | --bind /dev/null path (files) or --tmpfs path (directories) |
| network_posture: off | --unshare-net |
| network_posture: full | No network flag (default allows network) |
Network allowlist on Linux
When network_posture is allowlist, bwrap still uses --unshare-net to create an isolated network namespace, then runs an iptables script inside the sandbox before executing the tool command. The script is built by buildIptablesAllowlistScript():
- Allow loopback traffic (-o lo -j ACCEPT)
- Allow established/related connections (--state ESTABLISHED,RELATED -j ACCEPT)
- Per-entry ACCEPT rules for each allowlisted host:port or CIDR
- Default REJECT with icmp-port-unreachable (produces ECONNREFUSED at the application level)
The entire iptables setup and tool command are wrapped in a sh -c invocation so iptables rules are applied before the tool runs.
macOS: sandbox-exec (SBPL)
On macOS, the kernel uses the built-in sandbox-exec command with a Seatbelt Profile Language (SBPL) policy string. The backend builds the policy from the SandboxProfile.
Posture-aware network rules
The macOS backend generates network SBPL rules based on the profile’s network posture:
| Posture | SBPL rules generated |
|---|---|
| off | (deny network*) |
| allowlist | (deny network*) + (allow network-outbound (remote ip "host")) per entry |
| full | (allow network*) |
Read confinement
Rather than granting broad (allow file-read*), the macOS backend generates scoped read rules for specific paths:
- Workspace root — the project directory (covers worktree, state, WU YAML)
- System paths — /usr, /System, /Library, /bin, /sbin, /private, /dev, /tmp
- Temp path — from the sandbox profile
Each allowed path uses (allow file-read* (subpath "...")) to restrict reads to known directories.
Deny overlays for sensitive paths
After the allow rules, the backend emits deny rules for sensitive home-directory paths:
- ~/.ssh — SSH keys and configuration
- ~/.aws — AWS credentials and configuration
- ~/.gnupg — GPG keys and keyrings
These deny rules use (deny file-read* (subpath "...")) and are placed after the allow rules. Because SBPL evaluates rules in order and the last matching rule wins, the deny overlays effectively block access to these directories even if they fall within a broader allow path.
Cross-platform parity
Section titled “Cross-platform parity”Both backends enforce the same security properties from the same SandboxProfile:
| Property | Linux (bwrap) | macOS (sandbox-exec) |
|---|---|---|
| Filesystem isolation | Bind mounts (paths not mounted are invisible) | SBPL subpath rules |
| Network blocking | --unshare-net | (deny network*) |
| Network allowlist | iptables OUTPUT rules | SBPL network-outbound rules |
| Deny overlays | /dev/null bind / --tmpfs | (deny file-read*) rules |
| Sensitive paths | ~/.ssh, ~/.aws, ~/.gnupg, .env | ~/.ssh, ~/.aws, ~/.gnupg |
Why This Architecture
This architecture is an intentional incremental migration from CLI-first to pack-first, not accidental complexity.
- LumenFlow started as a CLI tool. The original 80+ commands were standalone CLI scripts, each parsing process.argv and writing to stdout. This was the fastest way to build and validate the workflow.
- The kernel introduced governance. When the kernel was added to provide scope enforcement, policy evaluation, and evidence recording, every tool call needed to pass through the kernel pipeline. But rewriting 80+ CLI commands as kernel-native tool implementations would have been a multi-month effort.
- The runtime CLI adapter bridged the gap. Instead of rewriting everything, the adapter pattern lets pack tools call existing CLI modules in-process. The CLI modules do not know they are running inside a sandbox; the adapter simply calls their main(), and they parse arguments and write to stdout as usual.
- New tools are written pack-native. The direct implementation path (git-tools, worktree-tools, lane-lock) shows the target architecture: self-contained functions that return ToolOutput directly. Over time, more tools will migrate from the adapter path to direct implementations.
This design means:
- No big rewrite required — the system works today with full governance
- External contributors can add tools either way — via CLI commands (adapter path) or direct implementations
- Migration is incremental — each tool can be converted independently
Next Steps
- Kernel Runtime — The 8-step pipeline that dispatches tools
- Package Architecture — How packages relate and how packs are distributed
- Packs — How domain tools are declared and loaded
- Create a Pack — Build your own domain pack