Create Your First Pack in 10 Minutes
A pack is a self-contained plugin that teaches the LumenFlow kernel how to
work in a specific domain. The built-in software-delivery pack provides git
tools, worktree isolation, and quality gates. You can create packs for any
domain — customer support, data pipelines, infrastructure provisioning, or
anything else your agents need to do.
This guide walks through every piece of a pack using the software-delivery
pack’s git:status tool as a concrete example. By the end you will understand
the manifest schema, tool implementation pattern, import boundary rules, scoping
model, policies, and integrity verification.
Recommended Path: No-Code First
`pack:author` is the fastest and safest way to create a pack. It generates the
manifest and tool implementation from vetted templates, then runs validation.
For automation, `pack:author` can also be driven from a spec file.
Current secure templates:
- `file.read_text`
- `file.write_text`
- `http.get_json`
Validation expectations:
- Least-privilege scopes by default
- Security lint blocks unsafe scope combinations (for example wildcard writes)
- Network templates require safe URL schemes
- Generated packs should pass `pnpm pack:validate` without manual edits
This guide continues with the lower-level, manual structure so you can customize advanced pack behavior when needed.
What Is in a Pack?
my-pack/
- manifest.yaml (declares tools, policies, evidence types)
- constants.ts (pack id, version, shared strings)
- tools/
  - my-tool.ts (tool descriptor: name, scopes, handler pointer)
  - types.ts (shared type definitions)
- tool-impl/
  - my-tool.ts (runtime implementation)
A pack has two layers:
- Manifest — a YAML file the kernel parses at load time to learn what tools, policies, and evidence types the pack provides.
- Tool implementations — TypeScript modules that execute when the kernel dispatches a tool call.
The kernel enforces a strict boundary between these layers and the rest of the codebase (see Import Boundaries below).
Step 1: Write the Manifest
The manifest is the contract between a pack and the kernel. Create
manifest.yaml at the root of your pack directory.
Minimal manifest
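A minimal manifest needs only the three required fields from the schema reference; the pack id and task type below are placeholders:

```yaml
# manifest.yaml: only id, version, and task_types are required
id: my-pack
version: 0.1.0
task_types:
  - example-task
```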
Schema reference
| Field | Type | Required | Description |
|---|---|---|---|
| id | string | Yes | Unique pack identifier (kebab-case). |
| version | string (semver) | Yes | Pack version. Must be valid semantic versioning. |
| config_key | string | No | Root key in workspace.yaml for this pack’s configuration. See below. |
| config_schema | string | No | Path to a Zod schema for validating pack config values. |
| task_types | string[] | Yes | At least one task type the pack handles. |
| tools | Tool[] | No | Tool definitions (see below). |
| policies | Policy[] | No | Policy rules evaluated by the kernel. |
| evidence_types | string[] | No | Evidence receipt kinds this pack can produce. |
| state_aliases | Record<string, string> | No | Map friendly names to kernel state machine states. |
| lane_templates | { id: string }[] | No | Pre-defined lane configurations. |
Pack configuration
Packs can declare a workspace-level configuration block by setting config_key in their manifest. When present, this key becomes a valid root-level field in workspace.yaml, and config:set routes writes to it through pack schema validation.
| Field | Description |
|---|---|
| config_key | Declares which root key in workspace.yaml holds this pack’s config. Must be unique across all loaded packs. |
| config_schema | Optional path (relative to pack root) to a Zod schema module that validates config values on write. |
When a pack declares config_key: software_delivery:
- The `software_delivery` block in `workspace.yaml` is treated as this pack’s configuration.
- `config:set --key software_delivery.gates.minCoverage --value 85` writes to that block, validated against the pack’s schema if provided.
- `config:get --key software_delivery.methodology.testing` reads from that block.
- All keys must be fully qualified from the workspace root. Using `gates.minCoverage` without the `software_delivery` prefix produces an error with a did-you-mean suggestion.
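Under that declaration, the pack-owned block in workspace.yaml might look like this sketch (the minCoverage value mirrors the config:set example above; the testing value is illustrative):

```yaml
# workspace.yaml excerpt: the software_delivery root key is owned by the
# pack that declares config_key: software_delivery
software_delivery:
  gates:
    minCoverage: 85
  methodology:
    testing: strict   # illustrative value
```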
Tool entry schema
Each object in the tools array describes one tool:
| Field | Type | Required | Description |
|---|---|---|---|
| name | string | Yes | Tool name, typically namespace:action. |
| entry | string | Yes | Relative path to implementation, with optional #exportName. |
| permission | string | Yes | One of read, write, or admin. |
| required_scopes | Scope[] | Yes | Minimum scopes the tool needs (at least one). |
| internal_only | boolean | No | If true, tool is hidden from external callers. |
Scope objects
Section titled “Scope objects”Scopes declare what access a tool requires. The kernel intersects these with workspace, lane, and task scopes at runtime — all four levels must agree before execution proceeds.
Network allowlist in tool manifests
Tools that need network access to specific services should declare posture: allowlist with their required destinations. This is safer than posture: full and allows workspace administrators to restrict which hosts are reachable.
If the workspace or lane declares a narrower allowlist, the kernel intersects the entries — only hosts present in all layers are permitted. If the workspace blocks network entirely (posture: off), the tool call is denied regardless of the tool’s declared allowlist.
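The intersection rule above can be sketched in TypeScript. This is an illustration of the behavior, not the kernel's implementation, and the Posture shape is an assumption:

```typescript
// A sketch of the allowlist intersection: only hosts present in every layer
// survive, and an "off" posture at any layer denies all network access.
type Posture =
  | { mode: 'off' }
  | { mode: 'full' }
  | { mode: 'allowlist'; hosts: string[] };

function intersectPostures(layers: Posture[]): Posture {
  // A single "off" wins outright, mirroring deny-wins behavior.
  if (layers.some((p) => p.mode === 'off')) return { mode: 'off' };
  const lists = layers.filter(
    (p): p is { mode: 'allowlist'; hosts: string[] } => p.mode === 'allowlist'
  );
  if (lists.length === 0) return { mode: 'full' };
  // Keep only hosts that appear in every allowlist layer.
  const hosts = lists[0].hosts.filter((h) =>
    lists.every((l) => l.hosts.includes(h))
  );
  return { mode: 'allowlist', hosts };
}
```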
Real example: software-delivery manifest (excerpt)
Here is how the software-delivery pack declares git:status:
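The excerpt itself is not reproduced here; a sketch consistent with the schemas above (the version, task type, and scope shape are assumptions) would be:

```yaml
id: software-delivery
version: 1.0.0            # illustrative version
config_key: software_delivery
task_types:
  - feature               # hypothetical task type
tools:
  - name: git:status
    entry: tool-impl/git-tools.ts#gitStatusTool
    permission: read
    required_scopes:
      - type: fs.read     # scope shape is an assumption
        pattern: "**"
```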
The config_key: software_delivery declaration tells the kernel that this pack owns the software_delivery root key in workspace.yaml. Without this declaration, any software_delivery.* config operations would fail with an “unknown key” error.
The entry field points to the file tool-impl/git-tools.ts and the named
export gitStatusTool. The kernel resolves this path relative to the pack root
and verifies it stays within the pack boundary.
Step 2: Implement a Tool
Tools have two parts: a descriptor (registered with the kernel) and an implementation (the actual logic).
Tool descriptor
The descriptor lives in tools/ and tells the kernel about the tool’s metadata.
The createToolDescriptor helper stamps the tool with the pack’s domain,
version, and pack id so the kernel can trace provenance:
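A stand-in sketch of that pattern follows. createToolDescriptor here is a local stand-in for the @lumenflow/kernel helper, whose real signature may differ; the point is that it stamps the pack's domain, version, and id onto the tool:

```typescript
// Illustrative shapes; the kernel's own types are not shown in this guide.
interface ToolDescriptor {
  name: string;
  permission: 'read' | 'write' | 'admin';
  entry: string;
}

interface PackInfo {
  id: string;
  version: string;
  domain: string;
}

// Stand-in for the kernel helper: stamps provenance fields so the kernel
// can trace the tool back to its pack.
function createToolDescriptor(pack: PackInfo, tool: ToolDescriptor) {
  return { ...tool, packId: pack.id, packVersion: pack.version, domain: pack.domain };
}

const gitStatusDescriptor = createToolDescriptor(
  { id: 'software-delivery', version: '1.0.0', domain: 'software-delivery' },
  { name: 'git:status', permission: 'read', entry: 'tool-impl/git-tools.ts#gitStatusTool' }
);
```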
Tool implementation
The implementation lives in tool-impl/ and contains the runtime logic. It
receives an input object and a cwd string, and returns a structured output:
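A sketch of such a module, with hypothetical names. A real git:status implementation would shell out to git (for example via the allow-listed simple-git library); this stub only shows the input/output shape:

```typescript
// Matches the ToolOutput contract described in this guide.
interface ToolOutput {
  success: boolean;
  data?: unknown;
  error?: { code: string; message: string };
  metadata?: { artifacts_written?: string[] };
}

async function gitStatusImpl(
  input: { short?: boolean },
  cwd: string
): Promise<ToolOutput> {
  try {
    // Placeholder result; a real implementation would run `git status` in cwd.
    return { success: true, data: { cwd, clean: true } };
  } catch (err) {
    return {
      success: false,
      error: { code: 'GIT_STATUS_FAILED', message: String(err) },
    };
  }
}
```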
Output contract
Every tool must return an object matching the kernel’s ToolOutput schema:
| Field | Type | Required | Description |
|---|---|---|---|
| success | boolean | Yes | Whether the operation succeeded. |
| data | unknown | No | Arbitrary result payload. |
| error | object | No | { code: string; message: string } on failure. |
| metadata | object | No | { artifacts_written?: string[] } for provenance. |
The metadata.artifacts_written array is important for the evidence layer —
it records what files the tool modified so evidence receipts can reference them.
Step 3: Understand Import Boundaries
When the kernel loads a pack, it scans every runtime source file (.ts, .js,
.mts, .mjs, .cjs, .cts) and validates every import specifier against
these rules:
Allowed imports
| Import type | Example | Allowed? |
|---|---|---|
| Node built-ins | import fs from 'node:fs' | Yes |
| Node built-ins (bare) | import path from 'path' | Yes |
| @lumenflow/kernel | import { z } from '@lumenflow/kernel' | Yes |
| Kernel sub-paths | import '@lumenflow/kernel/schemas' | Yes |
| Relative (within pack) | import { runGit } from './git-runner.js' | Yes |
| Explicitly allow-listed | import simpleGit from 'simple-git' | Yes |
Blocked imports
| Import type | Example | Blocked? |
|---|---|---|
| Other @lumenflow/* | import { ... } from '@lumenflow/cli' | Yes |
| Arbitrary npm packages | import lodash from 'lodash' | Yes |
| Absolute paths | import '/home/user/file.ts' | Yes |
| Relative paths outside root | import '../../outside/file.js' | Yes |
Why these restrictions exist: Packs run inside the kernel’s sandbox. If a pack could import arbitrary packages or reach outside its root directory, it could bypass the sandbox’s access controls. The import boundary is an additional static-analysis guard on top of the runtime sandbox.
Tool entry path resolution
The kernel resolves every tool’s entry field relative to the pack root and
rejects any path that escapes:
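A sketch of that containment check (not the kernel's actual code): resolve the entry relative to the pack root, strip the optional export suffix, and reject anything that lands outside the root.

```typescript
import * as path from 'node:path';

function resolveEntry(packRoot: string, entry: string): string {
  const [entryPath] = entry.split('#'); // drop the optional #exportName suffix
  const resolved = path.resolve(packRoot, entryPath);
  const root = path.resolve(packRoot);
  // Containment check: the resolved path must stay inside the pack root.
  if (resolved !== root && !resolved.startsWith(root + path.sep)) {
    throw new Error(`tool entry escapes pack root: ${entry}`);
  }
  return resolved;
}
```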
Step 4: Configure Policies
Policies are rules the kernel evaluates at specific trigger points. Add them to
your manifest’s policies array.
Policy schema
| Field | Type | Required | Description |
|---|---|---|---|
| id | string | Yes | Unique policy identifier. |
| trigger | string | Yes | One of the trigger types below. |
| decision | string | Yes | allow or deny. |
| reason | string | No | Human-readable justification. |
Policy triggers
| Trigger | When it fires |
|---|---|
| on_tool_request | Before every tool call. |
| on_claim | When a task is claimed by an agent. |
| on_completion | When a task transitions to done. |
| on_evidence_added | When an evidence receipt is appended. |
Deny-wins cascade
Policies cascade through four levels: workspace > lane > pack > task. A
deny at any level cannot be overridden by a lower-level allow. This means
a workspace-level deny policy is absolute.
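The cascade can be sketched in a few lines. This is an illustration of the rule, not the kernel's evaluator:

```typescript
type Decision = 'allow' | 'deny';

// Decisions collected from the four levels (workspace, lane, pack, task);
// any deny is final, regardless of where it appears in the cascade.
function resolveDecision(levels: Decision[]): Decision {
  return levels.includes('deny') ? 'deny' : 'allow';
}
```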
Real example: software-delivery gate policies
The software-delivery pack declares five gate policies that run on task
completion:
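The manifest excerpt is omitted here; two of the five gates are sketched below with hypothetical ids (the remaining gates follow the same shape):

```yaml
policies:
  - id: gate-tests          # hypothetical id
    trigger: on_completion
    decision: allow
    reason: Tests must pass before completion.
  - id: gate-coverage       # hypothetical id
    trigger: on_completion
    decision: allow
    reason: Coverage must meet the configured threshold.
```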
These policies default to allow and are flipped to deny by the gate runner
if the corresponding check fails.
Step 5: Register the Pack with the Workspace
For the kernel to load your pack, it must be pinned in the workspace specification.
Pack pin
| Field | Type | Description |
|---|---|---|
| id | string | Must match manifest.yaml id. |
| version | string | Must match manifest.yaml version. |
| integrity | string | dev (skip verification) or sha256:<64-hex>. |
| source | string | Where the pack is loaded from. |
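Assuming a root-level packs list in workspace.yaml (the key name is an assumption), a pin looks like:

```yaml
packs:
  - id: software-delivery
    version: 1.0.0        # must match manifest.yaml
    source: local
    integrity: dev        # development only; use sha256:<64-hex> in production
```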
Install using the canonical command shape:
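Using the command shape from the checklist at the end of this guide, with placeholder values:

```shell
pnpm pack:install --id my-pack --source local --version 0.1.0
```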
Registry installs use the same shape:
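For example (the registry source value is an assumption):

```shell
# --integrity may be omitted; the CLI resolves it from registry metadata
pnpm pack:install --id software-delivery --source registry --version 1.0.0
```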
For registry installs, --integrity is optional. When omitted, the CLI resolves
the integrity value from registry metadata for the requested version.
Local source resolution order
When source: local is used, the kernel resolves pack roots in this order:
1. `workspaceRoot/packs/<pack-id>`
2. `workspaceRoot/packages/@lumenflow/packs/<pack-id>` (monorepo development)
3. `@lumenflow/cli/packs/<pack-id>` (bundled in npm CLI installs)
This allows the same workspace.yaml pack pin to work in monorepo development
and in end-user npm installs without changing the pack contract.
Integrity verification
During development, set integrity: dev to skip hash verification. The kernel
logs a warning when it loads a pack with dev integrity, and rejects dev
integrity in production unless explicitly allowed.
For production, the kernel computes a deterministic SHA-256 hash of all pack
files (excluding node_modules/, .git/, dist/, .DS_Store) and compares
it to the pinned integrity value.
The hash is computed by:
1. Listing all files in the pack directory (sorted, exclusions filtered).
2. Computing the SHA-256 hash of each file.
3. Concatenating `relativePath \0 fileHash \0` for every file.
4. Computing a final SHA-256 over the concatenated buffer.
This ensures any change to any file in the pack changes the integrity hash. An agent cannot modify pack code without the hash mismatch being detected.
Step 6: Understand the Scope Intersection
When an agent calls a tool, the kernel computes the scope intersection across four levels:
If the intersection is empty for any required scope type, the tool call is denied. This means:
- A workspace can restrict all tools to specific directories.
- A lane can narrow that further (e.g., only `packages/@app/core/**`).
- A task declares what scopes it needs.
- The tool declares what scopes it requires.
All four must overlap for execution to proceed.
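A simplified sketch of the rule: real kernel scopes are structured objects with glob patterns, but the principle is the same, a scope survives only if every level grants it, and an empty intersection denies the call.

```typescript
// Each inner array is the scope set granted at one level
// (workspace, lane, task, tool), modeled here as plain strings.
function intersectScopes(levels: string[][]): string[] {
  return levels.reduce((acc, level) => acc.filter((s) => level.includes(s)));
}

function isAllowed(levels: string[][]): boolean {
  return intersectScopes(levels).length > 0;
}
```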
Example: narrowing git:status
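The original example is not shown; a sketch of a lane narrowing the filesystem scope (all field names here are assumptions) might be:

```yaml
lanes:
  - id: core-lane              # hypothetical lane
    scopes:
      - type: fs.read          # scope shape is an assumption
        pattern: packages/@app/core/**
```

With a lane like this, a git:status call from the lane could only succeed for paths inside packages/@app/core/**, since the lane scope intersects with the workspace, task, and tool scopes.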
Putting It All Together
Here is the complete file structure of a minimal pack with one tool:
my-pack/
- manifest.yaml
- constants.ts
- tools/
  - types.ts
  - hello-tool.ts
- tool-impl/
  - hello-tool.ts
manifest.yaml
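A sketch of the manifest (the scope shape is an assumption):

```yaml
id: my-pack
version: 0.1.0
task_types:
  - example
tools:
  - name: hello:greet
    entry: tool-impl/hello-tool.ts#helloToolImpl
    permission: read
    required_scopes:
      - type: fs.read   # scope shape is an assumption
        pattern: "**"
```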
constants.ts
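A sketch of the shared constants module (values are the placeholders used throughout this example):

```typescript
// constants.ts: pack id, version, and shared strings
export const PACK_ID = 'my-pack';
export const PACK_VERSION = '0.1.0';
export const GREETING_PREFIX = 'Hello';
```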
tools/types.ts
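A sketch of the shared type definitions, including the ToolOutput contract from the output-contract table:

```typescript
// tools/types.ts: shared type definitions for this pack
export interface HelloInput {
  name?: string;
}

export interface ToolOutput {
  success: boolean;
  data?: unknown;
  error?: { code: string; message: string };
  metadata?: { artifacts_written?: string[] };
}
```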
tools/hello-tool.ts
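A sketch of the descriptor as a plain object; a real pack would build this with the kernel's createToolDescriptor helper, whose exact signature is not shown in this guide:

```typescript
// tools/hello-tool.ts: descriptor metadata the kernel registers
export const helloTool = {
  name: 'hello:greet',
  permission: 'read' as const,
  entry: 'tool-impl/hello-tool.ts#helloToolImpl',
};
```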
tool-impl/hello-tool.ts
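A sketch of the runtime implementation, following the input/cwd signature and ToolOutput contract described earlier:

```typescript
// tool-impl/hello-tool.ts: runtime implementation
export async function helloToolImpl(
  input: { name?: string },
  cwd: string
): Promise<{ success: boolean; data?: { greeting: string; cwd: string } }> {
  const name = input.name ?? 'world';
  // No files are written, so metadata.artifacts_written is omitted.
  return { success: true, data: { greeting: `Hello, ${name}!`, cwd } };
}
```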
Workspace pin
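Finally, the pack is pinned in the workspace spec (the packs key name is an assumption):

```yaml
packs:
  - id: my-pack
    version: 0.1.0
    source: local
    integrity: dev
```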
Checklist
Before shipping your pack, verify:
- `manifest.yaml` passes schema validation (`id`, `version`, `task_types` present)
- Every tool `entry` resolves to an existing file within the pack root
- All imports stay within allowed boundaries
- `required_scopes` are as narrow as possible
- Policies use correct trigger types
- Pack is pinned in workspace spec with correct `id` and `version`
- Integrity is set to `dev` (development) or a valid `sha256:` hash (production)
- `pnpm pack:validate` passes with no security lint violations
- Pack installs via `pnpm pack:install --id <packId> --source <source> --version <version>`
- Tool implementations return the `ToolOutput` contract (`success`, optional `data`/`error`/`metadata`)
Next Steps
- Configuration Reference — Workspace spec and pack pin options
- Gates — How gate policies work end-to-end
- Constraints — The rules packs must follow