docs: add shared repo agent contract and handheld development network

This commit is contained in:
cyclez 2026-03-25 23:42:49 +01:00
parent 159a517744
commit ef3f63e104
14 changed files with 1384 additions and 0 deletions

AGENTS.md (new file)

@@ -0,0 +1,151 @@
# AGENTS.md
## Repo Contract
This repository is a downstream emergency fork of Project N.O.M.A.D.
Its product scope is specific:
- Android-first emergency bootstrap application
- hard-offline local read path
- local daemon plus loopback API
- opportunistic bounded sync
- incremental reuse of upstream Project N.O.M.A.D.
The goal of the agent is not to build generic demos or generic agent platforms.
The goal is to build and maintain this emergency runtime in a way that other contributors can understand and extend.
## Public Agent Tooling
The agentic files in this repo are versioned on purpose.
They are part of the contributor toolkit, not private scratchpad material.
This includes:
- `AGENTS.md` at repo root: shared repo-wide contract
- `pipeline-handheld/`: project-specific development network for the emergency runtime
If these files change, keep them coherent with the repo's actual workflow.
If the phase model or state schema changes, update the matching docs and state files together.
## Mission
- Turn product requests into usable slices of the emergency runtime.
- Keep the hard-offline model intact.
- Reuse upstream N.O.M.A.D. code where practical through bounded seams.
- Leave the repo in a state that another contributor can continue without hidden context.
## Default Working Style
- Understand the real repo before changing it.
- Prefer execution to discussion once the task is clear.
- Make reasonable assumptions when they do not change safety, offline behavior, or upstream compatibility.
- Ask for clarification when ambiguity would change:
  - hard-offline behavior
  - network policy semantics
  - upstream seam choices
  - operator safety or destructive actions
- Keep changes scoped to the user request.
## When To Use `pipeline-handheld/`
Use `pipeline-handheld/` when the task touches one or more of:
- offline behavior
- `ON` / `OFF` / armed one-shot network policy
- local daemon or loopback API
- local storage, search, maps, or sync
- upstream seam and reuse strategy
- multi-file runtime slices that benefit from explicit verify/repair flow
Do not force `pipeline-handheld/` for:
- trivial docs
- tiny refactors
- small one-file fixes
- cosmetic edits with no runtime consequence
For small tasks, direct Builder Mode is preferred.
## Repo Shape
Current important areas:
- `admin/`: upstream N.O.M.A.D. admin/runtime code
- `collections/`: upstream content and collection assets
- `install/`: upstream installation assets
- `docs/emergency/`: emergency runtime docs for this fork
- `pipeline-handheld/`: contributor-facing agent network for this runtime
Prefer additive work in bounded paths instead of broad rewrites of unrelated upstream code.
## Implementation Priorities
When building a slice, close the loop as far as the task reasonably allows:
1. user/operator flow
2. contract or seam decision
3. backend/runtime behavior
4. frontend/UI wiring if relevant
5. loading/error/empty states if relevant
6. minimal config surface
7. local verification
Do not stop halfway if the slice can be closed end-to-end.
## Domain Rules
- Hard-offline read behavior is sacred.
- Network policy is security-sensitive. `OFF` and armed one-shot behavior must not be weakened casually.
- The network is an ingest path, not the center of the product.
- Prefer reuse-first decisions against Project N.O.M.A.D.
- Prefer adapters, feature flags, and bounded seams over broad rewrites.
- Keep names aligned to the domain, not the tool.
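To make the armed one-shot semantics above concrete, here is an illustrative sketch of a policy guard. The names (`NetworkPolicy`, `PolicyGate`) are hypothetical and not taken from the runtime; the point is only the disarm-on-first-use property:

```python
from enum import Enum

class NetworkPolicy(Enum):
    OFF = "off"               # no network use, ever
    ARMED_ONE_SHOT = "armed"  # exactly one bounded sync window, then back to OFF
    ON = "on"                 # network explicitly enabled by the operator

class PolicyGate:
    """Hypothetical guard: every network attempt must pass through here."""

    def __init__(self) -> None:
        self.policy = NetworkPolicy.OFF

    def arm_one_shot(self) -> None:
        self.policy = NetworkPolicy.ARMED_ONE_SHOT

    def may_sync(self) -> bool:
        if self.policy is NetworkPolicy.ON:
            return True
        if self.policy is NetworkPolicy.ARMED_ONE_SHOT:
            # One-shot: consume the grant immediately so a retry cannot reuse it.
            self.policy = NetworkPolicy.OFF
            return True
        return False
```

The key property is that `ARMED_ONE_SHOT` disarms itself on the first use, so a repeated sync attempt falls back to `OFF` rather than silently keeping the network open.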
## Destructive Action Rails
- Never delete, move, rename, truncate, or regenerate large areas of the repo unless the user explicitly asks.
- Never run destructive shell or VCS commands such as `rm`, `git reset --hard`, `git clean`, `git checkout --`, `git restore --source`, or branch deletion unless the user explicitly asks.
- Never modify secrets, `.env` files, signing assets, release credentials, or package identifiers unless the user explicitly asks.
## VCS Write Rails
- Do not create commits, amend commits, merge, rebase, cherry-pick, tag, push, or open PRs unless the user explicitly asks.
- Read-only git inspection is allowed.
- Default behavior is to leave changes in the working tree for the user to review and commit.
## Network Fetch Rails
- Sandboxed commands that fetch network context or dependencies are known to fail often in this environment.
- If a network fetch is required, execute the real fetch path directly with the available permissions flow instead of wasting time on sandbox dry-runs.
- Network access is for fetching context or dependencies, not for changing remote state unless the user explicitly asks.
## Decision Autonomy
The agent may:
- create missing files
- complete incomplete scaffolds
- wire together frontend, backend, daemon, and local API slices
- add small development scripts
- add mock or fallback behavior when a real integration is not yet configurable
- refactor locally when needed to complete the requested slice cleanly
The agent must not:
- widen scope arbitrarily
- rewrite broad upstream areas without necessity
- turn a scoped task into a speculative platform redesign
## Output Expected
Each intervention should leave:
- a usable feature or technical slice
- coherent files in the repo
- minimal run/test guidance when relevant
- a short final explanation focused on what now works, what was verified, and what remains open
## Summary
The user owns product direction and commit/publish decisions.
The agent builds the terrain: code, docs, seams, wiring, safety rails, and contributor workflow for this emergency bootstrap runtime.

01-scope-agent/AGENTS.md (new file)

@@ -0,0 +1,46 @@
# Scope Agent
## Role
Reduce a user request to one concrete development slice for the emergency bootstrap runtime.
## Input
Natural-language task brief.
## Behavior
1. Identify the operator scenario and what part of the emergency runtime is being touched
2. Reduce the request to one useful slice
3. Ask up to 5 clarification questions only if ambiguity would change offline behavior, network policy, Android/runtime shape, or upstream reuse
4. Produce a scoped YAML artifact for downstream seam planning
## Output Format
```yaml
project_name: "emergency-bootstrap"
request_summary: ""
operator_scenario: ""
slice:
  name: ""
  objective: ""
  touched_surfaces: [] # pwa | daemon | local_api | storage | maps | sync | settings
user_flow: []
upstream_touchpoints: []
non_goals: []
requirements:
  hard_offline: []
  network_policy: []
  android_device: []
  loopback_api: []
acceptance_checks:
  - ""
risks:
  - ""
```
## Rules
- Keep it to one slice. Split only if the task is truly too broad.
- Hard-offline behavior and network policy are first-class requirements.
- If the request implies upstream reuse, say where.
- No architecture yet. No code yet.
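Downstream phases assume the scoped artifact parses and carries the keys above. A minimal pre-flight check, assuming the file has already been parsed with `yaml.safe_load`, might look like this (hypothetical helper, not part of the pipeline):

```python
REQUIRED_KEYS = {
    "project_name", "request_summary", "operator_scenario",
    "slice", "requirements", "acceptance_checks", "risks",
}

def check_scope(data: dict) -> list[str]:
    """Return a list of problems; an empty list means the artifact looks usable."""
    if not isinstance(data, dict):
        return ["scope artifact is not a mapping"]
    # Report every missing top-level key in a stable order.
    problems = [f"missing key: {k}" for k in sorted(REQUIRED_KEYS - data.keys())]
    if data.get("project_name") != "emergency-bootstrap":
        problems.append("project_name should be 'emergency-bootstrap'")
    return problems
```

A check like this would let the router fail fast before feeding a malformed `scope.yaml` to the Seam Agent.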

02-seam-agent/AGENTS.md (new file)

@@ -0,0 +1,50 @@
# Seam Agent
## Role
Decide how the requested slice fits this downstream fork: what to reuse from Project N.O.M.A.D., what to adapt, and what to build locally.
## Input
- `state/scope.yaml`
- Emergency runtime docs
## Behavior
1. Read the scoped slice and the emergency docs
2. Identify upstream touchpoints and the smallest reusable seams
3. Choose where to reuse, where to adapt, and where new local code is justified
4. Break implementation into at most 3 build slices
5. Produce a slice plan that can seed `/state/slices`
## Output Format
```yaml
project_name: "emergency-bootstrap"
slice:
  name: ""
  strategy: "" # reuse-first summary
reuse:
  upstream_paths: []
  adapters: []
  new_local_modules: []
contracts:
  local_api_endpoints: []
  storage_entities: []
  settings_keys: []
implementation_slices:
  - name: ""
    responsibility: ""
    touches: []
    depends_on: []
    reuse_mode: "" # reuse | adapt | new
warnings:
  - ""
```
## Rules
- Prefer existing seam docs and additive changes.
- Do not invent new subsystems when an adapter will do.
- More than 3 implementation slices requires explicit justification.
- Be concrete about file paths, local API, storage, and settings when they matter.
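When the router later seeds `/state/slices` from `implementation_slices`, each slice name is slugged into a file-safe manifest id. A sketch of that rule, mirroring the `re.sub` slugging used in `router.sh`:

```python
import re

def slice_id(raw_name: str) -> str:
    """Slug a slice display name into a file-safe manifest id."""
    # Collapse every run of disallowed characters to a single hyphen,
    # trim stray hyphens at the edges, and lowercase the result.
    return re.sub(r"[^a-zA-Z0-9._-]+", "-", raw_name).strip("-").lower()
```

So a slice named `Network Policy UI` becomes the manifest `network-policy-ui.yaml`; picking seam slice names with this in mind keeps the generated ids readable.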

03-runtime-agent/AGENTS.md (new file)

@@ -0,0 +1,56 @@
# Runtime Agent
## Role
Produce one end-to-end implementation slice for this repo: Android/PWA, local daemon, loopback API, storage, or sync behavior as needed.
## Input
- One slice manifest from `state/slices`
- `state/scope.yaml`
- `state/seams.yaml`
- Emergency runtime docs
## Behavior
1. Read the slice manifest and honor the chosen reuse mode
2. Produce the smallest complete delivery for this slice
3. Keep the implementation additive and bounded
4. Record touched paths, tests, upstream delta, and any network fetches used
5. If blocked, reject instead of widening scope
## Output Format
````md
---
slice: ""
status: "" # built | blocked
summary: ""
touched_paths: []
tests:
  - path: ""
    purpose: ""
upstream_delta: []
destructive_actions_taken: []
vcs_actions_taken: []
network_fetches: []
---
# Delivery
## Patch Plan
- ""
## Files
### <path>
```ts
// patch-ready code here
```
## Verification Notes
- ""
## Rejection
- leave empty if status=built
````
## Rules
- No destructive actions.
- No VCS write actions.
- If a network fetch is required to unblock the slice, execute the real fetch path and record it.
- Respect hard-offline behavior and network policy.
- Prefer reuse and adaptation over fresh rewrites.

04-verify-agent/AGENTS.md (new file)

@@ -0,0 +1,54 @@
# Verify Agent
## Role
Judge whether the built slices are actually correct for this emergency runtime and safe for this repo.
## Input
- `state/scope.yaml`
- `state/seams.yaml`
- Emergency runtime docs
- All slice delivery bundles
## Behavior
1. Check hard-offline fit
2. Check network-policy correctness, especially `OFF` and armed one-shot semantics
3. Check loopback/API/storage choices against the emergency docs
4. Check upstream delta discipline
5. Check whether any delivery claims destructive or VCS write actions
6. Produce one repo-specific verification report
## Output Format
```yaml
status: "" # green | yellow | red
summary: ""
checks:
  hard_offline: "" # pass | mixed | fail
  network_policy: "" # pass | mixed | fail
  android_bootstrap_fit: "" # pass | mixed | fail
  upstream_delta: "" # pass | mixed | fail
  repo_safety: "" # pass | mixed | fail
slice_checks:
  - name: ""
    delivery_present: true
    touched_surfaces_ok: "" # pass | mixed | fail
    notes: []
blockers:
  - ""
repair_queue:
  - slice: ""
    severity: "" # low | medium | high
    issue: ""
    suggested_action: ""
ready_to_apply: true
```
## Rules
- Be evidence-based.
- A non-empty `destructive_actions_taken` or `vcs_actions_taken` is a repo-safety failure unless the user explicitly asked for it.
- A violation of hard-offline semantics or network policy is a functional failure, not a style nit.
- Never rewrite code. Only judge and queue repairs.
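One way to keep the top-level `status` honest against the per-check results is a simple aggregation rule: any `fail` forces `red`, any `mixed` forces at least `yellow`. This is a hypothetical sketch of such a policy, not something the Verify Agent is required to compute mechanically:

```python
def overall_status(checks: dict[str, str]) -> str:
    """Derive green/yellow/red from pass/mixed/fail check results."""
    values = set(checks.values())
    if "fail" in values:
        return "red"      # any hard failure blocks apply/ship review
    if "mixed" in values:
        return "yellow"   # bounded review remains
    return "green"
```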

05-repair-agent/AGENTS.md (new file)

@@ -0,0 +1,56 @@
# Repair Agent
## Role
Rewrite one failed slice delivery locally, without widening scope and without taking destructive or VCS write actions.
## Input
- Current `delivery.md` for one slice
- `state/scope.yaml`
- `state/seams.yaml`
- `state/verification.yaml`
- Emergency runtime docs
## Behavior
1. Read the verification failure for the target slice
2. Patch only the local slice delivery
3. Preserve reuse strategy unless verification proves it is wrong
4. Keep the fix minimal but complete
5. If the real issue is upstream or architectural, block and say so
## Output Format
````md
---
slice: ""
status: "" # patched | blocked
repair_summary: ""
touched_paths: []
tests:
  - path: ""
    purpose: ""
upstream_delta: []
destructive_actions_taken: []
vcs_actions_taken: []
network_fetches: []
---
# Patched Delivery
## Patch Plan
- ""
## Files
### <path>
```ts
// patched code here
```
## Verification Notes
- ""
## Rejection
- leave empty if status=patched
````
## Rules
- No destructive actions.
- No VCS write actions.
- Do not patch sibling slices.
- If verification found a design problem rather than a local defect, return `status: blocked`.

README.md (new file)

@@ -0,0 +1,65 @@
# Emergency Bootstrap Agent Network
Personal asset. Not a deliverable. This is a project-specific development network for the Android emergency bootstrap runtime in this fork.
## Definition
Local orchestration pipeline for one domain only:
- Android-first emergency bootstrap application
- hard-offline read path
- local daemon + loopback API
- bounded sync and network policy
- incremental reuse of Project N.O.M.A.D.
This is not a generic "systems that build systems" pipeline anymore.
## Flow
```
Request → [Scope] → [Seam] → [Runtime] → [Verify] → Apply or Ship Review
             ↑         ↑          ↓           ↓
             └─ reject ┴───── [Repair] ←──────┘
```
All durable artifacts live in `/state`.
## Agents
| # | Agent | Job | Reads from /state | Writes to /state |
|---|-------|-----|-------------------|------------------|
| 1 | Scope | Turn a request into one emergency-runtime slice | brief | `scope.yaml` |
| 2 | Seam | Decide upstream reuse, boundaries, and slice plan | `scope.yaml` + emergency docs | `seams.yaml` |
| 3 | Runtime | Build one implementation slice end-to-end | `slices/X.yaml` + project docs | `outputs/X/delivery.md` |
| 4 | Verify | Judge offline fit, network-policy correctness, and repo safety | all outputs + project docs | `verification.yaml` |
| 5 | Repair | Rewrite one failed slice delivery locally | failed delivery + verification | `outputs/X/delivery.md` |
## Shared Safety
Every agent run gets [SAFETY.md](/Users/damzSSD/Projects/emergency-nomad/pipeline-handheld/SAFETY.md) prepended by the router. That file is where destructive-action and VCS-write autonomy are cut down hard.
## State Layer
```
/state
  status.json        # phase and slice tracking
  scope.yaml         # request reduced to one emergency slice
  seams.yaml         # upstream reuse and slice plan
  verification.yaml  # project-specific verification result
  slices/            # per-slice manifests seeded from seams.yaml
  outputs/           # per-slice delivery bundles
```
See `state/STATE.md` for rules.
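`status.json` is the only state file the router mutates directly. Based on the fields the router's helpers read and write (`phase`, `phases`, `slices`, `project_name`, `last_updated`), a freshly seeded file might be produced like this; the actual initializer is not part of this commit, so treat the function name and defaults as assumptions:

```python
import json
from datetime import datetime
from pathlib import Path

PHASES = ["scope", "seams", "build", "verify", "repair"]

def seed_status(state_dir: str) -> dict:
    """Write an initial status.json matching the shape the router expects."""
    state = {
        "project_name": "",
        "phase": "scope",  # current phase pointer, read by get_phase
        "phases": {p: "pending" for p in PHASES},
        "slices": {},      # seeded later from seams.yaml
        "last_updated": datetime.now().isoformat(),
    }
    path = Path(state_dir)
    path.mkdir(parents=True, exist_ok=True)
    (path / "status.json").write_text(json.dumps(state, indent=2))
    return state
```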
## Scripts
- `scripts/status.py` — inspect live pipeline state
- `scripts/router.sh` — runs the agent network with shared safety rails
- `scripts/brief.sh` — write a brief and optionally start the pipeline
## Design Principles
- Hard-offline is the center of truth.
- Upstream reuse is a seam decision, not an afterthought.
- Work is organized as repo slices, not abstract services.
- Verification is domain-specific: offline behavior, one-shot policy, loopback boundaries, and delta against N.O.M.A.D.
- VCS write actions are not autonomous.

SAFETY.md (new file)

@@ -0,0 +1,34 @@
# Emergency Bootstrap Safety Rails
You are operating inside the downstream emergency fork of Project N.O.M.A.D.
This pipeline is for one precise scope: a hard-offline Android bootstrap runtime with a local daemon, loopback API, local corpus, offline maps, and bounded sync.
## Domain Rails
- Hard-offline read path is sacred.
- Network policy is security-sensitive. `OFF` and armed one-shot behavior must not be weakened casually.
- Prefer reuse from upstream Project N.O.M.A.D. through bounded seams, adapters, and additive paths.
- Avoid broad rewrites of unrelated upstream areas.
## Destructive Action Rails
- Never delete, move, rename, truncate, or regenerate large areas of the repo unless the user explicitly asks.
- Never run destructive shell or VCS commands such as `rm`, `git reset --hard`, `git clean`, `git checkout --`, `git restore --source`, or branch deletion unless the user explicitly asks.
- Never alter secrets, `.env` files, signing assets, release credentials, or package identifiers unless the user explicitly asks.
## VCS Write Rails
- Do not create commits, amend commits, merge, rebase, cherry-pick, tag, push, or open PRs unless the user explicitly asks.
- Read-only git inspection is allowed.
- Default behavior is to leave changes in the working tree.
## Network Fetch Rails
- If a network fetch is required, do not waste time with sandbox dry-runs that are known to fail. Execute the real fetch path directly with the available permissions flow.
- Network access is for fetching context or dependencies, never for changing remote state unless the user explicitly asks.
## Delivery Rails
- Prefer additive changes in bounded paths.
- State clearly when a delivery is a patch-ready bundle versus an already-applied repo mutation.
- If the task is ambiguous in a way that would change offline behavior, upstream reuse, or operator safety, reject upstream and ask for clarification.

USAGE.md (new file)

@@ -0,0 +1,93 @@
# USAGE.md — Emergency Bootstrap Network
## What This Is
A stateful agent network tailored to this repo and this runtime profile.
Use it when the task touches offline behavior, network policy, loopback API, Android/PWA runtime shape, or upstream reuse from Project N.O.M.A.D.
## Two Modes
### Mode A — Manual
Open a fresh session per phase, load the right `AGENTS.md`, feed the right files from `/state`, save the output back to `/state`.
### Mode B — CLI via `router.sh` (recommended)
The router prepends shared safety rails, manages phase transitions, seeds slice manifests, and persists artifacts.
```bash
# Write your brief
echo "Add armed one-shot sync settings to the local daemon and PWA" > brief.txt
# Run the network
./scripts/router.sh scope brief.txt
./scripts/router.sh seams
./scripts/router.sh build-all
./scripts/router.sh verify
# Or work slice-by-slice
./scripts/router.sh build network-policy-ui
./scripts/router.sh repair network-policy-ui
# Status
./scripts/router.sh status
```
## Phases
### 1. Scope
- Context: `01-scope-agent/AGENTS.md`
- Input: raw task brief
- Output: `state/scope.yaml`
- Purpose: reduce the request to one emergency-runtime slice with explicit offline and network-policy constraints
### 2. Seam
- Context: `02-seam-agent/AGENTS.md`
- Input: `state/scope.yaml` plus the emergency docs
- Output: `state/seams.yaml`
- Purpose: decide what to reuse from N.O.M.A.D., where to adapt, and which implementation slices exist
### 3. Runtime
- Context: `03-runtime-agent/AGENTS.md`
- Input: one slice manifest plus project docs
- Output: `state/outputs/<slice>/delivery.md`
- Purpose: produce one end-to-end slice delivery bundle for this repo
### 4. Verify
- Context: `04-verify-agent/AGENTS.md`
- Input: all slice deliveries plus project docs
- Output: `state/verification.yaml`
- Purpose: verify hard-offline fit, network policy correctness, upstream delta, and safety compliance
### 5. Repair
- Context: `05-repair-agent/AGENTS.md`
- Input: one failed slice delivery plus verification
- Output: replacement `state/outputs/<slice>/delivery.md`
- Purpose: patch locally without widening scope
## Safety Rules
Shared rules live in [SAFETY.md](/Users/damzSSD/Projects/emergency-nomad/pipeline-handheld/SAFETY.md).
Most important:
- no destructive actions without explicit user request
- no commits, pushes, rebases, merges, or other VCS writes without explicit user request
- network fetches, when needed, should be executed directly rather than sandbox-dry-run first
## When To Use This
Use it for:
- network policy behavior
- daemon/PWA/API slices
- upstream seam decisions
- offline search/maps/content-sync work
Do not use it for:
- trivial docs
- tiny refactors
- one-file cosmetic fixes
## What You Get
- a scoped request
- a seam decision tied to this fork
- patch-ready delivery bundles per slice
- a verification report that actually cares about the emergency runtime

scripts/brief.sh (new file)

@@ -0,0 +1,41 @@
#!/usr/bin/env bash
# =============================================================================
# BRIEF — Interactive brief writer
# Writes your idea to brief.txt then optionally kicks off the Scope Agent
# =============================================================================
set -euo pipefail
PIPELINE_DIR="$(cd "$(dirname "$0")/.." && pwd)"
BRIEF_FILE="$PIPELINE_DIR/brief.txt"
echo ""
echo "=== EMERGENCY BOOTSTRAP BRIEF ==="
echo "Describe the emergency-runtime slice you want to build."
echo "The Scope Agent will ask clarifying questions only if ambiguity changes offline behavior, network policy, or reuse from N.O.M.A.D."
echo ""
echo "Type your brief (multi-line). Press CTRL+D when done."
echo "---"
# Read multi-line input
BRIEF=""
while IFS= read -r line; do
BRIEF+="$line"$'\n'
done
if [[ -z "${BRIEF//[[:space:]]/}" ]]; then
echo "Empty brief. Aborted."
exit 1
fi
echo "$BRIEF" > "$BRIEF_FILE"
echo "---"
echo "Brief saved to: $BRIEF_FILE"
echo ""
read -p "Run Scope Agent now? [y/N] " -n 1 -r
echo ""
if [[ $REPLY =~ ^[Yy]$ ]]; then
"$PIPELINE_DIR/scripts/router.sh" scope "$BRIEF_FILE"
fi

scripts/router.sh (new file)

@@ -0,0 +1,602 @@
#!/usr/bin/env bash
# =============================================================================
# EMERGENCY BOOTSTRAP ROUTER
# Project-specific agent network with shared safety rails.
# =============================================================================
set -euo pipefail
PIPELINE_DIR="$(cd "$(dirname "$0")/.." && pwd)"
STATE_DIR="$PIPELINE_DIR/state"
STATUS_FILE="$STATE_DIR/status.json"
SAFETY_FILE="$PIPELINE_DIR/SAFETY.md"
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
CYAN='\033[0;36m'
NC='\033[0m'
log() { echo -e "${CYAN}[router]${NC} $1"; }
ok() { echo -e "${GREEN}[✓]${NC} $1"; }
warn() { echo -e "${YELLOW}[!]${NC} $1"; }
fail() { echo -e "${RED}[✗]${NC} $1"; exit 1; }
ensure_state_dirs() {
mkdir -p "$STATE_DIR/slices" "$STATE_DIR/outputs"
}
get_phase() {
python3 -c "import json; print(json.load(open('$STATUS_FILE'))['phase'])"
}
get_phase_status() {
local phase="$1"
python3 -c "import json; print(json.load(open('$STATUS_FILE'))['phases'].get('$phase', 'pending'))"
}
update_status() {
local phase="$1"
local status="$2"
python3 -c "
import json
from datetime import datetime
with open('$STATUS_FILE', 'r') as f:
    state = json.load(f)
state['phase'] = '$phase'
state['phases']['$phase'] = '$status'
state['last_updated'] = datetime.now().isoformat()
with open('$STATUS_FILE', 'w') as f:
    json.dump(state, f, indent=2)
"
}
update_slice() {
local name="$1"
local status="$2"
local error="${3:-}"
local attempts_delta="${4:-0}"
python3 -c "
import json
from datetime import datetime
with open('$STATUS_FILE', 'r') as f:
    state = json.load(f)
slice_info = state.setdefault('slices', {}).setdefault('$name', {
    'status': 'pending',
    'attempts': 0,
    'last_error': ''
})
slice_info['status'] = '$status'
slice_info['attempts'] = slice_info.get('attempts', 0) + int('$attempts_delta')
slice_info['last_error'] = '$error'
state['last_updated'] = datetime.now().isoformat()
with open('$STATUS_FILE', 'w') as f:
    json.dump(state, f, indent=2)
"
}
get_slices() {
python3 -c "
import json
state = json.load(open('$STATUS_FILE'))
for name, info in state.get('slices', {}).items():
    print(f\"{name}:{info.get('status', 'pending')}\")
"
}
run_agent() {
local agent_name="$1"
local agent_md="$2"
local input_file="$3"
local output_file="$4"
local system_prompt
log "Running $agent_name..."
log " System: $agent_md"
log " Input: $input_file"
log " Output: $output_file"
[[ -f "$agent_md" ]] || fail "Agent file not found: $agent_md"
[[ -f "$input_file" ]] || fail "Input file not found: $input_file"
if [[ -f "$SAFETY_FILE" ]]; then
system_prompt="$(cat "$SAFETY_FILE")"$'\n\n'"$(cat "$agent_md")"
else
system_prompt="$(cat "$agent_md")"
fi
claude -p --system-prompt "$system_prompt" "$(cat "$input_file")" > "$output_file"
if [[ -s "$output_file" ]]; then
ok "$agent_name completed → $output_file"
else
fail "$agent_name produced empty output"
fi
}
markdown_status() {
local delivery_file="$1"
python3 -c "
from pathlib import Path
import yaml
path = Path('$delivery_file')
text = path.read_text(encoding='utf-8') if path.exists() else ''
status = 'blocked'
if text.startswith('---\\n'):
    end = text.find('\\n---\\n', 4)
    if end != -1:
        meta = yaml.safe_load(text[4:end]) or {}
        status = meta.get('status', status)
print(status)
" 2>/dev/null || echo "blocked"
}
yaml_status() {
local file="$1"
local key="${2:-status}"
python3 -c "
import yaml
from pathlib import Path
path = Path('$file')
value = 'failed'
if path.exists():
    data = yaml.safe_load(path.read_text(encoding='utf-8')) or {}
    value = data.get('$key', value)
print(value)
" 2>/dev/null || echo "failed"
}
sync_project_name_from_yaml() {
local file="$1"
python3 -c "
import json
from pathlib import Path
import yaml
status_path = Path('$STATUS_FILE')
data_path = Path('$file')
with open(status_path, 'r') as f:
    state = json.load(f)
if data_path.exists():
    data = yaml.safe_load(data_path.read_text(encoding='utf-8')) or {}
    project_name = data.get('project_name', '').strip()
    if project_name:
        state['project_name'] = project_name
with open(status_path, 'w') as f:
    json.dump(state, f, indent=2)
"
}
seed_slices_from_seams() {
ensure_state_dirs
python3 -c "
import json
import re
from pathlib import Path
import yaml
status_path = Path('$STATUS_FILE')
seams_path = Path('$STATE_DIR/seams.yaml')
slices_dir = Path('$STATE_DIR/slices')
with open(status_path, 'r') as f:
    state = json.load(f)
with open(seams_path, 'r') as f:
    seams = yaml.safe_load(f) or {}
slices = seams.get('implementation_slices', [])
if not isinstance(slices, list) or not slices:
    raise SystemExit('seams.yaml does not contain implementation_slices')
project_name = seams.get('project_name', '').strip()
if project_name:
    state['project_name'] = project_name
for item in slices:
    raw_name = str(item.get('name', '')).strip()
    if not raw_name:
        continue
    slice_id = re.sub(r'[^a-zA-Z0-9._-]+', '-', raw_name).strip('-').lower()
    if not slice_id:
        continue
    manifest_path = slices_dir / f'{slice_id}.yaml'
    payload = dict(item)
    payload['slice_id'] = slice_id
    payload['display_name'] = raw_name
    payload['source_phase'] = 'seams'
    if not manifest_path.exists():
        manifest_path.write_text(yaml.safe_dump(payload, sort_keys=False), encoding='utf-8')
    info = state.setdefault('slices', {}).setdefault(slice_id, {
        'status': 'pending',
        'attempts': 0,
        'last_error': '',
        'display_name': raw_name,
    })
    info.setdefault('display_name', raw_name)
with open(status_path, 'w') as f:
    json.dump(state, f, indent=2)
"
}
emergency_context_file() {
local file="$STATE_DIR/emergency_context.md"
{
if [[ -f "$PIPELINE_DIR/../docs/emergency/README.md" ]]; then
echo "# Emergency Profile"
cat "$PIPELINE_DIR/../docs/emergency/README.md"
echo
fi
if [[ -f "$PIPELINE_DIR/../docs/emergency/ARCHITECTURE.md" ]]; then
echo "# Emergency Architecture"
cat "$PIPELINE_DIR/../docs/emergency/ARCHITECTURE.md"
echo
fi
if [[ -f "$PIPELINE_DIR/../docs/emergency/LOCAL_API.md" ]]; then
echo "# Local API"
cat "$PIPELINE_DIR/../docs/emergency/LOCAL_API.md"
echo
fi
if [[ -f "$PIPELINE_DIR/../docs/emergency/SEAM_MAP.md" ]]; then
echo "# Seam Map"
cat "$PIPELINE_DIR/../docs/emergency/SEAM_MAP.md"
echo
fi
if [[ -f "$PIPELINE_DIR/../docs/emergency/COLLECTIONS_SEAM.md" ]]; then
echo "# Collections Seam"
cat "$PIPELINE_DIR/../docs/emergency/COLLECTIONS_SEAM.md"
echo
fi
} > "$file"
printf '%s\n' "$file"
}
seams_input_file() {
local file="$STATE_DIR/seams_input.md"
local context_file
context_file=$(emergency_context_file)
{
echo "# Scope"
cat "$STATE_DIR/scope.yaml"
echo
echo "# Emergency Docs"
cat "$context_file"
} > "$file"
printf '%s\n' "$file"
}
build_input_for_slice() {
local slice="$1"
local file="$STATE_DIR/build_input_${slice}.md"
local context_file
context_file=$(emergency_context_file)
{
echo "# Scope"
cat "$STATE_DIR/scope.yaml"
echo
echo "# Seams"
cat "$STATE_DIR/seams.yaml"
echo
echo "# Slice Manifest"
cat "$STATE_DIR/slices/${slice}.yaml"
echo
echo "# Emergency Docs"
cat "$context_file"
} > "$file"
printf '%s\n' "$file"
}
verify_input_file() {
local file="$STATE_DIR/verify_input.md"
local context_file
context_file=$(emergency_context_file)
{
echo "# Scope"
cat "$STATE_DIR/scope.yaml"
echo
echo "# Seams"
cat "$STATE_DIR/seams.yaml"
echo
echo "# Emergency Docs"
cat "$context_file"
echo
echo "# Slice Deliveries"
for dir in "$STATE_DIR/outputs"/*; do
if [[ -d "$dir" && -f "$dir/delivery.md" ]]; then
echo
echo "## $(basename "$dir")"
cat "$dir/delivery.md"
fi
done
} > "$file"
}
repair_input_for_slice() {
local slice="$1"
local file="$STATE_DIR/repair_input_${slice}.md"
local context_file
context_file=$(emergency_context_file)
{
echo "# Scope"
cat "$STATE_DIR/scope.yaml"
echo
echo "# Seams"
cat "$STATE_DIR/seams.yaml"
echo
echo "# Slice Manifest"
cat "$STATE_DIR/slices/${slice}.yaml"
echo
echo "# Verification"
cat "$STATE_DIR/verification.yaml"
echo
echo "# Current Delivery"
cat "$STATE_DIR/outputs/${slice}/delivery.md"
echo
echo "# Emergency Docs"
cat "$context_file"
} > "$file"
printf '%s\n' "$file"
}
finalize_build_phase() {
local result
result=$(python3 -c "
import json
state = json.load(open('$STATUS_FILE'))
slices = state.get('slices', {})
if not slices:
    print('failed')
else:
    statuses = {info.get('status', 'pending') for info in slices.values()}
    if statuses <= {'built', 'patched'}:
        print('done')
    elif 'building' in statuses or 'pending' in statuses:
        print('in_progress')
    else:
        print('failed')
")
case "$result" in
done)
update_status "build" "done"
ok "Build phase complete"
;;
in_progress)
update_status "build" "in_progress"
warn "Build phase still in progress"
;;
*)
update_status "build" "failed"
warn "Build phase has blocked slices"
;;
esac
}
run_scope() {
local brief="$1"
[[ -f "$brief" ]] || fail "Brief file required. Usage: ./router.sh scope <brief.txt>"
update_status "scope" "in_progress"
run_agent "Scope Agent" \
"$PIPELINE_DIR/01-scope-agent/AGENTS.md" \
"$brief" \
"$STATE_DIR/scope.yaml"
sync_project_name_from_yaml "$STATE_DIR/scope.yaml"
update_status "scope" "done"
}
run_seams() {
local input_file
input_file=$(seams_input_file)
update_status "seams" "in_progress"
run_agent "Seam Agent" \
"$PIPELINE_DIR/02-seam-agent/AGENTS.md" \
"$input_file" \
"$STATE_DIR/seams.yaml"
sync_project_name_from_yaml "$STATE_DIR/seams.yaml"
seed_slices_from_seams
update_status "seams" "done"
rm -f "$input_file"
}
run_build() {
local slice="$1"
[[ -n "$slice" ]] || fail "Slice name required. Usage: ./router.sh build <slice-name>"
local manifest="$STATE_DIR/slices/${slice}.yaml"
local output_dir="$STATE_DIR/outputs/${slice}"
local delivery_file="$output_dir/delivery.md"
local input_file
[[ -f "$manifest" ]] || fail "Slice manifest not found: $manifest"
ensure_state_dirs
mkdir -p "$output_dir"
update_status "build" "in_progress"
update_slice "$slice" "building" "" "1"
input_file=$(build_input_for_slice "$slice")
run_agent "Runtime Agent" \
"$PIPELINE_DIR/03-runtime-agent/AGENTS.md" \
"$input_file" \
"$delivery_file"
local delivery_status
delivery_status=$(markdown_status "$delivery_file")
case "$delivery_status" in
built)
update_slice "$slice" "built"
ok "$slice built"
;;
*)
update_slice "$slice" "blocked" "build_blocked"
warn "$slice blocked — review delivery bundle"
;;
esac
rm -f "$input_file"
}
run_build_all() {
update_status "build" "in_progress"
seed_slices_from_seams
log "Building all pending slices..."
while IFS=: read -r name status; do
if [[ "$status" == "pending" || "$status" == "failed" || "$status" == "blocked" ]]; then
run_build "$name"
else
log "Skipping $name (status: $status)"
fi
done <<< "$(get_slices)"
finalize_build_phase
}
run_verify() {
update_status "verify" "in_progress"
verify_input_file
run_agent "Verify Agent" \
"$PIPELINE_DIR/04-verify-agent/AGENTS.md" \
"$STATE_DIR/verify_input.md" \
"$STATE_DIR/verification.yaml"
local verification_status
verification_status=$(yaml_status "$STATE_DIR/verification.yaml")
case "$verification_status" in
green)
update_status "verify" "done"
ok "Verification green — ready for apply or ship review"
;;
yellow)
update_status "verify" "done"
warn "Verification yellow — bounded review remains"
;;
*)
update_status "verify" "failed"
warn "Verification red — repair or upstream rethink required"
;;
esac
}
run_repair() {
    local slice="$1"
    [[ -n "$slice" ]] || fail "Slice name required. Usage: ./router.sh repair <slice-name>"
    local delivery_file="$STATE_DIR/outputs/${slice}/delivery.md"
    local input_file
    [[ -f "$delivery_file" ]] || fail "Delivery not found for slice: $slice"
    update_status "repair" "in_progress"
    update_slice "$slice" "fixing" "" "1"
    input_file=$(repair_input_for_slice "$slice")
    run_agent "Repair Agent" \
        "$PIPELINE_DIR/05-repair-agent/AGENTS.md" \
        "$input_file" \
        "$delivery_file"
    local delivery_status
    delivery_status=$(markdown_status "$delivery_file")
    case "$delivery_status" in
        patched|built)
            update_slice "$slice" "patched"
            update_status "repair" "done"
            ok "$slice patched — re-run verify"
            ;;
        *)
            update_slice "$slice" "blocked" "repair_blocked"
            update_status "repair" "failed"
            warn "$slice remains blocked — review delivery bundle"
            ;;
    esac
    rm -f "$input_file"
}
show_status() {
    python3 "$PIPELINE_DIR/scripts/status.py"
}
usage() {
    echo "Usage: ./router.sh <command> [args]"
    echo
    echo "Commands:"
    echo "  scope <brief.txt>   Run Scope Agent on a brief"
    echo "  seams               Run Seam Agent on scope.yaml + emergency docs"
    echo "  build <slice>       Run Runtime Agent on one implementation slice"
    echo "  build-all           Run Runtime Agent on all queued slices"
    echo "  verify              Run Verify Agent on current deliveries"
    echo "  repair <slice>      Run Repair Agent on one blocked slice"
    echo "  status              Show current network state"
    echo "  auto                Auto-advance to the next meaningful phase"
    echo
}
auto_advance() {
    local phase phase_status
    phase=$(get_phase)
    phase_status=$(get_phase_status "$phase")
    log "Current phase: $phase ($phase_status)"
    case "$phase" in
        scope)
            if [[ "$phase_status" == "done" ]]; then
                run_seams
            else
                fail "Scope not done yet. Run: ./router.sh scope <brief.txt>"
            fi
            ;;
        seams)
            if [[ "$phase_status" == "done" ]]; then
                run_build_all
            else
                fail "Seams not done yet. Run: ./router.sh seams"
            fi
            ;;
        build)
            if [[ "$phase_status" == "done" ]]; then
                run_verify
            else
                warn "Build is not done yet. Resolve blocked slices or re-run build-all."
            fi
            ;;
        verify)
            if [[ "$phase_status" == "done" ]]; then
                ok "Verification complete — ready for apply or ship review"
            else
                warn "Verification failed — use repair or rethink seams"
            fi
            ;;
        repair)
            if [[ "$phase_status" == "done" ]]; then
                run_verify
            else
                warn "Repair phase failed — review the blocked slice"
            fi
            ;;
        *)
            warn "Unknown phase: $phase"
            show_status
            ;;
    esac
}
case "${1:-status}" in
    scope)     run_scope "${2:-}" ;;
    seams)     run_seams ;;
    build)     run_build "${2:-}" ;;
    build-all) run_build_all ;;
    verify)    run_verify ;;
    repair)    run_repair "${2:-}" ;;
    status)    show_status ;;
    auto)      auto_advance ;;
    help|-h|--help) usage ;;
    *)         usage ;;
esac

#!/usr/bin/env python3
"""Status reader for the emergency bootstrap network."""
import json
import os

STATE_DIR = os.path.join(os.path.dirname(__file__), '..', 'state')
STATUS_FILE = os.path.join(STATE_DIR, 'status.json')


def read_status():
    if not os.path.exists(STATUS_FILE):
        print("No status.json found. Pipeline not started.")
        return
    with open(STATUS_FILE, 'r') as f:
        status = json.load(f)
    print(f"=== {status.get('project_name', 'Unnamed Project')} ===")
    print(f"Schema: v{status.get('schema_version', '?')}")
    print(f"Current phase: {status.get('phase', '?')}")
    print(f"Last updated: {status.get('last_updated', 'never')}")
    print()
    # Phase overview
    print("PHASES:")
    phases = status.get('phases', {})
    phase_icons = {
        "done": "✓",
        "in_progress": "▶",
        "failed": "✗",
        "pending": "·",
    }
    for phase, state in phases.items():
        icon = phase_icons.get(state, "?")
        print(f"  {icon} {phase}: {state}")
    print()
    # Slices
    slices = status.get('slices', {})
    if slices:
        print("SLICES:")
        for name, info in slices.items():
            s = info.get('status', '?')
            attempts = info.get('attempts', 0)
            icon = {
                "built": "✓",
                "patched": "✓",
                "building": "▶",
                "fixing": "▶",
                "blocked": "⛔",
                "failed": "✗",
                "pending": "·",
            }.get(s, "?")
            line = f"  {icon} {name}: {s} (attempts: {attempts})"
            if info.get('last_error'):
                line += f" — {info['last_error']}"
            if info.get('display_name') and info['display_name'] != name:
                line += f" [{info['display_name']}]"
            print(line)
        print()
    if status.get('notes'):
        print(f"Notes: {status['notes']}")


if __name__ == '__main__':
    read_status()

# /state — Emergency Bootstrap State Layer
This folder is the spine of the project-specific network. Every phase reads from here and writes to here.
## Structure
```
/state
  status.json          # where you are, what slices are blocked
  scope.yaml           # reduced task scope for this runtime
  seams.yaml           # upstream reuse and slice plan
  verification.yaml    # repo-specific verification result
  slices/
    slice-a.yaml       # seeded manifest per implementation slice
    slice-b.yaml
  outputs/
    slice-a/
      delivery.md      # current delivery bundle for the slice
    slice-b/
      delivery.md
```
## Rules
1. Every phase reads input from `/state`
2. Every phase writes durable output to `/state`
3. `status.json` is updated after every phase transition
4. If it is not in `/state`, downstream cannot rely on it
5. Slice manifests are seeded from `seams.yaml`
6. Safety rails still apply even when a delivery is blocked
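Rules 2 and 3 can be honored with a small helper that rewrites `status.json` durably after each transition. A minimal sketch, assuming the `status.json` schema shown in this folder; the temp-file-plus-rename write is an implementation choice for crash safety, not something the repo mandates:

```python
import datetime
import json
import os
import tempfile


def update_phase(state_dir, phase, phase_status):
    """Durably record a phase transition in status.json (rules 2-3)."""
    path = os.path.join(state_dir, "status.json")
    with open(path) as f:
        status = json.load(f)
    status["phase"] = phase
    status["phases"][phase] = phase_status
    status["last_updated"] = datetime.datetime.now(datetime.timezone.utc).isoformat()
    # Write to a temp file in the same directory, then rename over the
    # original, so a crash mid-write never leaves a half-written status.json.
    fd, tmp = tempfile.mkstemp(dir=state_dir, suffix=".tmp")
    with os.fdopen(fd, "w") as f:
        json.dump(status, f, indent=2)
    os.replace(tmp, path)
```

The rename step matters because downstream phases resume from this file alone (rule 4); a torn write would strand the whole network.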
## Status Values
### Phase status
- `pending`
- `in_progress`
- `done`
- `failed`
### Slice status
- `pending`
- `building`
- `built`
- `blocked`
- `fixing`
- `patched`
- `failed`
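As the router drives them, the slice statuses form a small state machine. A sketch of the transition set, inferred from the router's build and repair paths rather than from any normative spec:

```python
# Slice status transitions as the router drives them (inferred, not normative).
SLICE_TRANSITIONS = {
    "pending": {"building"},            # build <slice> / build-all
    "failed": {"building"},             # build-all retries failed slices
    "blocked": {"building", "fixing"},  # build-all retry, or repair <slice>
    "building": {"built", "blocked"},   # Runtime Agent outcome
    "fixing": {"patched", "blocked"},   # Repair Agent outcome
    "built": set(),                     # stable until the next verify cycle
    "patched": set(),
}


def is_valid_transition(old, new):
    """True if the router would ever move a slice from old to new."""
    return new in SLICE_TRANSITIONS.get(old, set())
```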
## Resume Rule
1. Open `status.json`
2. Find current phase and any blocked slices
3. Load the right `AGENTS.md`
4. Feed it the relevant artifacts from `/state`
5. Continue from the last durable artifact, not from memory

{
  "schema_version": "3.1",
  "project_name": "emergency-bootstrap",
  "phase": "scope",
  "phases": {
    "scope": "pending",
    "seams": "pending",
    "build": "pending",
    "verify": "pending",
    "repair": "pending"
  },
  "slices": {},
  "last_updated": "",
  "notes": ""
}