Compare commits

...

98 Commits

Author SHA1 Message Date
bellman
879962b826 map g004 event report verification lanes
Give the leader a durable integration map for G004 event/report contracts, including ownership boundaries, focused verification commands, hazards, and worker commit evidence.

Constraint: Task 5 is audit/coordination only and leader owns .omx/ultragoal checkpoints.
Rejected: Expanding into approval-token implementation | worker-2 owns that lane in the G004 split.
Confidence: high
Scope-risk: narrow
Directive: Re-run runtime and targeted tools contracts after integrating worker branches; do not infer Codex goal state from this doc.
Tested: map content backed by task 1/2/4 verification evidence and current git history.
Not-tested: full workspace suite.
2026-05-14 18:14:24 +09:00
bellman
0b0d55d7ec omx(team): auto-checkpoint worker-1 [1] 2026-05-14 18:11:53 +09:00
bellman
7214573f35 Keep approval token contracts in their own runtime module
Constraint: G004 task 3 now owns approval-token contracts through rust/crates/runtime/src/approval_tokens.rs, while auto-integration left a duplicate unused copy in permissions.rs.
Rejected: suppressing dead-code warnings | the duplicate implementation was obsolete after the dedicated module landed.
Confidence: high
Scope-risk: narrow
Directive: Keep permission-mode authorization in permissions.rs and approval-token policy handoff in approval_tokens.rs.
Tested: cargo fmt --manifest-path rust/Cargo.toml --all -- --check; cargo check --manifest-path rust/Cargo.toml -p runtime; cargo test --manifest-path rust/Cargo.toml -p runtime approval_token -- --nocapture; cargo test --manifest-path rust/Cargo.toml -p runtime --test g004_conformance -- --nocapture
Not-tested: full workspace test suite; G004 tasks 2/4/5 remain non-terminal.

Co-authored-by: OmX <omx@oh-my-codex.dev>
2026-05-14 18:11:20 +09:00
bellman
dcf11f8190 harden report contract projection identity
Add a runtime report schema v1 contract so downstream consumers can negotiate structured fields, verify canonical report identity, and audit projection redactions without reverse-engineering prose.

Constraint: Task 2 scope was limited to report schema/projection/redaction modules/docs/tests and prohibited .omx/ultragoal mutation.
Rejected: Wiring into broader CLI report emitters | kept this slice focused on the reusable contract and deterministic fixtures.
Confidence: high
Scope-risk: narrow
Directive: Future report emitters should build canonical payloads through CanonicalReportV1 before projecting audience-specific views.
Tested: cargo test -p runtime report_schema -- --nocapture; cargo test -p runtime lane_events -- --nocapture; cargo check -p runtime
Not-tested: cargo clippy -p runtime --all-targets -- -D warnings remains blocked by pre-existing non-task warnings in compact.rs, file_ops.rs, policy_engine.rs, sandbox.rs.
2026-05-14 18:09:36 +09:00
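The canonical-identity contract described above can be sketched with the standard library alone. This is a minimal sketch, not the runtime's CanonicalReportV1 API: `canonical_identity` and the field names are invented for illustration. The idea is that every projected view carries an identity derived from a fixed-order serialization of the canonical fields, so redacted projections stay auditable against the same digest.

```rust
use std::collections::BTreeMap;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Illustrative canonical-identity scheme (field names invented):
/// hash fields in a fixed key order so the digest does not depend
/// on how the report payload was assembled.
fn canonical_identity(fields: &BTreeMap<&str, &str>) -> u64 {
    let mut hasher = DefaultHasher::new();
    // BTreeMap iterates in sorted key order, so insertion order
    // cannot change the resulting identity.
    for (key, value) in fields {
        key.hash(&mut hasher);
        value.hash(&mut hasher);
    }
    hasher.finish()
}

fn main() {
    let mut a = BTreeMap::new();
    a.insert("schema", "report.v1");
    a.insert("summary", "lane events verified");

    let mut b = BTreeMap::new();
    b.insert("summary", "lane events verified");
    b.insert("schema", "report.v1");

    // Same fields, different insertion order: identical identity.
    assert_eq!(canonical_identity(&a), canonical_identity(&b));
    println!("canonical identity is insertion-order independent");
}
```

A real implementation would use a stable serialization and a cryptographic digest rather than `DefaultHasher`, whose output is only stable within one process.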
bellman
f79ca989ba omx(team): merge worker-3 2026-05-14 18:07:29 +09:00
bellman
e1641aa010 Prove G004 contract bundles are machine-checkable
Constraint: Task 6 needed a regression harness without overwriting Task 1-4 implementation files.
Rejected: Editing lane_events/report-schema/approval-token owners directly | would create shared-file conflicts with active lanes.
Confidence: high
Scope-risk: narrow
Directive: Keep this harness as a consumer-facing conformance layer; extend fixtures after Task 2/3 land schema/token producers.
Tested: cd rust && cargo test -p runtime --test g004_conformance -- --nocapture; cd rust && cargo check -p runtime; cd rust && cargo fmt --check; git diff --check
Not-tested: cargo clippy -p runtime --tests -- -D warnings fails on pre-existing runtime lint debt outside changed files.
2026-05-14 18:07:11 +09:00
bellman
5cebdd999d omx(team): auto-checkpoint worker-2 [3] 2026-05-14 18:07:05 +09:00
bellman
bf533d77a7 task: approval token chain
Add a runtime approval-token ledger so policy-blocked actions can require scoped owner grants, consume one-time tokens, reject replay, and retain delegation traceability.

Constraint: Task 3 scope is the G004 approval-token chain for runtime event/report contract families.
Rejected: Extending the existing permission prompt path directly | the token contract can be tested independently without changing live tool authorization behavior.
Confidence: high
Scope-risk: narrow
Directive: Keep approval grants scoped to policy/action/repo/branch before wiring them into external execution paths.
Tested: cargo check --manifest-path rust/Cargo.toml --workspace; cargo test --manifest-path rust/crates/runtime/Cargo.toml; cargo test --manifest-path rust/crates/runtime/Cargo.toml approval_token -- --nocapture
Not-tested: cargo clippy --manifest-path rust/crates/runtime/Cargo.toml --all-targets -- -D warnings is blocked by pre-existing warnings in compact.rs, file_ops.rs, policy_engine.rs, and sandbox.rs.
2026-05-14 18:07:03 +09:00
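The consume-once, reject-replay contract this commit describes can be sketched in a few lines of standard-library Rust. `TokenLedger` and its methods are illustrative names, not the runtime's actual approval-token API; real grants would also carry policy/action/repo/branch scope as the Directive requires.

```rust
use std::collections::HashSet;

/// Illustrative one-time approval-token ledger: a granted token may
/// authorize exactly one action; a second presentation is rejected
/// as replay, and unknown tokens are rejected outright.
struct TokenLedger {
    issued: HashSet<String>,
    consumed: HashSet<String>,
}

impl TokenLedger {
    fn new() -> Self {
        TokenLedger { issued: HashSet::new(), consumed: HashSet::new() }
    }

    /// Record a granted token for one pending policy-blocked action.
    fn grant(&mut self, token: &str) {
        self.issued.insert(token.to_string());
    }

    /// Consume a token: succeeds once, then fails on replay; tokens
    /// that were never granted fail immediately.
    fn consume(&mut self, token: &str) -> Result<(), &'static str> {
        if self.consumed.contains(token) {
            return Err("replay rejected");
        }
        if !self.issued.contains(token) {
            return Err("unknown token");
        }
        self.consumed.insert(token.to_string());
        Ok(())
    }
}

fn main() {
    let mut ledger = TokenLedger::new();
    ledger.grant("grant-42");
    assert!(ledger.consume("grant-42").is_ok());
    assert_eq!(ledger.consume("grant-42"), Err("replay rejected"));
    assert_eq!(ledger.consume("grant-99"), Err("unknown token"));
    println!("one-time token contract holds");
}
```

Keeping consumed tokens in a separate set (rather than deleting from `issued`) preserves the delegation trace the commit message calls for.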
bellman
e34209ff7f omx(team): auto-checkpoint worker-2 [3] 2026-05-14 18:07:00 +09:00
bellman
ff37d395bb Stabilize G004 contract integration after worker merges
Constraint: G004 worker integrations introduced unparseable approval-token tests and a conformance path bug that blocked leader verification.
Rejected: waiting for another auto-integration cycle | local leader verification had exact parse and fixture failures to repair safely.
Confidence: high
Scope-risk: moderate
Directive: Keep approval-token regression tests in cfg(test) modules or integration tests, never inside type definitions.
Tested: cargo fmt --manifest-path rust/Cargo.toml --all -- --check; cargo check --manifest-path rust/Cargo.toml -p runtime; cargo test --manifest-path rust/Cargo.toml -p runtime approval_token -- --nocapture; cargo test --manifest-path rust/Cargo.toml -p runtime --test g004_conformance -- --nocapture; python3 .github/scripts/check_doc_source_of_truth.py
Not-tested: full workspace test suite; remaining G004 tasks 1-5 still non-terminal.

Co-authored-by: OmX <omx@oh-my-codex.dev>
2026-05-14 18:06:14 +09:00
bellman
f8d744bb37 omx(team): auto-checkpoint worker-1 [1] 2026-05-14 18:05:26 +09:00
bellman
c8c936ede1 omx(team): auto-checkpoint worker-3 [6] 2026-05-14 18:00:23 +09:00
bellman
57b3e3258b omx(team): auto-checkpoint worker-2 [3] 2026-05-14 18:00:19 +09:00
bellman
06e545325d omx(team): auto-checkpoint worker-1 [1] 2026-05-14 18:00:16 +09:00
bellman
ed3ccae844 omx(team): auto-checkpoint worker-4 [unknown] 2026-05-14 17:58:49 +09:00
bellman
f4e08d0ecf omx(team): auto-checkpoint worker-2 [3] 2026-05-14 17:58:46 +09:00
bellman
030f2ef20f omx(team): merge worker-2 2026-05-14 17:57:59 +09:00
bellman
16d6525de4 omx(team): auto-checkpoint worker-2 [3] 2026-05-14 17:57:59 +09:00
bellman
42c79218c9 Merge commit '4e0211d36c0180e787e73f96d52381f40a4c7ac4' 2026-05-14 17:54:45 +09:00
bellman
4e0211d36c Expose boot preflight evidence in diagnostic JSON
Task 5 needed machine-readable status/doctor evidence for reliable worker boot checks. This keeps the contract local to CLI diagnostics and validates relative trustedRoots handling for preflight allowlist decisions.

Constraint: G003 worker task forbids .omx/ultragoal mutation and scopes changes to session/preflight/doctor JSON surfaces.

Rejected: broad runtime worker boot refactor | other workers own worker_boot.rs and trust resolver implementation lanes.

Confidence: high

Scope-risk: narrow

Directive: Keep boot_preflight JSON fields stable for downstream automation; add fields rather than renaming existing keys.

Tested: cargo fmt --manifest-path rust/Cargo.toml --package rusty-claude-cli; cargo check --manifest-path rust/Cargo.toml -p rusty-claude-cli; cargo test --manifest-path rust/Cargo.toml -p rusty-claude-cli boot_preflight_snapshot_reports_machine_readable_contract_fields -- --nocapture; cargo test --manifest-path rust/Cargo.toml -p rusty-claude-cli branch_freshness_parses_ahead_behind_status_header -- --nocapture; cargo test --manifest-path rust/Cargo.toml -p rusty-claude-cli status_json_surfaces_session_lifecycle_for_clawhip -- --nocapture; cargo test --manifest-path rust/Cargo.toml -p rusty-claude-cli --test output_format_contract -- --nocapture

Not-tested: cargo clippy --manifest-path rust/Cargo.toml -p rusty-claude-cli --all-targets -- -D warnings fails on pre-existing runtime clippy warnings in compact.rs, file_ops.rs, policy_engine.rs, sandbox.rs before reaching changed CLI checks.
2026-05-14 17:52:41 +09:00
bellman
aec291caab omx(team): auto-checkpoint worker-4 [unknown] 2026-05-14 17:51:53 +09:00
bellman
43b182882a Lock doctor JSON boot preflight contract
Constraint: G003 boot/session work adds a structured doctor boot-preflight check that must be visible in JSON output.
Rejected: reducing the doctor check count back to six | boot preflight is an explicit G003 acceptance surface.
Confidence: high
Scope-risk: narrow
Directive: Keep doctor/status JSON contract tests aligned with boot_preflight schema fields when extending preflight diagnostics.
Tested: git diff --check; cargo fmt --manifest-path rust/Cargo.toml --all -- --check; cargo test --manifest-path rust/Cargo.toml -p runtime trusted_roots -- --nocapture; cargo test --manifest-path rust/Cargo.toml -p runtime startup -- --nocapture; cargo test --manifest-path rust/Cargo.toml -p runtime worker_boot -- --nocapture; cargo test --manifest-path rust/Cargo.toml -p tools path_scope -- --nocapture; cargo test --manifest-path rust/Cargo.toml -p rusty-claude-cli --test output_format_contract -- --nocapture; cargo check --manifest-path rust/Cargo.toml --workspace
Not-tested: full cargo test --workspace remains deferred during active G003 team reconciliation.

Co-authored-by: OmX <omx@oh-my-codex.dev>
2026-05-14 17:51:47 +09:00
bellman
307b23d27f omx(team): auto-checkpoint worker-4 [unknown] 2026-05-14 17:50:36 +09:00
bellman
8c11dd16f4 task: preserve startup no-evidence timestamp evidence
Lock the startup-no-evidence contract so prompt timestamps remain the original send time while lifecycle and pane timestamps prove timeout ordering.

Constraint: task 4 scope limited changes to runtime worker boot/session/startup modules and tests; .omx/ultragoal not mutated.

Rejected: CLI-surface changes | runtime evidence contract already exposes the typed worker.startup_no_evidence payload.

Confidence: high

Scope-risk: narrow

Directive: Keep startup timeout evidence timestamps stable across later lifecycle observations.

Tested: cargo test -p runtime worker_boot -- --nocapture; cargo check --workspace

Not-tested: cargo clippy -p runtime --tests -- -D warnings is blocked by pre-existing runtime warnings in compact.rs, file_ops.rs, policy_engine.rs, and sandbox.rs.
2026-05-14 17:50:33 +09:00
bellman
2012718749 Map G003 boot session verification
Document the current G003 worker boot, trust, session-control, and preflight verification surfaces so leader integration can sequence worker-owned patches without mutating Ultragoal state.

Constraint: Task 2 is audit-only/coordination; no .omx/ultragoal mutation and no shared implementation/test edits.
Rejected: Fixing clippy warnings in runtime integration tests | outside audit-only scope and owned by integration cleanup.
Confidence: high
Scope-risk: narrow
Directive: Keep this map updated when G003 worker splits or verification commands change.
Tested: ../scripts/fmt.sh --check; cargo test -p runtime worker_boot -- --nocapture; cargo test -p tools worker_ -- --nocapture; cargo check -p runtime -p tools -p commands
Not-tested: cargo clippy -p runtime -p tools -p commands --all-targets --no-deps -- -D warnings fails on pre-existing runtime integration_tests duration_suboptimal_units warnings.
2026-05-14 17:50:30 +09:00
bellman
79d3b809f9 omx(team): auto-checkpoint worker-4 [unknown] 2026-05-14 17:46:16 +09:00
bellman
9ec4d8398e omx(team): auto-checkpoint worker-3 [unknown] 2026-05-14 17:46:13 +09:00
bellman
5f45740408 omx(team): auto-checkpoint worker-2 [unknown] 2026-05-14 17:46:10 +09:00
bellman
675d9ddc78 Harden workspace path classification
Canonicalize absolute shell path operands before comparing them with the workspace root so symlink-expanded reads cannot be downgraded under workspace-write enforcement. Also resolves local clippy findings in the touched tools crate so targeted linting can run cleanly.

Constraint: Task 1 scope is workspace/path scope enforcement only; do not mutate .omx/ultragoal.
Rejected: Editing shared path-scope regression tests | worker-3 owns that test coverage and the current tests already prove the contract.
Confidence: high
Scope-risk: narrow
Directive: Keep shell/file permission classification canonical-path based before permitting workspace-write execution.
Tested: ../scripts/fmt.sh --check; cargo test -p tools --test path_scope_enforcement -- --nocapture; cargo test -p tools given_workspace_write_enforcer_when_bash -- --nocapture; cargo check -p tools; cargo clippy -p tools --all-targets --no-deps -- -D warnings
Not-tested: Full workspace clippy still has known unrelated runtime crate warnings outside this task scope.
2026-05-14 17:46:07 +09:00
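The canonicalize-before-compare rule this commit enforces can be sketched as follows; `is_inside_workspace` is a hypothetical helper, not the tools crate's actual function. Canonicalizing both sides resolves symlinks, so a path that textually sits under the root but links outside it fails the containment check.

```rust
use std::fs;
use std::path::Path;

/// Illustrative containment check: canonicalize both the workspace
/// root and the operand before comparing, so a symlink pointing
/// outside the root cannot pass a naive string-prefix test.
/// Paths that fail to canonicalize are treated as out of scope.
fn is_inside_workspace(root: &Path, operand: &Path) -> bool {
    let root = match fs::canonicalize(root) {
        Ok(p) => p,
        Err(_) => return false,
    };
    let operand = match fs::canonicalize(operand) {
        Ok(p) => p,
        Err(_) => return false,
    };
    // starts_with compares whole path components, not raw bytes,
    // so /tmp-evil does not count as inside /tmp.
    operand.starts_with(&root)
}

fn main() {
    let root = std::env::temp_dir();
    assert!(is_inside_workspace(&root, &root));
    assert!(!is_inside_workspace(&root, Path::new("/")));
    println!("canonical containment check holds");
}
```

Failing closed on canonicalization errors is a deliberate choice here: a path that cannot be resolved should not be granted workspace-write treatment.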
bellman
087e31d190 Keep G003 integrated runtime tests compiling
Constraint: G003 worker outputs added config and startup evidence fields that must compile under focused runtime validation before leader push.
Rejected: pushing auto-checkpoints without leader validation | integrated tests initially failed to compile due to missing imports and stale StartupEvidenceBundle fixtures.
Confidence: high
Scope-risk: narrow
Directive: When extending StartupEvidenceBundle, update all in-crate fixtures in the same change.
Tested: git diff --check; cargo fmt --manifest-path rust/Cargo.toml --all -- --check; cargo test --manifest-path rust/Cargo.toml -p runtime trusted_roots -- --nocapture; cargo test --manifest-path rust/Cargo.toml -p runtime startup -- --nocapture; cargo test --manifest-path rust/Cargo.toml -p runtime worker_boot -- --nocapture; cargo test --manifest-path rust/Cargo.toml -p tools path_scope -- --nocapture; cargo check --manifest-path rust/Cargo.toml --workspace
Not-tested: full cargo test --workspace remains deferred during active G003 team work.

Co-authored-by: OmX <omx@oh-my-codex.dev>
2026-05-14 17:45:46 +09:00
bellman
a6ee51baab omx(team): auto-checkpoint worker-3 [unknown] 2026-05-14 17:40:32 +09:00
bellman
6df60a4683 omx(team): auto-checkpoint worker-2 [unknown] 2026-05-14 17:40:29 +09:00
bellman
3cf0db8f79 omx(team): merge worker-1 2026-05-14 17:38:59 +09:00
bellman
964458ad4a omx(team): auto-checkpoint worker-1 [1] 2026-05-14 17:38:59 +09:00
bellman
d87c3e6400 Make roadmap PR intake durable for CC2
Constraint: User explicitly requested all roadmap PRs be merged when correct and mapped into the Ultragoal backlog when not immediately mergeable.
Rejected: leaving the PR inventory as ignored OMX-only state | roadmap merge obligations need a tracked handoff for later G011/G012 gates.
Confidence: high
Scope-risk: narrow
Directive: Refresh this intake after each roadmap PR merge batch and regenerate the CC2 board if ROADMAP.md changes.
Tested: gh pr list --state open --search roadmap in:title --json number,title,author,mergeable,isDraft,statusCheckRollup,headRefName,baseRefName,updatedAt,url --limit 200
Not-tested: individual PR mergeability was not forced in this intake commit.

Co-authored-by: OmX <omx@oh-my-codex.dev>
2026-05-14 17:36:15 +09:00
bellman
ac888623a8 Merge commit '3a8ce832341884322ede0855b150e3ceebe9180d' 2026-05-14 17:34:07 +09:00
bellman
3a8ce83234 Deny scoped file reads before tool dispatch
Worker-3's path-scope regression showed outside read_file paths were blocked by the workspace wrapper after dispatch instead of by the permission enforcer. File, glob, and grep tools now classify path scope before dispatch and require danger-full-access for paths that resolve outside the current workspace.

Constraint: G002-alpha-security requires permission-mode event/status visibility for blocked file and shell paths

Rejected: relying only on runtime wrapper errors | it hides the active permission-mode denial contract from callers

Confidence: high

Scope-risk: narrow

Directive: keep path-sensitive tool permission classification aligned with workspace wrapper resolution

Tested: cargo test -p tools --test path_scope_enforcement --manifest-path rust/Cargo.toml --quiet; cargo test -p tools given_workspace_write_enforcer_when_bash --manifest-path rust/Cargo.toml --quiet; cargo check --manifest-path rust/Cargo.toml --workspace; cargo fmt --all --manifest-path rust/Cargo.toml -- --check

Not-tested: full workspace test suite after this small permission-classification follow-up

Co-authored-by: OmX <omx@oh-my-codex.dev>
2026-05-14 17:34:03 +09:00
bellman
37b2b75287 Keep G002 path-scope tests aligned with enforced denials
Constraint: G002-alpha-security requires direct file-tool escapes to fail before reads while accepting the canonical runtime error text.
Rejected: weakening the test to accept successful reads | the verified behavior denies the escape and only the assertion vocabulary was stale.
Confidence: high
Scope-risk: narrow
Directive: Keep path-scope tests asserting denial semantics, not a single legacy wording.
Tested: cargo fmt --manifest-path rust/Cargo.toml --all -- --check; cargo test --manifest-path rust/Cargo.toml -p tools path_scope -- --nocapture; cargo test --manifest-path rust/Cargo.toml -p tools --test path_scope_enforcement -- --nocapture; cargo test --manifest-path rust/Cargo.toml -p runtime workspace_ -- --nocapture; cargo test --manifest-path rust/Cargo.toml -p rusty-claude-cli --test output_format_contract -- --nocapture; python3 -m pytest tests/test_security_scope.py -q; cargo check --manifest-path rust/Cargo.toml --workspace; git diff --check
Not-tested: full cargo test --workspace due to a known unrelated session_lifecycle_prefers_running_process_over_idle_shell failure.

Co-authored-by: OmX <omx@oh-my-codex.dev>
2026-05-14 17:33:47 +09:00
bellman
f2dc615a8a Prevent workspace escape through tool path resolution
File and shell tool dispatch now resolves path-sensitive operations through workspace-scoped wrappers so direct paths, globs, symlinks, shell expansion, and Windows absolute path probes fail before execution when they leave the workspace.

Constraint: G002-alpha-security requires alpha-blocking workspace/path scope enforcement without mutating .omx/ultragoal

Rejected: string-prefix only checks | they miss canonical symlink and glob expansion escapes

Confidence: high

Scope-risk: moderate

Directive: keep new file/shell tool entrypoints wired through workspace-aware wrappers before dispatch

Tested: python3 -m unittest discover -s tests -v; python3 -m compileall -q src tests; cargo test -p runtime workspace --manifest-path rust/Cargo.toml --quiet; cargo test -p tools workspace --manifest-path rust/Cargo.toml --quiet; cargo test -p tools given_workspace_write_enforcer_when_bash --manifest-path rust/Cargo.toml --quiet; cargo test -p tools file_tools_reject --manifest-path rust/Cargo.toml --quiet; cargo fmt --all --manifest-path rust/Cargo.toml -- --check; cargo check --manifest-path rust/Cargo.toml --workspace

Not-tested: full unfiltered cargo test workspace due to task-time constraints; targeted runtime/tools workspace security tests and full cargo check passed

Co-authored-by: OmX <omx@oh-my-codex.dev>
2026-05-14 17:30:57 +09:00
bellman
9bc55f9946 omx(team): auto-checkpoint worker-1 [1] 2026-05-14 17:30:54 +09:00
bellman
180ebb3b02 Reject Windows absolute PowerShell paths from workspace scope
The G002 security gate caught that PowerShell path classification still treated Windows absolute paths as workspace-relative on POSIX, so workspace scope now rejects those tokens before permission downgrades.

Constraint: G002-alpha-security requires workspace/path scope across Windows path cases as well as direct paths, symlinks, globbing, shell expansion, and worktrees.

Rejected: Relying on PathBuf::is_absolute for Windows syntax on POSIX | it treats C:\ and UNC-like tokens as relative and weakens permission classification.

Confidence: high

Scope-risk: narrow

Directive: Keep bash and PowerShell path classifiers aligned whenever new shell syntax is admitted.

Tested: cargo test --manifest-path rust/Cargo.toml -p tools path_scope -- --nocapture; cargo test --manifest-path rust/Cargo.toml -p tools --test path_scope_enforcement -- --nocapture; cargo test --manifest-path rust/Cargo.toml -p runtime workspace_ -- --nocapture; python3 -m pytest tests/test_security_scope.py -q; cargo check --manifest-path rust/Cargo.toml --workspace.

Not-tested: Full cargo test --workspace still has a known unrelated rusty-claude-cli session-lifecycle failure reported by workers.

Co-authored-by: OmX <omx@oh-my-codex.dev>
2026-05-14 17:29:57 +09:00
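The `Path::is_absolute` pitfall this commit describes is easy to demonstrate: on POSIX, Windows syntax like `C:\` or a UNC prefix is treated as a relative path. `is_windows_absolute` below is an illustrative classifier, not the crate's actual helper; it shows the explicit prefix checks such a guard needs.

```rust
/// Illustrative classifier: on POSIX, std::path::Path::is_absolute
/// returns false for Windows syntax like `C:\Users` or `\\server\share`,
/// so a workspace-scope check must reject those tokens explicitly
/// instead of letting them classify as workspace-relative.
fn is_windows_absolute(token: &str) -> bool {
    let bytes = token.as_bytes();
    // Drive-letter form: `C:\...` or `C:/...`
    let drive = bytes.len() >= 3
        && bytes[0].is_ascii_alphabetic()
        && bytes[1] == b':'
        && (bytes[2] == b'\\' || bytes[2] == b'/');
    // UNC form: `\\server\share`
    let unc = token.starts_with("\\\\");
    drive || unc
}

fn main() {
    assert!(is_windows_absolute("C:\\Users\\victim"));
    assert!(is_windows_absolute("\\\\server\\share"));
    assert!(!is_windows_absolute("src/lib.rs"));
    // The pitfall itself: on POSIX this token reads as relative.
    assert!(!std::path::Path::new("C:\\Users").is_absolute() || cfg!(windows));
    println!("windows-path tokens rejected from workspace scope");
}
```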
bellman
534442b8da Document G002 security verification ownership for integration
Constraint: Task 5 is reporting/map ownership only; worker-1 owns implementation changes and shared security/path tests.
Rejected: Editing runtime enforcement failures from this lane | shared implementation/test ownership belongs to other workers unless re-scoped.
Confidence: high
Scope-risk: narrow
Directive: Keep this artifact synchronized with exact verification output before leader aggregation.
Tested: python3 scripts/validate_cc2_board.py --board .omx/cc2/board.json; python3 .omx/cc2/validate_issue_parity_intake.py .omx/cc2/issue-parity-intake.json; scripts/fmt.sh --check; cargo check --workspace; targeted runtime permission/path tests; mock parity harness.
Not-tested: Full clippy and cargo test --workspace are not green due to pre-existing/shared runtime/CLI failures documented in the artifact.
2026-05-14 17:29:33 +09:00
bellman
9c2ebb4f39 task: prefer tests before fixes
Add focused regression coverage for path-scope enforcement before implementation changes land, preserving worker-1 ownership of the fix path.

Constraint: task 4 requested tests-first coverage for direct path, symlink, glob/shell expansion, worktree, and Windows-style path cases.
Rejected: implementation edits in enforcement code | worker-1 owns minimal implementation changes.
Confidence: high
Scope-risk: narrow
Directive: Keep these regressions red until path canonicalization/enforcement blocks outside-workspace reads before dispatch.
Tested: cargo fmt -p tools -- --check; cargo check -p tools; cargo clippy -p tools --test path_scope_enforcement (warnings only, pre-existing); cargo test -p tools --test path_scope_enforcement (expected red: 4 failing path-scope gaps, 2 passing baselines).
Not-tested: Full workspace test suite because the new regression tests intentionally fail until implementation lands.
2026-05-14 17:29:31 +09:00
bellman
2c48400293 omx(team): auto-checkpoint worker-3 [4] 2026-05-14 17:27:21 +09:00
bellman
713ca7aee4 omx(team): auto-checkpoint worker-1 [1] 2026-05-14 17:27:18 +09:00
bellman
02b591ac64 omx(team): auto-checkpoint worker-3 [4] 2026-05-14 17:22:09 +09:00
bellman
f789525839 omx(team): auto-checkpoint worker-1 [1] 2026-05-14 17:22:06 +09:00
bellman
b1d8a66515 Gate CC2 completion on PR and issue resolution
The Ultragoal now has an explicit repository-operations gate so final completion cannot rely only on roadmap implementation while correct PRs or resolvable issues remain unhandled.

Constraint: The user explicitly added that all PRs should be merged and all issues resolved when they are correct and resolvable.

Rejected: Treating the existing roadmap board as sufficient | it did not require per-PR and per-issue final triage evidence.

Confidence: high

Scope-risk: narrow

Directive: Refresh GitHub PR and issue snapshots at the final gate; do not merge unsafe or incorrect PRs merely to reduce counts.

Tested: gh auth status; gh pr list --state open --limit 200 captured 50 records; gh issue list --state open --limit 1000 captured 1000 records.

Not-tested: Full PR/issue triage is deferred to the dedicated gate and later streams.

Co-authored-by: OmX <omx@oh-my-codex.dev>
2026-05-14 17:21:21 +09:00
bellman
ad9e0234a9 omx(team): auto-checkpoint worker-1 [1] 2026-05-14 17:19:25 +09:00
bellman
145413d624 omx(team): auto-checkpoint worker-4 [5] 2026-05-14 17:19:01 +09:00
bellman
17da2964d7 omx(team): auto-checkpoint worker-3 [4] 2026-05-14 17:18:58 +09:00
bellman
9ab569e626 omx(team): auto-checkpoint worker-2 [3] 2026-05-14 17:18:55 +09:00
bellman
4af5664ff8 omx(team): auto-checkpoint worker-1 [1] 2026-05-14 17:18:52 +09:00
bellman
1864ce38ad omx(team): auto-checkpoint worker-3 [4] 2026-05-14 17:18:06 +09:00
bellman
74cc590407 omx(team): auto-checkpoint worker-1 [1] 2026-05-14 17:18:03 +09:00
bellman
a4b20ea34d omx(team): merge worker-3 2026-05-14 17:17:12 +09:00
bellman
8d0cee46d5 omx(team): auto-checkpoint worker-3 [4] 2026-05-14 17:17:11 +09:00
bellman
45b43b5a96 Make the CC2 board schema executable for G001
The canonical Stream 0 board must be machine-checkable before Ultragoal can checkpoint G001, so the generated board and validation wrapper now share the same rich board schema and Markdown renderer.

Constraint: G001 requires .omx/cc2/board.json and .omx/cc2/board.md to prove all frozen ROADMAP.md headings and ordered actions are mapped.

Rejected: Relying on worker-reported validation alone | leader-side validation found schema drift between the status-only and lifecycle_status board entrypoints.

Confidence: high

Scope-risk: narrow

Directive: Keep scripts/generate_cc2_board.py, scripts/validate_cc2_board.py, scripts/cc2_board.py, and .omx/cc2/render_board_md.py aligned on board schema changes.

Tested: python3 scripts/generate_cc2_board.py; python3 scripts/validate_cc2_board.py; python3 scripts/cc2_board.py validate; python3 .omx/cc2/validate_issue_parity_intake.py; python3 .omx/cc2/render_board_md.py .omx/cc2/board.json .omx/cc2/board.md --check; python3 -m py_compile scripts/generate_cc2_board.py scripts/validate_cc2_board.py scripts/cc2_board.py .omx/cc2/validate_issue_parity_intake.py .omx/cc2/render_board_md.py; cargo check --manifest-path rust/Cargo.toml --workspace.

Not-tested: Full cargo test workspace has unrelated existing failures reported by workers in session lifecycle/permission-mode tests.

Co-authored-by: OmX <omx@oh-my-codex.dev>
2026-05-14 17:14:07 +09:00
bellman
d15268e2cc Create a canonical CC2 board so every frozen ROADMAP heading is verifiably mapped
Derive the board from ROADMAP.md heading anchors and record the required local research and adaptive-plan sources as immutable manifest metadata. Add a validation command that fails if any ROADMAP heading lacks a board item or required lifecycle fields.

Constraint: Workers must not mutate .omx/ultragoal; board outputs live under .omx/cc2 and source research is read-only.
Rejected: Hand-maintained board rows | too easy to leave ROADMAP headings unmapped and hard to validate.
Confidence: high
Scope-risk: narrow
Directive: Regenerate with scripts/cc2_board.py after ROADMAP.md changes, then run the validate command before checkpointing G001.
Tested: python3 -m py_compile scripts/cc2_board.py; python3 scripts/cc2_board.py validate; cargo check --workspace; cargo fmt --all -- --check
Not-tested: cargo test --workspace has unrelated failing rusty-claude-cli lifecycle assertion tests::session_lifecycle_prefers_running_process_over_idle_shell.
2026-05-14 17:08:52 +09:00
bellman
424825f8cb task: G001 human board and docs rendering
Render the canonical CC2 board into a human-readable review artifact while preserving worker-1's generated schema as the source of truth.

Constraint: leader owns Ultragoal state; .omx/ultragoal was not mutated.
Rejected: hand-editing board.md without a renderer | it would make coverage drift harder to validate.
Confidence: high
Scope-risk: narrow
Directive: regenerate board.md with .omx/cc2/render_board_md.py after board.json changes.
Tested: python3 .omx/cc2/render_board_md.py .omx/cc2/board.json .omx/cc2/board.md --check; python3 -m py_compile .omx/cc2/render_board_md.py; cargo check --workspace; cargo test --workspace (fails one unrelated lifecycle test).
Not-tested: cargo test --workspace is not fully green because tests::session_lifecycle_prefers_running_process_over_idle_shell fails persistently in rusty-claude-cli without touching Rust sources.
2026-05-14 17:08:49 +09:00
bellman
07dad88e8c Classify issue and parity intake for CC2 board integration
Constraint: Task 3 scope is limited to G001 issue/parity intake and must not mutate .omx/ultragoal
Rejected: Editing canonical board.json directly | worker-1 owns Task 2 canonical board output and coordination requires a mergeable fragment
Confidence: high
Scope-risk: narrow
Directive: Integrate these rows into .omx/cc2/board.json and board.md without reclassifying the frozen evidence unless the source snapshot changes
Tested: python3 .omx/cc2/validate_issue_parity_intake.py; python3 -m py_compile .omx/cc2/validate_issue_parity_intake.py; python3 -m json.tool .omx/cc2/issue-parity-intake.json; cargo check --manifest-path rust/Cargo.toml --workspace
Not-tested: cargo test --manifest-path rust/Cargo.toml --workspace has 2 pre-existing/environmental failures in rusty-claude-cli tests unrelated to .omx/cc2 intake files
2026-05-14 17:07:43 +09:00
bellman
5c77896dec omx(team): auto-checkpoint worker-1 [1] 2026-05-14 17:07:40 +09:00
bellman
74bbf4b36f omx(team): auto-checkpoint worker-4 [unknown] 2026-05-14 17:00:14 +09:00
bellman
481585f865 omx(team): auto-checkpoint worker-1 [1] 2026-05-14 17:00:11 +09:00
bellman
c6e2a7dee4 omx(team): merge worker-1 2026-05-14 16:58:43 +09:00
bellman
83116555ff omx(team): auto-checkpoint worker-1 [1] 2026-05-14 16:58:43 +09:00
YeonGyu-Kim
8f55870dad docs(roadmap): add #448 — sandbox JSON has contradictory enabled/supported/active flags
Pinpoint: 'enabled:true, supported:false' is semantic nonsense.
'filesystem_active:true allowed_mounts:[]' contradicts 'workspace-only'.
'active:false filesystem_active:true' has no documented aggregation rule.
Renaming 'enabled' to 'requested' and exposing 'active_components:[]'
would surface real isolation state to automation.
2026-05-11 23:32:30 +09:00
YeonGyu-Kim
7244a82b36 docs(roadmap): add #447 — JSON error envelopes go to stderr; stdout empty on error
Pinpoint: claw --no-such-flag --output-format json writes the JSON
envelope to stderr (115 bytes) while stdout is 0 bytes. Same for
missing_credentials, session_load_failed, invalid_model_syntax —
all 4 error kinds tested put JSON on stderr. Breaks the standard
'output=$(cmd --output-format json)' pattern. Every major CLI
(kubectl/gh/aws/jq/terraform -json) puts JSON on stdout regardless
of success/failure. Sibling: deprecation warnings precede the JSON
envelope on stderr, breaking 'tail -1 | jq' parsing.
2026-05-11 23:01:46 +09:00
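The convention this entry argues for (kubectl/gh/aws style: JSON on stdout regardless of success or failure, human diagnostics on stderr, failure signaled by exit code) can be sketched as below. `error_envelope` is a hypothetical helper and the JSON is hand-built to keep the sketch dependency-free; a real CLI would use a serializer and proper escaping.

```rust
/// Illustrative error envelope builder (hypothetical name): the
/// machine-readable payload that belongs on stdout even on failure,
/// so `output=$(cmd --output-format json)` always captures it.
fn error_envelope(kind: &str, message: &str) -> String {
    // Hand-built JSON for the sketch; real code would serialize
    // and escape properly.
    format!("{{\"ok\":false,\"kind\":\"{}\",\"message\":\"{}\"}}", kind, message)
}

fn main() {
    // Envelope on stdout: scripts capture this.
    println!("{}", error_envelope("invalid_flag", "unknown flag: --no-such-flag"));
    // Diagnostics on stderr: humans see this, pipelines ignore it.
    eprintln!("error: unknown flag: --no-such-flag");
    // A real CLI would also exit nonzero here to signal failure.
}
```

Keeping deprecation warnings on stderr (and only there) also fixes the sibling complaint about `tail -1 | jq` parsing.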
YeonGyu-Kim
5ab969e7ae docs(roadmap): add #446 — config loaded 2-3x per invocation; identical deprecation warnings spam
Pinpoint: status emits 3x deprecation warnings, doctor 2x, mcp 2x,
version 0x. Each duplicate is byte-identical (same file/line/field).
Config-load pipeline is fanned out across commands without a cache.
15 redundant disk reads in worst case. Real warnings drowned out by
copies. Count fluctuates between HEADs (3 at 6c0c305a, 4 at d7dbe951,
3 at 5a4cc506) — no architectural fix landed.
2026-05-11 22:33:34 +09:00
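A process-wide cache such as `std::sync::OnceLock` is one way to collapse the fan-out this entry describes: every command path goes through the same cached load, so the disk read and its deprecation warnings happen exactly once. `Config` and the load counter below are illustrative, not claw-code's actual types.

```rust
use std::sync::OnceLock;

/// Illustrative config type; real config would hold parsed fields.
#[derive(Debug)]
struct Config {
    deprecation_warnings: Vec<String>,
}

static CONFIG: OnceLock<Config> = OnceLock::new();

/// Stands in for the uncached load path: each call would hit disk
/// and re-emit identical deprecation warnings.
fn load_config_uncached(load_count: &mut u32) -> Config {
    *load_count += 1;
    Config {
        deprecation_warnings: vec!["field `foo` is deprecated".into()],
    }
}

fn main() {
    let mut loads = 0;
    // Simulate three commands in one invocation all needing config:
    // only the first call actually loads.
    for _ in 0..3 {
        CONFIG.get_or_init(|| load_config_uncached(&mut loads));
    }
    assert_eq!(loads, 1);
    assert_eq!(CONFIG.get().unwrap().deprecation_warnings.len(), 1);
    println!("config loaded once, warnings emitted once");
}
```

The same pattern also stabilizes the warning count across builds, which the entry notes currently fluctuates between HEADs.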
YeonGyu-Kim
5a4cc506d5 docs(roadmap): add #445 — skill name-vs-dirname mismatch silently accepted; sibling silent drops
Pinpoint: .claw/skills/wrong-name/SKILL.md with frontmatter name:
actually-different-name silently loads as the frontmatter name. Users
referencing by dir name get skill_not_found while skills list shows
the frontmatter name. Siblings: subdir without SKILL.md silently
skipped; loose .md at skills root dropped; no --scope filter for
workspace vs user merge.
2026-05-11 22:01:12 +09:00
YeonGyu-Kim
9e1eafd02d docs(roadmap): add #444 — no broad-cwd guard for --resume; ROOT/HOME silently writable
Pinpoint: claw --resume latest from / hits 'Read-only file system'
(OS error 30) — only saved by root being read-only. From /tmp or
$HOME, silently creates .claw/sessions/<fingerprint>/ droppings.
Exit code 0 on the read-only-FS error. Stale /tmp/.claw from 13:31
dogfood still present at 21:30 (10 hours, 6+ HEADs later) — #435's
deferred-creation fix hasn't landed. The broad-cwd guard only covers
shorthand prompt path, not resume/status/doctor.
2026-05-11 21:31:33 +09:00
YeonGyu-Kim
b2048856f3 docs(roadmap): add #443 — acp serve exits 0 with status:discoverability_only; #413 still unfixed
Pinpoint: claw acp serve --output-format json exits 0 with explicit
'not implemented' message + supported:false. Automation gating on $?
sees success from a no-op. ROADMAP #413's internal-tracking leak
(discoverability_tracking, tracking) confirmed UNFIXED 11 days later.
Sibling: claw acp status returns kind:unknown (14th catch-all occurrence).
2026-05-11 21:01:24 +09:00
YeonGyu-Kim
19aaf9d05e docs(roadmap): add #442 — agents require TOML format, .md files silently dropped
Pinpoint: claw-code only loads .toml files from .claw/agents/. Claude
Code uses .md with YAML frontmatter — schema divergence. Source code
at commands/src/lib.rs:3378 silently skips non-.toml extensions with
no warning. Help text omits the format requirement. Same silent-drop
pattern as #440 (MCP) and #441 (hooks). Also: .claude/agents/ never
discovered; required fields undocumented; no scaffolding command.
2026-05-11 20:31:50 +09:00
YeonGyu-Kim
8499599b70 docs(roadmap): add #441 — hooks schema diverges from Claude Code documented format
Pinpoint: claw-code expects {hooks:{PreToolUse:['cmd-string']}} while
Claude Code docs specify {hooks:{PreToolUse:[{matcher,hooks:[{type,command}]}]}}.
Users copy-pasting from Claude Code docs get the cryptic 'must be an
array of strings, got an array' error. PR #3000 already addresses
this but has merge conflicts and is unmerged. Siblings: unknown hook event
rejects entire hooks config (#440 pattern); first-error-only halting;
kind:unknown catch-all (13th occurrence).
2026-05-11 20:01:33 +09:00
YeonGyu-Kim
86ff83c233 docs(roadmap): add #440 — one invalid mcpServers entry blocks ALL valid servers
Pinpoint: .claw.json with one valid mcpServers entry + one missing-command
entry → mcp list returns configured_servers:0, servers:[]. The valid
server is silently dropped because parser halts on first error.
Five invalid entries in the same file produce only ONE error message
(first one); user must iterate N times to discover all problems.
Violates ROADMAP product principle #5 (partial success first-class).
2026-05-11 19:31:23 +09:00
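The partial-success shape this entry asks for can be sketched in a few lines; the `name=command` pairs and the loop are illustrative stand-ins, not the real mcpServers parser.

```shell
# Collect every invalid entry instead of halting on the first error,
# so valid servers still load. Entries are simulated name=command pairs;
# a sketch of the fix shape, not claw's actual config parsing.
errors=0
for entry in "good=./server" "missing-command=" "other-good=./other"; do
  name=${entry%%=*}
  cmd=${entry#*=}
  if [ -z "$cmd" ]; then
    echo "error: mcpServers.$name: missing command" >&2
    errors=$((errors + 1))
    continue                      # report and keep going
  fi
  echo "loaded: $name"
done
echo "invalid entries: $errors"   # user sees ALL problems in one pass
```

Both valid servers load and the single bad entry is reported once, instead of `configured_servers:0`.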
YeonGyu-Kim
bd126905db docs(roadmap): add #439 — ancestor CLAUDE.md walk causes silent context bleed
Pinpoint: from /tmp/proj/sub/deep, claw walks ALL ancestors loading
every CLAUDE.md up to $HOME boundary. Stale /tmp/CLAUDE.md silently
bleeds into every workspace under /tmp/*. No --no-parent-memory flag,
no .claw-root boundary marker, no per-file attribution in status JSON.
Git-root is NOT a discovery boundary either.
2026-05-11 19:01:50 +09:00
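A bounded ancestor walk like the one this entry implies can be sketched as follows; `.claw-root` is the hypothetical boundary marker the report proposes, not a shipped feature.

```shell
# Ancestor CLAUDE.md walk with an explicit stop marker.
# '.claw-root' is a proposed (hypothetical) boundary file.
dir=$PWD
while [ "$dir" != "/" ] && [ "$dir" != "${HOME:-/}" ]; do
  if [ -f "$dir/CLAUDE.md" ]; then
    echo "would load: $dir/CLAUDE.md"   # per-file attribution for status output
  fi
  if [ -f "$dir/.claw-root" ]; then
    break                               # boundary marker halts the walk
  fi
  dir=$(dirname "$dir")
done
```

Without the marker (or a git-root check), a stale `/tmp/CLAUDE.md` bleeds into every workspace under `/tmp/*` exactly as reported.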
YeonGyu-Kim
f4a9674086 docs(roadmap): add #438 — memory file discovery only finds CLAUDE.md, ignores AGENTS.md + CLAW.md
Pinpoint: claw-code reads CLAUDE.md (inherited from upstream Claude Code)
but silently ignores AGENTS.md (industry convention used by OpenCode/
Codex/Aider/Cursor/Continue.dev) and CLAW.md (project's own brand name).
Users with mixed-tool workflows maintaining a shared AGENTS.md see
memory_file_count stay low with no warning.
2026-05-11 18:31:17 +09:00
YeonGyu-Kim
d3a982dda9 docs(roadmap): add #437 — version JSON missing is_dirty/branch/commit_date/rustc; git_sha truncated
Pinpoint: claw version --output-format json omits is_dirty, branch,
commit_date, commit_timestamp, rustc_version. git_sha is 7-char short
form (collision risk + no git rev-parse round-trip). executable_path
leaks compile-host path /tmp/claw-dog-0530/... Sibling: prose 'message'
field still duplicates structured data (#391 supposedly fixed).
2026-05-11 18:00:57 +09:00
YeonGyu-Kim
8cf628a53c docs(roadmap): add #436 — init template sets permissions.defaultMode:dontAsk + empty .claw/
Pinpoint: claw init creates .claw.json with permissions.defaultMode:
dontAsk (disabled permission prompts by default) — compounds with #428.
Sibling: .claw/ artifact created as an empty directory (no
settings.json template inside). When .claw/ pre-exists, init skips
the entire artifact without materializing expected sub-content.
2026-05-11 17:31:17 +09:00
YeonGyu-Kim
b8f989b605 docs(roadmap): add #435 — --resume failure: exit 0 text/1 json + creates partition dir
Pinpoint: claw --resume latest on fresh workspace exits 0 in text mode
but 1 in JSON mode (same input, different outcome). Side effect:
.claw/sessions/<fingerprint>/ created on disk despite failure. Siblings:
claw --compact alone drops into REPL; claw --compact 'hello' rejects
shorthand prompt; kind:unknown catch-all yet again.
2026-05-11 17:01:30 +09:00
YeonGyu-Kim
e29010ed48 docs(roadmap): add #434 — POSIX -- separator not recognized; shorthand prompts can't start with dash
Pinpoint: claw -- 'anything' returns 'unknown option: --' with the
misleading 'Did you mean -V?' hint. Every other major CLI (cargo,
git, gh, kubectl, grep) honors POSIX -- as end-of-flags. Shorthand
prompt mode cannot accept any TEXT starting with - or --, forcing
users to remember the explicit 'prompt' verb.
2026-05-11 16:31:21 +09:00
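The POSIX end-of-flags convention requested above fits in a small hand-rolled loop; this is a generic sketch, not claw's actual argument parser.

```shell
# Generic end-of-flags parsing: everything after '--' is positional,
# so prompt text starting with '-' survives. Not claw's real parser.
parse() {
  while [ $# -gt 0 ]; do
    case "$1" in
      --) shift; break ;;            # POSIX separator: stop flag parsing here
      -*) echo "flag: $1"; shift ;;  # would normally dispatch the flag
      *)  break ;;                   # first positional also ends flag parsing
    esac
  done
  printf 'prompt: %s\n' "$*"
}

parse -- '--looks-like-a-flag but is a prompt'
# prompt: --looks-like-a-flag but is a prompt
```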
YeonGyu-Kim
0e5f695844 docs(roadmap): add #433 — repeated --output-format silent override + case-sensitive enum
Pinpoint: --output-format json --output-format text silently picks
text with no warning; scripts that compose flags get the wrong format.
Siblings: JSON (uppercase) rejected as kind:unknown; CLAW_OUTPUT_FORMAT
env silently ignored; RUST_LOG/CLAW_LOG undocumented.
2026-05-11 16:01:05 +09:00
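A repeat-flag warning with last-wins semantics, as the entry above suggests, is a small parser change; this sketch assumes nothing about claw's real flag handling.

```shell
# Last-wins with an explicit warning when --output-format repeats.
# A sketch of the fix shape, not claw's actual parser.
fmt=""
seen=0
set -- --output-format json --output-format text
while [ $# -gt 0 ]; do
  case "$1" in
    --output-format)
      if [ "$seen" -eq 1 ]; then
        echo "warning: --output-format repeated; '$fmt' overridden" >&2
      fi
      seen=1
      fmt="$2"
      shift 2
      ;;
    *) shift ;;
  esac
done
echo "format: $fmt"   # format: text, with a warning on stderr
```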
YeonGyu-Kim
ce39d5c598 docs(roadmap): add #432 — --allowedTools naming inconsistency + missing-value parser bug
Pinpoint: tool-name registry mixes snake_case/PascalCase/UPPERCASE
in single error message; undocumented CamelCase->snake_case alias map
(Read->read_file etc.); missing flag value consumes next positional
(subcommand swallowed). kind:unknown catch-all yet again.
2026-05-11 15:31:25 +09:00
YeonGyu-Kim
fad53e2df9 docs(roadmap): add #431 — skills uninstall requires creds; install error leaks OS string
Pinpoint: claw skills uninstall <bogus> requires API creds despite
being a pure local filesystem op. Siblings: skills install <bogus>
returns raw 'No such file or directory (os error 2)' with kind:unknown;
skills install (no args) treats valid subcommand as unknown action;
agents create doesn't exist (no scaffolding command for agents).
2026-05-11 15:03:45 +09:00
YeonGyu-Kim
328fd114ff docs(roadmap): add #430 — dump-manifests requires upstream TS source; export PATH dropped
Pinpoint: dump-manifests --help advertises 'emit manifests for current
cwd' but actually requires CLAUDE_CODE_UPSTREAM env or --manifests-dir
pointing at upstream TypeScript Claude Code source. Unusable for users
without the original TS repo. Siblings: derivative-work disclosure leak,
kind drift between manifests-dir override path vs default path, export
<PATH> positional silently dropped before validation.
2026-05-11 15:01:37 +09:00
YeonGyu-Kim
075c214439 docs(roadmap): add #429 — no global --cwd flag; misleading 'Did you mean --acp' hint
Pinpoint: claw --cwd PATH rejected as unknown option globally. --cwd
exists ONLY for system-prompt subcommand. Every other major CLI
(cargo -C, git -C, npm --prefix) has global cwd override. Sibling:
'Did you mean --acp?' hint algorithm matches on first character, not
semantic category — --acp is ACP/Zed integration, unrelated to cwd.
2026-05-11 14:31:31 +09:00
YeonGyu-Kim
ec882f4c88 docs(roadmap): add #428 — default permission_mode is danger-full-access
Pinpoint: claw runs with full filesystem+network+tool access by default,
no opt-in flag, doctor stays silent. Fix shape: change default to
workspace-write, require explicit opt-in for danger-full-access, add
permissions check to doctor that warns when mode source is default.
Siblings: kind:unknown for invalid_permission_mode (typed-error
catch-all bug), --skip-permissions flag rejected (Claude Code parity).
2026-05-11 14:00:58 +09:00
YeonGyu-Kim
7204844982 docs(roadmap): add #427 — subcommand --help requires auth/config; resume hits auth gate
Pinpoint: claw resume --help, session --help, compact --help all hit
missing_credentials without producing usage. resume <bogus-id> also
requires API creds instead of local-first session_not_found lookup.
Sibling: exit code 0 on these error envelopes (parity bug from #422).
2026-05-11 13:31:38 +09:00
YeonGyu-Kim
1fecdf096b docs(roadmap): add #426 — ANTHROPIC_MODEL env bypasses invalid_model validator
Pinpoint: --model rejects 'bogus-model-xyz' as invalid_model_syntax
but ANTHROPIC_MODEL=bogus-model-xyz returns status:ok with the bogus
value. Siblings: opus alias resolves to 4-6 not 4-7 (current frontier),
CLAW_MODEL and ANTHROPIC_DEFAULT_MODEL silently ignored.
2026-05-11 13:01:08 +09:00
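The fix shape implied here is a single validator on every model source; `validate_model` and its `claude-*` pattern are invented for illustration and are not claw's real validation rules.

```shell
# One validator applied to both --model and ANTHROPIC_MODEL, so the env
# var cannot bypass what the flag already enforces. The function name and
# pattern are illustrative assumptions only.
validate_model() {
  case "$1" in
    claude-*) return 0 ;;
    *) echo "invalid_model_syntax: $1" >&2; return 1 ;;
  esac
}

model="${ANTHROPIC_MODEL:-claude-opus-4-6}"   # env and flag funnel into one path
if validate_model "$model"; then
  echo "status: ok, model: $model"
else
  echo "status: error"
fi
```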
YeonGyu-Kim
3730b459a2 docs(roadmap): add #425 — config precedence undocumented; deprecation warning 4×
Pinpoint: .claw/settings.json silently wins over .claw.json. config
--output-format json reports both loaded:true with no precedence_rank
or per-key attribution. Sibling: deprecation warning fired 3× in
#424's probe, now 4× — config load count regressing upward.
2026-05-11 12:31:16 +09:00
YeonGyu-Kim
d7dbe951ce docs(roadmap): add #424 — bare canonical model names rejected; stale 4-6 suggestion
Pinpoint from Jobdori dogfood. claw --model claude-opus-4-7 returns
invalid_model_syntax error suggesting 'claude-opus-4-6' (one model
behind). Sibling: settings.json deprecation warning repeats 3x per
status invocation (config loaded 3x).
2026-05-11 12:01:33 +09:00
YeonGyu-Kim
6c0c305a4b docs(roadmap): add #423 — claw prompt ignores stdin; kind:unknown for missing arg
Pinpoint from Jobdori dogfood. `echo X | claw prompt` returns
'prompt subcommand requires a prompt string' instead of reading stdin.
Sibling: error kind is 'unknown' not typed 'missing_argument'.
2026-05-11 11:31:12 +09:00
YeonGyu-Kim
3c563fa1dc docs(roadmap): add #422 — unknown subcommand silently sent as chat prompt
Pinpoint from Jobdori dogfood. claw <bogus> with valid creds reaches
Anthropic API as a chat message. Sibling exit-code parity bug:
api_http_error envelope exits 0 while cli_parse exits 1.
2026-05-11 11:01:09 +09:00
YeonGyu-Kim
6aa4b85c95 docs(roadmap): add #421 — JSON cwd leaks /private symlink canonicalization on macOS
Pinpoint from Jobdori dogfood on b98b9a71 in response to Clawhip nudge.
status.workspace.cwd and mcp.working_directory both canonicalize cwd
(intentional for #151 session bleed) but leak the result into JSON
output, breaking string-match automation across macOS symlinks.
2026-05-11 10:32:18 +09:00
Jobdori
b98b9a712e fix(fmt): expand Thinking struct literals to pass cargo fmt 2026-05-09 15:52:54 +09:00
YeonGyu-Kim
357629dbd9 fix(skills): route help flags to local dispatch + fix push_output_block test arity
Cherry-pick from Yeachan-Heo's #2945 with manual conflict resolution:
- classify_skills_slash_command now catches -h/--help anywhere in args
- Restored pending_thinking parameter in push_output_block test calls

Co-authored-by: Yeachan-Heo <bellman@ultraworkers.dev>
2026-05-06 15:41:25 +09:00
YeonGyu-Kim
12b65f9807 Merge pull request #3001 from ultraworkers/fix/batch-issue-fixes
fix: REPL display, /compact panic, identity leak, DeepSeek reasoning, thinking blocks
2026-05-06 15:33:03 +09:00
YeonGyu-Kim
75c08bc982 fix: REPL display, /compact panic, identity leak, DeepSeek reasoning, thinking blocks
Five interrelated fixes from parallel Hephaestus sessions:

1. fix(repl): display assistant text after spinner (#2981, #2982, #2937)
   - Added final_assistant_text() call after run_turn spinner completes
   - REPL now shows response text like run_prompt_json does

2. fix(compact): handle Thinking content blocks (#2985)
   - Added ContentBlock::Thinking variant throughout compact summarizer
   - Prevents panic when /compact encounters thinking blocks

3. fix(prompt): provider-aware model identity (#2822)
   - New ModelFamilyIdentity enum (Claude vs Generic)
   - Non-Anthropic models no longer say 'I am Claude'
   - model_family_identity_for() detects provider and sets identity

4. fix(openai): preserve DeepSeek reasoning_content (#2821)
   - Stream parser now captures reasoning_content from OpenAI-compat
   - Emits ThinkingDelta/SignatureDelta events for reasoning models
   - Thinking blocks included in conversation history for re-send

5. feat(runtime): Thinking block support across codebase
   - AssistantEvent::Thinking variant in conversation.rs
   - ContentBlock::Thinking in session serialization
   - Thinking-aware compact summarization
   - Tests for thinking block ordering and content

Closes #2981, #2982, #2937, #2985, #2822, #2821
2026-05-06 15:32:34 +09:00
58 changed files with 22475 additions and 195 deletions

.omx/cc2/board.json (new file, 14886 lines; diff suppressed because one or more lines are too long)

.omx/cc2/board.md (new file, 842 lines; diff suppressed because one or more lines are too long)


@@ -0,0 +1,429 @@
{
"schema_version": "cc2.issue_parity_intake.v1",
"generated_at": "2026-05-14T08:02:00Z",
"task_id": "3",
"owner": "worker-2",
"goal": "G001-stream0-board",
"notes": [
"Leader owns Ultragoal; this artifact does not mutate .omx/ultragoal.",
"Rows are scoped intake/classification evidence for Worker 1/Task 2 board integration."
],
"source_manifest": {
"claw_open_latest": {
"path": ".omx/research/claw-open-latest.json",
"sha256_prefix_from_plan": "89e3e027fa735f38",
"covered_issue_numbers": [3028, 3029, 3030, 3031, 3032, 3033, 3034, 3035, 3036, 3037, 3038]
},
"claw_issues": {
"path": ".omx/research/claw-issues.json",
"sha256_prefix_from_plan": "e64fdba7df3b78ed",
"covered_issue_numbers": [2997, 3003, 3004, 3005, 3006, 3007, 3020, 3023]
},
"opencode": {
"repo_path": ".omx/research/repos/opencode",
"metadata_path": ".omx/research/opencode-repo.json",
"issues_path": ".omx/research/opencode-issues.json",
"head_from_plan": "27ac53aaacc677b1401c4e75ca7a7dadf8b2c349"
},
"codex": {
"repo_path": ".omx/research/repos/codex",
"metadata_path": ".omx/research/codex-repo.json",
"issues_path": ".omx/research/codex-issues.json",
"head_from_plan": "6a225e4005209f2325ab3c681c7c6beba2907d4d"
}
},
"issue_clusters": [
{
"id": "CC2-ISSUE-3007",
"source_anchor": "https://github.com/ultraworkers/claw-code/issues/3007",
"source_type": "github_issue",
"source_number": 3007,
"title": "Permission modes do not enforce path scope on file tools or shell expansion in bash",
"theme": "security/path-scope",
"release_bucket": "alpha_blocker",
"lifecycle_status": "active",
"roadmap_anchor": "ROADMAP.md#11-policy-engine-for-autonomous-coding; ROADMAP.md#9-green-ness-contract",
"dependencies": ["permission path canonicalization", "file tool target validation", "bash command/path validation reachability", "policy regression fixtures"],
"verification_required": ["workspace-write cannot read/write/delete outside workspace", "shell expansion and symlink traversal are rejected or policy-blocked", "file tools and bash use the same target-scope decision record"],
"deferral_rationale": null,
"classification_rationale": "Security/sandbox escape class; plan names #3007 as alpha blocker."
},
{
"id": "CC2-ISSUE-3020",
"source_anchor": "https://github.com/ultraworkers/claw-code/issues/3020",
"source_type": "github_issue",
"source_number": 3020,
"title": "OpenAI-compatible model IDs with slashes are stripped before request",
"theme": "provider/model-routing",
"release_bucket": "beta_adoption",
"lifecycle_status": "open",
"roadmap_anchor": "ROADMAP.md#provider-routing-model-name-prefix-must-win-over-env-var-presence-fixed-2026-04-08-0530c50",
"dependencies": ["provider profile contract", "wire model-id preservation option", "routing-prefix source reporting"],
"verification_required": ["OpenAI-compatible endpoint receives exact model id when preservation is enabled", "status JSON reports raw model input, route, and wire model id"],
"deferral_rationale": null,
"classification_rationale": "Core provider correctness but below alpha state/security contracts unless it blocks the selected alpha model path."
},
{
"id": "CC2-ISSUE-3006",
"source_anchor": "https://github.com/ultraworkers/claw-code/issues/3006",
"source_type": "github_issue",
"source_number": 3006,
"title": "Not Working in windows",
"theme": "windows/install",
"release_bucket": "beta_adoption",
"lifecycle_status": "open",
"roadmap_anchor": "ROADMAP.md#immediate-backlog-from-current-real-pain",
"dependencies": ["Windows support policy", "PowerShell install path", "dependency/version matrix", "diagnostic setup output"],
"verification_required": ["fresh Windows/PowerShell setup smoke documented", "unsupported native paths fail with actionable WSL2/native guidance"],
"deferral_rationale": null,
"classification_rationale": "Real adoption blocker; plan places Windows/install in beta adoption overlay."
},
{
"id": "CC2-ISSUE-3005",
"source_anchor": "https://github.com/ultraworkers/claw-code/issues/3005",
"source_type": "github_issue",
"source_number": 3005,
"title": "DeepSeek V4-flash/pro fails with 400 Bad Request (missing reasoning_content) while deepseek-reasoner works",
"theme": "provider/response-shape",
"release_bucket": "beta_adoption",
"lifecycle_status": "open",
"roadmap_anchor": "ROADMAP.md#5-failure-taxonomy; ROADMAP.md#provider-routing-model-name-prefix-must-win-over-env-var-presence-fixed-2026-04-08-0530c50",
"dependencies": ["OpenAI-compatible diagnostics playbook", "provider error taxonomy", "reasoning/thinking field compatibility tests"],
"verification_required": ["provider 400 response classified with actionable remediation", "DeepSeek-compatible response-shape fixture does not hide assistant output"],
"deferral_rationale": null,
"classification_rationale": "Provider compatibility issue that shares the #3032 diagnostics lane."
},
{
"id": "CC2-ISSUE-3004",
"source_anchor": "https://github.com/ultraworkers/claw-code/issues/3004",
"source_type": "github_issue",
"source_number": 3004,
"title": "When can we adapt to zed?",
"theme": "ide/acp",
"release_bucket": "ga_ecosystem",
"lifecycle_status": "deferred_with_rationale",
"roadmap_anchor": "ROADMAP.md#phase-5-plugin-and-mcp-lifecycle-maturity",
"dependencies": ["stable session/control API", "plugin/MCP lifecycle", "engine API or ACP bridge decision"],
"verification_required": ["Zed/ACP smoke once core state/control contracts exist"],
"deferral_rationale": "IDE integration is valuable but should wait until boot/session/event/control truth surfaces are stable.",
"classification_rationale": "Matches plan's GA ecosystem lane for Zed/ACP."
},
{
"id": "CC2-ISSUE-3003",
"source_anchor": "https://github.com/ultraworkers/claw-code/issues/3003",
"source_type": "github_issue",
"source_number": 3003,
"title": ".claude/sessions should not be submitted to repo",
"theme": "session-hygiene/gitignore",
"release_bucket": "beta_adoption",
"lifecycle_status": "open",
"roadmap_anchor": "ROADMAP.md#9-green-ness-contract; ROADMAP.md#8-recovery-recipes-for-common-failures",
"dependencies": ["artifact ignore policy", "session storage boundary docs", "repo hygiene check"],
"verification_required": ["session directories are ignored", "status/doctor warns about tracked session artifacts"],
"deferral_rationale": null,
"classification_rationale": "Small but user-visible session hygiene and data-leak prevention item."
},
{
"id": "CC2-ISSUE-2997",
"source_anchor": "https://github.com/ultraworkers/claw-code/issues/2997",
"source_type": "github_issue",
"source_number": 2997,
"title": "License?",
"theme": "docs/license",
"release_bucket": "beta_adoption",
"lifecycle_status": "open",
"roadmap_anchor": "ROADMAP.md#immediate-backlog-from-current-real-pain",
"dependencies": ["maintainer license decision", "LICENSE file", "README/USAGE attribution wording"],
"verification_required": ["repository license file exists", "package metadata and docs reference the same license"],
"deferral_rationale": null,
"classification_rationale": "Adoption/readiness documentation gap; requires maintainer decision before implementation."
},
{
"id": "CC2-ISSUE-3023",
"source_anchor": "https://github.com/ultraworkers/claw-code/issues/3023",
"source_type": "github_issue",
"source_number": 3023,
"title": "Protect claw-code from AI slop PRs",
"theme": "repo-hygiene/anti-slop",
"release_bucket": "beta_adoption",
"lifecycle_status": "open",
"roadmap_anchor": "ROADMAP.md#immediate-backlog-from-current-real-pain",
"dependencies": ["contributor policy", "PR quality gate selection", "false-positive review escape hatch"],
"verification_required": ["selected PR quality gate runs on sample good/bad PR fixtures", "maintainers can override false positives"],
"deferral_rationale": null,
"classification_rationale": "Protects project throughput but should not precede alpha core safety contracts."
},
{
"id": "CC2-ISSUE-3028",
"source_anchor": "https://github.com/ultraworkers/claw-code/issues/3028",
"source_type": "github_issue",
"source_number": 3028,
"title": "docs: add navigation and file-context usage guide",
"theme": "docs/navigation-context",
"release_bucket": "beta_adoption",
"lifecycle_status": "open",
"roadmap_anchor": "ROADMAP.md#7-human-ux-still-leaks-into-claw-workflows",
"dependencies": ["current TUI/shell key behavior inventory", "file context syntax docs", "secret-handling guidance"],
"verification_required": ["docs include terminal history, scrollback, @file context, attach/external file caveats", "examples work against current CLI"],
"deferral_rationale": null,
"classification_rationale": "Documentation support item from latest open issue refresh."
},
{
"id": "CC2-ISSUE-3029",
"source_anchor": "https://github.com/ultraworkers/claw-code/issues/3029",
"source_type": "github_issue",
"source_number": 3029,
"title": "build: add cross-platform installer path and release artifact quickstart",
"theme": "install/distribution",
"release_bucket": "beta_adoption",
"lifecycle_status": "open",
"roadmap_anchor": "ROADMAP.md#immediate-backlog-from-current-real-pain",
"dependencies": ["release artifact policy", "install.sh/install.ps1 contract", "PATH/update/uninstall instructions"],
"verification_required": ["install quickstart smoke on supported OS/arch", "failed install prints actionable diagnostics"],
"deferral_rationale": null,
"classification_rationale": "Distribution friction belongs in adoption overlay."
},
{
"id": "CC2-ISSUE-3030",
"source_anchor": "https://github.com/ultraworkers/claw-code/issues/3030",
"source_type": "github_issue",
"source_number": 3030,
"title": "feat: make provider/model setup less env-var-driven",
"theme": "provider/setup-profiles",
"release_bucket": "beta_adoption",
"lifecycle_status": "open",
"roadmap_anchor": "ROADMAP.md#3-structured-session-control-api; ROADMAP.md#145-boot-preflight-doctor-contract",
"dependencies": ["provider profiles", "setup wizard or dry-run", "secret redaction", "base-url/model smoke test"],
"verification_required": ["setup validates provider route without echoing keys", "session-only versus persisted profile behavior is explicit"],
"deferral_rationale": null,
"classification_rationale": "Directly reduces current provider setup support churn."
},
{
"id": "CC2-ISSUE-3031",
"source_anchor": "https://github.com/ultraworkers/claw-code/issues/3031",
"source_type": "github_issue",
"source_number": 3031,
"title": "feat: auto-compact or clearly recover from context-window provider errors",
"theme": "session-recovery/context-window",
"release_bucket": "beta_adoption",
"lifecycle_status": "open",
"roadmap_anchor": "ROADMAP.md#8-recovery-recipes-for-common-failures; ROADMAP.md#158-compact_messages_if_needed-drops-turns-silently-no-structured-compaction-event-emitted",
"dependencies": ["provider error classifier", "safe compact retry policy", "compaction event/audit trail", "retry loop cap"],
"verification_required": ["context-window error either compacts+retries once safely or emits exact recovery command", "compaction event is machine-visible"],
"deferral_rationale": null,
"classification_rationale": "Recovery reliability item; promoted only if selected alpha provider path hits it."
},
{
"id": "CC2-ISSUE-3032",
"source_anchor": "https://github.com/ultraworkers/claw-code/issues/3032",
"source_type": "github_issue",
"source_number": 3032,
"title": "docs: add OpenAI-compatible/local provider diagnostics playbook",
"theme": "provider/diagnostics-docs",
"release_bucket": "beta_adoption",
"lifecycle_status": "open",
"roadmap_anchor": "ROADMAP.md#5-failure-taxonomy",
"dependencies": ["raw chat-completions smoke tests", "tool-call response-shape examples", "provider failure taxonomy"],
"verification_required": ["playbook distinguishes Claw bugs from wrapper/tool-call-shape bugs", "curl examples cover non-streaming and streaming tool calls"],
"deferral_rationale": null,
"classification_rationale": "Shared diagnostic lane for #3005/#3020/local model reports."
},
{
"id": "CC2-ISSUE-3033",
"source_anchor": "https://github.com/ultraworkers/claw-code/issues/3033",
"source_type": "github_issue",
"source_number": 3033,
"title": "feat: add minimal claw serve JSON-RPC engine API",
"theme": "engine-api/control-plane",
"release_bucket": "ga_ecosystem",
"lifecycle_status": "deferred_with_rationale",
"roadmap_anchor": "ROADMAP.md#3-structured-session-control-api; ROADMAP.md#phase-4-claws-first-task-execution",
"dependencies": ["stable session state API", "event schema v1", "permission policy contract", "cancel/prompt stream semantics"],
"verification_required": ["protocol conformance fixtures for session/create prompt/stream cancel error", "capability negotiation backwards compatibility"],
"deferral_rationale": "Engine API should expose, not invent, stable core control-plane semantics after alpha contracts land.",
"classification_rationale": "Useful integration surface but too broad for alpha unless narrowed to existing session control API."
},
{
"id": "CC2-ISSUE-3034",
"source_anchor": "https://github.com/ultraworkers/claw-code/issues/3034",
"source_type": "github_issue",
"source_number": 3034,
"title": "docs: define evidence-gated Hermes handoff loop for Claw Code execution",
"theme": "sdlc/evidence-handoff",
"release_bucket": "post_2_0_research",
"lifecycle_status": "deferred_with_rationale",
"roadmap_anchor": "ROADMAP.md#4-canonical-lane-event-schema; ROADMAP.md#10-typed-task-packet-format",
"dependencies": ["typed task packet", "evidence bundle schema", "report gate status vocabulary"],
"verification_required": ["handoff packet fixture validates scope/success/test evidence fields", "post-flight gate consumes evidence instead of free-text summary"],
"deferral_rationale": "Can inform event/report/task contracts, but Hermes-specific loop should stay research/docs until core schemas are stable.",
"classification_rationale": "Only the generic evidence-gated contract is Claw 2.0; Hermes branding is not core."
},
{
"id": "CC2-ISSUE-3035",
"source_anchor": "https://github.com/ultraworkers/claw-code/issues/3035",
"source_type": "github_issue",
"source_number": 3035,
"title": "fix: improve compacted session resume discoverability",
"theme": "session-resume/discoverability",
"release_bucket": "beta_adoption",
"lifecycle_status": "open",
"roadmap_anchor": "ROADMAP.md#8-recovery-recipes-for-common-failures; ROADMAP.md#160-session_store-has-no-list_sessions-delete_session-or-session_exists",
"dependencies": ["session enumeration", "latest-session workspace search boundary", "compacted session marker"],
"verification_required": ["/resume latest finds newest eligible compacted session", "/session or status lists resumable compacted sessions with path/id"],
"deferral_rationale": null,
"classification_rationale": "Session recovery/adoption item."
},
{
"id": "CC2-ISSUE-3036",
"source_anchor": "https://github.com/ultraworkers/claw-code/issues/3036",
"source_type": "github_issue",
"source_number": 3036,
"title": "docs: add official Ollama/llama.cpp/vLLM local model examples",
"theme": "provider/local-docs",
"release_bucket": "beta_adoption",
"lifecycle_status": "open",
"roadmap_anchor": "ROADMAP.md#145-boot-preflight-doctor-contract; ROADMAP.md#5-failure-taxonomy",
"dependencies": ["known-good local provider examples", "raw /v1 smoke test", "tool-call limitation warning"],
"verification_required": ["docs include Ollama/llama.cpp/vLLM examples and HELLO smoke", "tool-call caveats are explicit"],
"deferral_rationale": null,
"classification_rationale": "Local provider adoption support."
},
{
"id": "CC2-ISSUE-3037",
"source_anchor": "https://github.com/ultraworkers/claw-code/issues/3037",
"source_type": "github_issue",
"source_number": 3037,
"title": "docs: clarify Claw Code positioning as multi-provider Claude-Code-shaped runtime",
"theme": "docs/product-positioning",
"release_bucket": "beta_adoption",
"lifecycle_status": "open",
"roadmap_anchor": "ROADMAP.md#goal; ROADMAP.md#definition-of-clawable",
"dependencies": ["README positioning copy", "provider support truth table", "identity leak bug policy"],
"verification_required": ["README/docs answer Claude-only question directly", "provider support wording matches implemented routes"],
"deferral_rationale": null,
"classification_rationale": "Clarifies product identity for adoption without broad implementation."
},
{
"id": "CC2-ISSUE-3038",
"source_anchor": "https://github.com/ultraworkers/claw-code/issues/3038",
"source_type": "github_issue",
"source_number": 3038,
"title": "roadmap: track skills/plugins/marketplace ecosystem gap after core UX stabilizes",
"theme": "plugin-marketplace/ecosystem",
"release_bucket": "ga_ecosystem",
"lifecycle_status": "deferred_with_rationale",
"roadmap_anchor": "ROADMAP.md#13-first-class-pluginmcp-lifecycle-contract; ROADMAP.md#14-mcp-end-to-end-lifecycle-parity",
"dependencies": ["plugin/MCP lifecycle contract", "extension point inventory", "discovery/install/update flow design"],
"verification_required": ["extension point inventory exists", "marketplace work explicitly depends on core UX stabilization"],
"deferral_rationale": "Marketplace breadth should wait until core setup/auth/provider/session UX and plugin lifecycle are reliable.",
"classification_rationale": "Matches plan's ga_ecosystem/post-2.0 caution for marketplace parity."
}
],
"parity_rows": [
{
"id": "CC2-PARITY-OPENCODE-PLUGIN-ECOSYSTEM",
"source_anchor": "anomalyco/opencode@27ac53aa packages/app/web/desktop/plugin/sdk/extensions/zed/slack/containers plus issue #3038",
"source_type": "repo_clone_and_local_issue",
"title": "Plugin/skills/marketplace ecosystem inventory",
"release_bucket": "ga_ecosystem",
"lifecycle_status": "deferred_with_rationale",
"dependencies": ["Claw plugin/MCP lifecycle contract", "current extension-point inventory"],
"verification_required": ["inventory maps current Claw plugin/skill/MCP extension points before marketplace implementation"],
"deferral_rationale": "Adapt ecosystem discovery only after core setup/provider/session reliability is stable."
},
{
"id": "CC2-PARITY-OPENCODE-PERMISSION-PRESETS",
"source_anchor": "https://github.com/anomalyco/opencode/issues/27464 and ROADMAP.md#11-policy-engine-for-autonomous-coding",
"source_type": "external_issue_and_roadmap",
"title": "Quick permission preset switching mapped onto Claw policy profiles",
"release_bucket": "beta_adoption",
"lifecycle_status": "open",
"dependencies": ["policy profile model", "approval-token audit trail"],
"verification_required": ["preset switch is visible in status/report output and cannot bypass path-scope enforcement"],
"deferral_rationale": null
},
{
"id": "CC2-PARITY-OPENCODE-CUSTOM-PROVIDER-PARAMS",
"source_anchor": "https://github.com/anomalyco/opencode/issues/27462 and #3030/#3032",
"source_type": "external_issue_and_local_issue",
"title": "Custom API parameter passthrough for provider profiles",
"release_bucket": "beta_adoption",
"lifecycle_status": "open",
"dependencies": ["provider profile schema", "secret redaction", "request audit surface"],
"verification_required": ["custom params are schema-validated, redacted, and visible as provenance without leaking secrets"],
"deferral_rationale": null
},
{
"id": "CC2-PARITY-OPENCODE-TODOWRITE-AUTOCOMPLETE",
"source_anchor": "https://github.com/anomalyco/opencode/issues/27453 and ROADMAP.md#10-typed-task-packet-format",
"source_type": "external_issue_and_roadmap",
"title": "Task/Todo completion assistance via typed task lifecycle",
"release_bucket": "ga_ecosystem",
"lifecycle_status": "deferred_with_rationale",
"dependencies": ["typed task packet", "task lifecycle events", "evidence-gated completion"],
"verification_required": ["auto-complete suggestions cannot mark work complete without evidence bundle or explicit user approval"],
"deferral_rationale": "Useful UX should follow, not precede, typed task lifecycle and evidence contract."
},
{
"id": "CC2-PARITY-OPENCODE-WINDOWS-DISTRIBUTION",
"source_anchor": "https://github.com/anomalyco/opencode/issues/27476 https://github.com/anomalyco/opencode/issues/27459 https://github.com/anomalyco/opencode/issues/27470 and #3006/#3029",
"source_type": "external_issues_and_local_issues",
"title": "Windows/GLIBC/distribution reliability parity lessons",
"release_bucket": "beta_adoption",
"lifecycle_status": "open",
"dependencies": ["install artifact matrix", "Windows encoding guidance", "minimum Linux/GLIBC support statement"],
"verification_required": ["release quickstart documents supported OS matrix and known terminal/encoding caveats"],
"deferral_rationale": null
},
{
"id": "CC2-PARITY-CODEX-GRANULAR-PERMISSIONS",
"source_anchor": "https://github.com/openai/codex/issues/22595 and Codex docs permissions/app/plugin concepts",
"source_type": "external_issue_and_docs",
"title": "Granular app/plugin permissions adapted to Claw policy engine",
"release_bucket": "alpha_blocker",
"lifecycle_status": "active",
"dependencies": ["permission enforcer path-scope fix", "plugin/MCP capability model", "approval-token replay protection"],
"verification_required": ["granular permission grants do not widen workspace path scope implicitly"],
"deferral_rationale": null
},
{
"id": "CC2-PARITY-CODEX-SESSION-RECOVERY",
"source_anchor": "https://github.com/openai/codex/issues/22619 https://github.com/openai/codex/issues/22597 https://github.com/openai/codex/issues/22593 and #3035",
"source_type": "external_issues_and_local_issue",
"title": "Safe local session/thread recovery without storage amplification",
"release_bucket": "beta_adoption",
"lifecycle_status": "open",
"dependencies": ["session enumeration", "resume latest boundary", "JSONL/storage compaction policy"],
"verification_required": ["recoverable sessions are discoverable and session forks avoid unbounded duplicate history"],
"deferral_rationale": null
},
{
"id": "CC2-PARITY-CODEX-PROXY-NETWORK",
"source_anchor": "https://github.com/openai/codex/issues/22623 and #3032",
"source_type": "external_issue_and_local_issue",
"title": "Provider/network diagnostics include proxy behavior",
"release_bucket": "beta_adoption",
"lifecycle_status": "open",
"dependencies": ["HTTP client proxy detection", "provider diagnostics playbook"],
"verification_required": ["diagnostics report whether proxy env/config is honored for provider calls"],
"deferral_rationale": null
},
{
"id": "CC2-PARITY-CODEX-CLI-AGENT-FLAG",
"source_anchor": "https://github.com/openai/codex/issues/22615 and ROADMAP.md#10-typed-task-packet-format",
"source_type": "external_issue_and_roadmap",
"title": "CLI flag for agent/subagent mode mapped to Claw typed task packets",
"release_bucket": "ga_ecosystem",
"lifecycle_status": "deferred_with_rationale",
"dependencies": ["typed task packet", "session control API", "policy-scoped worker launch"],
"verification_required": ["CLI agent mode cannot bypass task policy or evidence requirements"],
"deferral_rationale": "Implement only after core task/session control contracts are stable."
}
],
"coverage": {
"required_latest_open_range_3028_3038": [3028, 3029, 3030, 3031, 3032, 3033, 3034, 3035, 3036, 3037, 3038],
"required_existing_issue_numbers": [3007, 3006, 3020, 3005, 3003, 2997, 3023, 3004],
"issue_rows_expected": 19,
"parity_rows_expected_minimum": 6
}
}
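The `coverage` block above encodes a simple arithmetic contract: the `3028_3038` range is inclusive (11 issues), and together with the 8 existing issue numbers it accounts for all 19 expected issue rows. A minimal sketch of that invariant, with values copied from the fragment (variable names are illustrative):

```python
# Coverage contract from the intake fragment: 11 latest-open issues (inclusive
# range 3028..3038) plus 8 existing issues must equal the 19 expected rows.
required_range = list(range(3028, 3039))  # range() excludes the stop, so use 3039
existing = [3007, 3006, 3020, 3005, 3003, 2997, 3023, 3004]
issue_rows_expected = 19

total = len(required_range) + len(existing)
print(len(required_range), len(existing), total)  # 11 8 19
assert total == issue_rows_expected
```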


@@ -0,0 +1,47 @@
# CC2 Issue / Parity Intake Mapping
Generated by `worker-2` for team task 3 (`G001 issue/parity intake mapping`). This is a board-integration fragment for Stream 0; it intentionally does **not** mutate `.omx/ultragoal`.
## Covered local issue clusters
| Issue | Theme | Bucket | Lifecycle | Board anchor |
|---:|---|---|---|---|
| #3007 | security/path-scope | `alpha_blocker` | `active` | Policy engine + green-ness contract |
| #3020 | provider/model-routing | `beta_adoption` | `open` | Provider routing/model source status |
| #3006 | windows/install | `beta_adoption` | `open` | Immediate backlog / install readiness |
| #3005 | provider/response-shape | `beta_adoption` | `open` | Failure taxonomy / provider diagnostics |
| #3004 | ide/acp | `ga_ecosystem` | `deferred_with_rationale` | Plugin/MCP lifecycle maturity |
| #3003 | session-hygiene/gitignore | `beta_adoption` | `open` | Green-ness / recovery hygiene |
| #2997 | docs/license | `beta_adoption` | `open` | Adoption docs/license readiness |
| #3023 | repo-hygiene/anti-slop | `beta_adoption` | `open` | Immediate backlog / PR quality gate |
| #3028 | docs/navigation-context | `beta_adoption` | `open` | Human UX leaks into claw workflows |
| #3029 | install/distribution | `beta_adoption` | `open` | Cross-platform release quickstart |
| #3030 | provider/setup-profiles | `beta_adoption` | `open` | Boot preflight / structured session control |
| #3031 | session-recovery/context-window | `beta_adoption` | `open` | Recovery recipes / compaction event |
| #3032 | provider/diagnostics-docs | `beta_adoption` | `open` | Failure taxonomy |
| #3033 | engine-api/control-plane | `ga_ecosystem` | `deferred_with_rationale` | Structured session control API |
| #3034 | sdlc/evidence-handoff | `post_2_0_research` | `deferred_with_rationale` | Event/report/task contract input |
| #3035 | session-resume/discoverability | `beta_adoption` | `open` | Recovery recipes / session enumeration |
| #3036 | provider/local-docs | `beta_adoption` | `open` | Provider setup and diagnostics docs |
| #3037 | docs/product-positioning | `beta_adoption` | `open` | Goal / definition of clawable |
| #3038 | plugin-marketplace/ecosystem | `ga_ecosystem` | `deferred_with_rationale` | Plugin/MCP lifecycle maturity |
## Parity intake rows
| Row | Source | Bucket | Lifecycle | Adaptation rule |
|---|---|---|---|---|
| `CC2-PARITY-OPENCODE-PLUGIN-ECOSYSTEM` | opencode repo + #3038 | `ga_ecosystem` | `deferred_with_rationale` | Inventory Claw extension points before marketplace work. |
| `CC2-PARITY-OPENCODE-PERMISSION-PRESETS` | opencode #27464 | `beta_adoption` | `open` | Permission preset UX must not bypass Claw path-scope policy. |
| `CC2-PARITY-OPENCODE-CUSTOM-PROVIDER-PARAMS` | opencode #27462 + #3030/#3032 | `beta_adoption` | `open` | Custom provider params need schema validation, redaction, and provenance. |
| `CC2-PARITY-OPENCODE-TODOWRITE-AUTOCOMPLETE` | opencode #27453 | `ga_ecosystem` | `deferred_with_rationale` | Auto-complete task UX follows typed task lifecycle/evidence gates. |
| `CC2-PARITY-OPENCODE-WINDOWS-DISTRIBUTION` | opencode #27476/#27459/#27470 + #3006/#3029 | `beta_adoption` | `open` | Use external pain as release-matrix and diagnostics evidence. |
| `CC2-PARITY-CODEX-GRANULAR-PERMISSIONS` | Codex #22595 + docs | `alpha_blocker` | `active` | Adapt granular permissions only through Claw policy engine and approval tokens. |
| `CC2-PARITY-CODEX-SESSION-RECOVERY` | Codex #22619/#22597/#22593 + #3035 | `beta_adoption` | `open` | Session discovery/recovery must avoid storage amplification. |
| `CC2-PARITY-CODEX-PROXY-NETWORK` | Codex #22623 + #3032 | `beta_adoption` | `open` | Provider diagnostics should expose proxy behavior. |
| `CC2-PARITY-CODEX-CLI-AGENT-FLAG` | Codex #22615 | `ga_ecosystem` | `deferred_with_rationale` | CLI agent mode waits for typed task/session control contracts. |
Validation command:
```bash
python3 .omx/cc2/validate_issue_parity_intake.py
```

.omx/cc2/render_board_md.py Executable file

@@ -0,0 +1,250 @@
#!/usr/bin/env python3
"""Render the Claw Code 2.0 canonical board JSON as a human-readable Markdown board."""
from __future__ import annotations
import argparse
import json
import sys
from collections import Counter, defaultdict
from pathlib import Path
from typing import Any
STATUS_DESCRIPTIONS = {
    "context": "Context-only heading or evidence anchor; not an implementation work item.",
    "active": "Current Claw Code 2.0 implementation surface that should remain visible on the board.",
    "open": "Actionable unresolved work that needs implementation or acceptance evidence.",
    "done_verify": "Marked as done upstream but retained for verification against current CC2 behavior.",
    "stale_done": "Historically completed or merged work that may be stale and needs freshness checks before relying on it.",
    "superseded": "Replaced by a newer item; keep as traceability context only.",
    "deferred_with_rationale": "Intentionally deferred; rationale must be present in the board item.",
    "rejected_not_claw": "Excluded because it is not Claw Code product work.",
}

BUCKET_DESCRIPTIONS = {
    "alpha_blocker": "Must be resolved before alpha-quality autonomous coding lanes are dependable.",
    "beta_adoption": "Important for broader dogfood/adoption once alpha blockers are controlled.",
    "ga_ecosystem": "Required for mature plugin/MCP/provider ecosystem behavior.",
    "2.x_intake": "Post-2.0 intake or follow-up candidate retained for sequencing.",
    "post_2_0_research": "Research-oriented item not required for the CC2 board cut.",
    "context": "Non-actionable roadmap context.",
    "rejected_not_claw": "Explicit non-Claw rejection bucket.",
}

LANE_TITLES = {
    "stream_0_governance": "Stream 0 — Governance, intake, and cross-cutting roadmap triage",
    "stream_1_worker_boot_session_control": "Stream 1 — Worker boot and session control",
    "stream_2_event_reporting_contracts": "Stream 2 — Event/reporting contracts",
    "stream_3_branch_test_recovery": "Stream 3 — Branch/test recovery",
    "stream_4_claws_first_execution": "Stream 4 — Claws-first task execution",
    "stream_5_plugin_mcp_lifecycle": "Stream 5 — Plugin/MCP lifecycle",
    "adoption_overlay": "Adoption overlay — user-visible parity and release polish",
    "parity_overlay": "Parity overlay — opencode/codex comparison context",
}

REQUIRED_ITEM_FIELDS = [
    "id",
    "title",
    "source_anchor",
    "source_type",
    "release_bucket",
    "lifecycle_status",
    "dependencies",
    "verification_required",
    "deferral_rationale",
]
def load_board(path: Path) -> dict[str, Any]:
    with path.open() as f:
        board = json.load(f)
    if not isinstance(board, dict):
        raise ValueError("board JSON root must be an object")
    items = board.get("items")
    if not isinstance(items, list):
        raise ValueError("board JSON must contain an items array")
    return board

def validate_board(board: dict[str, Any]) -> list[str]:
    errors: list[str] = []
    coverage = board.get("coverage", {})
    if coverage.get("unmapped_roadmap_heading_lines"):
        errors.append(f"unmapped roadmap heading lines: {coverage['unmapped_roadmap_heading_lines']}")
    if coverage.get("roadmap_headings_mapped") != coverage.get("roadmap_headings_total"):
        errors.append("roadmap heading coverage is incomplete")
    if coverage.get("roadmap_actions_mapped") != coverage.get("roadmap_actions_total"):
        errors.append("roadmap ordered-action coverage is incomplete")
    allowed_status = set(board.get("generation_policy", {}).get("status_values", []))
    allowed_buckets = set(board.get("generation_policy", {}).get("release_buckets", []))
    seen_ids: set[str] = set()
    for index, item in enumerate(board["items"], 1):
        for field in REQUIRED_ITEM_FIELDS:
            if field not in item:
                errors.append(f"item {index} missing required field {field}")
        item_id = item.get("id")
        if item_id in seen_ids:
            errors.append(f"duplicate item id {item_id}")
        seen_ids.add(item_id)
        status = item.get("lifecycle_status")
        bucket = item.get("release_bucket")
        if allowed_status and status not in allowed_status:
            errors.append(f"{item_id} has unknown lifecycle_status {status!r}")
        if allowed_buckets and bucket not in allowed_buckets:
            errors.append(f"{item_id} has unknown release_bucket {bucket!r}")
        if status == "deferred_with_rationale" and not str(item.get("deferral_rationale", "")).strip():
            errors.append(f"{item_id} is deferred without deferral_rationale")
    return errors

def table(headers: list[str], rows: list[list[Any]]) -> list[str]:
    out = ["| " + " | ".join(headers) + " |", "| " + " | ".join("---" for _ in headers) + " |"]
    for row in rows:
        out.append("| " + " | ".join(str(cell) for cell in row) + " |")
    return out

def fmt_list(value: Any) -> str:
    if not value:
        return "none"
    if isinstance(value, list):
        return ", ".join(f"`{v}`" for v in value) if value else "none"
    return f"`{value}`"
def render(board: dict[str, Any]) -> str:
    items: list[dict[str, Any]] = board["items"]
    summary = board.get("summary", {})
    coverage = board.get("coverage", {})
    sources = board.get("sources", {})
    policy = board.get("generation_policy", {})
    by_lane = Counter(item.get("owner_lane", "unassigned") for item in items)
    by_status = Counter(item.get("lifecycle_status", "unknown") for item in items)
    by_bucket = Counter(item.get("release_bucket", "unknown") for item in items)
    by_source = Counter(item.get("source_type", "unknown") for item in items)
    lines: list[str] = []
    lines.append("# Claw Code 2.0 Canonical Board")
    lines.append("")
    lines.append(f"Generated from board schema: `{board.get('generated_at', 'unknown')}`")
    lines.append(f"Schema version: `{board.get('schema_version', 'unknown')}`")
    lines.append("Ultragoal mutation policy: `.omx/ultragoal` is leader-owned and was not modified by this rendering task.")
    lines.append("")
    lines.append("## Evidence Freeze")
    lines.append("")
    roadmap = sources.get("roadmap", {})
    research = sources.get("research", {})
    plan = sources.get("approved_plan", {})
    lines.extend(table(["Source", "Frozen evidence"], [
        ["Roadmap", f"`{roadmap.get('path', 'ROADMAP.md')}` sha256 prefix `{roadmap.get('sha256_prefix', 'unknown')}`; {roadmap.get('heading_count', '?')} headings; {roadmap.get('ordered_action_count', '?')} ordered actions"],
        ["Approved plan", f"`{plan.get('path', '.omx/plans/claw-code-2-0-adaptive-plan.md')}` sha256 prefix `{plan.get('sha256_prefix', 'unknown')}`"],
        ["Research bundle", f"root `{research.get('root', '.omx/research')}`; latest open issues {research.get('claw_open_latest_count', '?')}; issue corpus {research.get('claw_issues_count', '?')}; codex/opencode clone metadata included"],
    ]))
    lines.append("")
    lines.append("## Roadmap Coverage Summary")
    lines.append("")
    heading_total = coverage.get("roadmap_headings_total", 0)
    heading_mapped = coverage.get("roadmap_headings_mapped", 0)
    action_total = coverage.get("roadmap_actions_total", 0)
    action_mapped = coverage.get("roadmap_actions_mapped", 0)
    lines.extend(table(["Coverage gate", "Mapped", "Total", "Status"], [
        ["ROADMAP headings", heading_mapped, heading_total, "PASS" if heading_mapped == heading_total and not coverage.get("unmapped_roadmap_heading_lines") else "FAIL"],
        ["ROADMAP ordered actions", action_mapped, action_total, "PASS" if action_mapped == action_total else "FAIL"],
        ["Duplicate heading lines", len(coverage.get("duplicate_roadmap_heading_lines", [])), 0, "PASS" if not coverage.get("duplicate_roadmap_heading_lines") else "WARN"],
    ]))
    lines.append("")
    lines.append(f"Total canonical board items: **{len(items)}**")
    lines.append("")
    lines.append("## Lifecycle Enum Reference")
    lines.append("")
    status_rows = []
    for status in policy.get("status_values", sorted(by_status)):
        status_rows.append([f"`{status}`", by_status.get(status, 0), STATUS_DESCRIPTIONS.get(status, "Board-defined lifecycle status.")])
    lines.extend(table(["Lifecycle", "Count", "Meaning"], status_rows))
    lines.append("")
    lines.append("## Release Bucket Reference")
    lines.append("")
    bucket_rows = []
    for bucket in policy.get("release_buckets", sorted(by_bucket)):
        bucket_rows.append([f"`{bucket}`", by_bucket.get(bucket, 0), BUCKET_DESCRIPTIONS.get(bucket, "Board-defined release bucket.")])
    lines.extend(table(["Bucket", "Count", "Meaning"], bucket_rows))
    lines.append("")
    lines.append("## Stream Summaries")
    lines.append("")
    lane_rows = []
    for lane, count in sorted(by_lane.items()):
        lane_items = [item for item in items if item.get("owner_lane") == lane]
        lane_status = Counter(item.get("lifecycle_status") for item in lane_items)
        open_like = lane_status.get("active", 0) + lane_status.get("open", 0) + lane_status.get("done_verify", 0)
        lane_rows.append([
            LANE_TITLES.get(lane, lane),
            count,
            open_like,
            ", ".join(f"`{k}` {v}" for k, v in sorted(lane_status.items())),
        ])
    lines.extend(table(["Stream / lane", "Items", "Active+open+verify", "Lifecycle mix"], lane_rows))
    lines.append("")
    lines.append("## Source-Type Mix")
    lines.append("")
    lines.extend(table(["Source type", "Items"], [[f"`{k}`", v] for k, v in sorted(by_source.items())]))
    lines.append("")
    lines.append("## Board Items by Stream")
    lines.append("")
    for lane in sorted(by_lane):
        lane_items = [item for item in items if item.get("owner_lane") == lane]
        lines.append(f"### {LANE_TITLES.get(lane, lane)}")
        lines.append("")
        lines.extend(table(
            ["ID", "Title", "Source", "Bucket", "Lifecycle", "Verification", "Dependencies", "Deferral"],
            [[
                f"`{item.get('id')}`",
                str(item.get("title", "")).replace("|", "\\|"),
                f"`{item.get('source_anchor')}` / `{item.get('source_type')}`",
                f"`{item.get('release_bucket')}`",
                f"`{item.get('lifecycle_status')}`",
                f"`{item.get('verification_required')}`",
                fmt_list(item.get("dependencies")),
                str(item.get("deferral_rationale") or "").replace("|", "\\|"),
            ] for item in lane_items]
        ))
        lines.append("")
    return "\n".join(lines).rstrip() + "\n"
def main() -> int:
    parser = argparse.ArgumentParser(description=__doc__)
    parser.add_argument("board_json", type=Path)
    parser.add_argument("board_md", type=Path)
    parser.add_argument("--check", action="store_true", help="fail if board_md is not up to date")
    args = parser.parse_args()
    board = load_board(args.board_json)
    errors = validate_board(board)
    if errors:
        for error in errors:
            print(f"ERROR: {error}", file=sys.stderr)
        return 1
    rendered = render(board)
    if args.check:
        existing = args.board_md.read_text() if args.board_md.exists() else ""
        if existing != rendered:
            print(f"ERROR: {args.board_md} is not up to date", file=sys.stderr)
            return 1
        print(f"PASS: {args.board_md} is up to date and roadmap coverage is complete")
        return 0
    args.board_md.parent.mkdir(parents=True, exist_ok=True)
    args.board_md.write_text(rendered)
    print(f"wrote {args.board_md}")
    return 0

if __name__ == "__main__":
    raise SystemExit(main())
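A quick standalone sketch of the Markdown-building helpers above (`table`, plus a slightly simplified `fmt_list`), showing the exact table shape the board file is rendered from:

```python
from typing import Any

def table(headers: list[str], rows: list[list[Any]]) -> list[str]:
    # Header row, a "---" separator row, then one row per data list.
    out = ["| " + " | ".join(headers) + " |",
           "| " + " | ".join("---" for _ in headers) + " |"]
    for row in rows:
        out.append("| " + " | ".join(str(cell) for cell in row) + " |")
    return out

def fmt_list(value: Any) -> str:
    # Empty values render as the literal "none"; lists become backticked items.
    if not value:
        return "none"
    if isinstance(value, list):
        return ", ".join(f"`{v}`" for v in value)
    return f"`{value}`"

lines = table(["Bucket", "Count"], [["`alpha_blocker`", 3], ["`beta_adoption`", 12]])
print("\n".join(lines))
print(fmt_list(["policy engine", "audit trail"]))  # `policy engine`, `audit trail`
print(fmt_list([]))                                # none
```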


@@ -0,0 +1,58 @@
#!/usr/bin/env python3
"""Validate the worker-2 CC2 issue/parity intake fragment."""
from __future__ import annotations
import json
from pathlib import Path
ROOT = Path(__file__).resolve().parents[2]
INTAKE = ROOT / ".omx" / "cc2" / "issue-parity-intake.json"
REQUIRED_ISSUES = set(range(3028, 3039)) | {3007, 3006, 3020, 3005, 3003, 2997, 3023, 3004}
ALLOWED_STATUS = {
    "context",
    "active",
    "open",
    "done_verify",
    "stale_done",
    "superseded",
    "deferred_with_rationale",
    "rejected_not_claw",
}
ALLOWED_BUCKETS = {"alpha_blocker", "beta_adoption", "ga_ecosystem", "post_2_0_research"}

def require(condition: bool, message: str) -> None:
    if not condition:
        raise SystemExit(f"FAIL: {message}")

def main() -> None:
    data = json.loads(INTAKE.read_text())
    issue_rows = data.get("issue_clusters", [])
    parity_rows = data.get("parity_rows", [])
    seen = {row.get("source_number") for row in issue_rows}
    missing = sorted(REQUIRED_ISSUES - seen)
    extra = sorted(seen - REQUIRED_ISSUES)
    require(not missing, f"missing required issue rows: {missing}")
    require(not extra, f"unexpected issue rows in scoped intake: {extra}")
    require(len(issue_rows) == len(REQUIRED_ISSUES), "duplicate or missing issue row count")
    ids = [row.get("id") for row in issue_rows + parity_rows]
    require(len(ids) == len(set(ids)), "duplicate ids present")
    for row in issue_rows + parity_rows:
        row_id = row.get("id")
        for field in ["source_anchor", "source_type", "release_bucket", "lifecycle_status", "dependencies", "verification_required"]:
            require(row.get(field) not in (None, "", []), f"{row_id} missing {field}")
        require(row["release_bucket"] in ALLOWED_BUCKETS, f"{row_id} invalid release_bucket {row['release_bucket']}")
        require(row["lifecycle_status"] in ALLOWED_STATUS, f"{row_id} invalid lifecycle_status {row['lifecycle_status']}")
        if row["lifecycle_status"] == "deferred_with_rationale":
            require(row.get("deferral_rationale"), f"{row_id} deferred without rationale")
    require(len(parity_rows) >= data["coverage"]["parity_rows_expected_minimum"], "not enough parity rows")
    print(f"PASS issue/parity intake: {len(issue_rows)} issue rows, {len(parity_rows)} parity rows")

if __name__ == "__main__":
    main()
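The validator's fail-fast behavior hinges on `require` raising `SystemExit`, so the first violation aborts the run with a non-zero status and a `FAIL:`-prefixed message. A minimal sketch of that pattern in isolation:

```python
def require(condition: bool, message: str) -> None:
    # Same fail-fast pattern as validate_issue_parity_intake.py: the first
    # unmet condition stops the script via SystemExit.
    if not condition:
        raise SystemExit(f"FAIL: {message}")

ids = ["CC2-A", "CC2-B", "CC2-A"]
try:
    require(len(ids) == len(set(ids)), "duplicate ids present")
except SystemExit as exc:
    print(exc)  # FAIL: duplicate ids present
```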


@@ -0,0 +1,8 @@
{
  "session_id": "b035f648d5b549aa836ea01f6727ec62",
  "messages": [
    "review MCP tool"
  ],
  "input_tokens": 3,
  "output_tokens": 13
}


@@ -0,0 +1,9 @@
{
  "session_id": "b234acb1eb8c486e80544ddc7e13e6d8",
  "messages": [
    "review MCP tool",
    "review MCP tool"
  ],
  "input_tokens": 6,
  "output_tokens": 32
}


@@ -0,0 +1,9 @@
{
  "session_id": "b67e062748f04e10ac5770df9285e4bd",
  "messages": [
    "review MCP tool",
    "review MCP tool"
  ],
  "input_tokens": 6,
  "output_tokens": 32
}


@@ -0,0 +1,9 @@
{
  "session_id": "bb88fd20433840a8b19237e3f306c6e3",
  "messages": [
    "review MCP tool",
    "review MCP tool"
  ],
  "input_tokens": 6,
  "output_tokens": 32
}


@@ -192,6 +192,7 @@ cargo test --workspace
- [`PARITY.md`](./PARITY.md) — parity status for the Rust port
- [`rust/MOCK_PARITY_HARNESS.md`](./rust/MOCK_PARITY_HARNESS.md) — deterministic mock-service harness details
- [`ROADMAP.md`](./ROADMAP.md) — active roadmap and open cleanup work
- [`docs/g004-events-reports-contract.md`](./docs/g004-events-reports-contract.md) — Stream 2 lane event/report contract guidance for consumers
- [`PHILOSOPHY.md`](./PHILOSOPHY.md) — why the project exists and how it is operated
## Ecosystem


@@ -6338,3 +6338,87 @@ Original filing (2026-04-18): the session emitted `SessionStart hook (completed)
420. **`plugins help --output-format json` returns the mutation response shape (`message`, `reload_runtime`, `target`) instead of the help envelope (`action:"help"`, `kind`, `unexpected`, `usage`) that `mcp help`, `agents help`, and `skills help` all use — schema drift within the same command family** — dogfooded 2026-05-01 by Jobdori on `e939777f`. Running `claw plugins help --output-format json` returns `{"action":"help","kind":"plugin","message":"Unknown /plugins action 'help'. Use list, install, enable, disable, uninstall, or update.","reload_runtime":false,"target":null}`. By contrast, `claw mcp help --output-format json`, `claw agents help --output-format json`, and `claw skills help --output-format json` all return a help envelope: `{"action":"help","kind":"<surface>","unexpected":null,"usage":{"direct_cli":"...","slash_command":"...","sources":[...]}}`. The `plugins` subgroup has not adopted the help envelope schema used by all sibling subgroups. Instead it uses the mutation response shape (`message`, `reload_runtime`, `target`) with an error string in `message` that calls `help` an "unknown action." Automation that checks `usage.direct_cli` to discover plugin commands gets a `TypeError` (key not found) on the plugins help path while succeeding on all sibling subgroups. **Required fix shape:** (a) make `plugins help` return the same help envelope as `mcp help`/`agents help`/`skills help`: `{action:"help", kind:"plugin", unexpected:null, usage:{direct_cli:"claw plugins [list|enable|disable|install|uninstall|update|help]", slash_command:"/plugins [...]", sources:[...]}`; (b) drop `reload_runtime` and `target` from help responses for all plugin subcommands; (c) add regression coverage proving `plugins help --output-format json` contains a `usage.direct_cli` field matching the same envelope shape as `mcp help`/`agents help`/`skills help`; (d) audit all subgroup `help` handlers for the same mutation-envelope contamination.
**Why this matters:** help discovery is the bootstrap surface for automation. If `plugins help --output-format json` returns a mutation envelope with an error message instead of a usage envelope, automated schema discovery fails silently for the entire plugins subgroup while working for every other subgroup. Source: Jobdori live dogfood, `e939777f`, 2026-05-01.
421. **`status`, `mcp list`, `doctor` JSON output leak macOS `/private` symlink-canonicalized cwd instead of user-invocation cwd — automation that string-matches on cwd breaks across symlinked filesystems** — dogfooded 2026-05-11 by Jobdori on `b98b9a71` in response to Clawhip pinpoint nudge at `1503207549447573574`. Reproduction on macOS: invoke from `/tmp/claw-dog-cwd` (where `/tmp` symlinks to `/private/tmp`), then `claw status --output-format json` returns `workspace.cwd: "/private/tmp/claw-dog-cwd"`, `claw mcp list --output-format json` returns `working_directory: "/private/tmp/claw-dog-cwd"`. The user's invocation cwd (`$PWD`, `pwd`) is `/tmp/claw-dog-cwd`. Source: `session_control.rs:34` calls `fs::canonicalize(cwd)` for #151 cross-worktree session-bleed prevention, then leaks the canonicalized path through every JSON envelope that reports cwd. **Required fix shape:** (a) keep canonicalized cwd for session keying internally, but report user-input cwd (the value passed by `env::current_dir()` or `--cwd` flag) in JSON output as `cwd`; (b) optionally expose canonical path as a separate field `cwd_canonical` for diagnostic purposes; (c) audit every `--output-format json` surface that emits `cwd` / `working_directory` / `workspace.cwd` for the same leak (status, mcp list, doctor, session list, init, etc.); (d) add regression coverage proving JSON cwd matches `$PWD` on macOS where `/tmp -> /private/tmp` symlink exists. **Why this matters:** automation pipelines that route work to lanes by cwd, or that compare cwd against a registry, break across macOS hosts because the canonicalized form differs from the form the user/orchestrator passed. The leak is silent — no documentation indicates the path will be rewritten. Source: Jobdori live dogfood, `b98b9a71`, 2026-05-11.
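The fix shape for #421 can be reduced to a dual-field envelope: keep the canonicalized path for internal session keying, report the invocation path in JSON. A hypothetical sketch (the field names `cwd` and `cwd_canonical` are the ones proposed in the fix shape, not an existing API):

```python
# Hypothetical envelope builder for the #421 fix shape: internal keying keeps
# the symlink-resolved path, while JSON output reports what $PWD said at
# invocation (e.g. macOS /tmp -> /private/tmp).
def cwd_fields(invocation_cwd: str, canonical_cwd: str) -> dict:
    return {
        "cwd": invocation_cwd,           # user-visible, matches $PWD
        "cwd_canonical": canonical_cwd,  # diagnostics only
    }

fields = cwd_fields("/tmp/claw-dog-cwd", "/private/tmp/claw-dog-cwd")
print(fields["cwd"])            # /tmp/claw-dog-cwd
print(fields["cwd_canonical"])  # /private/tmp/claw-dog-cwd
```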
422. **Unknown top-level subcommands fall through to chat prompt path instead of returning `unknown_subcommand` error — typos silently send the subcommand string as a chat message to the configured LLM** — dogfooded 2026-05-11 by Jobdori on `b98b9a71` in response to Clawhip pinpoint nudge at `1503215095088676956`. Reproduction: `unset ANTHROPIC_AUTH_TOKEN; export ANTHROPIC_API_KEY=fake-key-for-routing-test; claw completely-bogus-subcommand --output-format json` returns `{"error":"api returned 401 Unauthorized (authentication_error) [trace req_011...]: invalid x-api-key","kind":"api_http_error"}` — proving the unknown token reached the Anthropic API endpoint as a chat prompt. With valid credentials, the bogus subcommand string would be silently consumed as a chat message, billing the user for a typo and producing whatever continuation the LLM generates. **Pre-error path:** `claw <unknown> --output-format json` with no creds returns `kind:"missing_credentials"` (the auth gate fires first), masking the routing bug. Only with creds present does the fallthrough manifest as the actual prompt being sent. **Sibling exit-code bug:** when the chat-path 401 returns, the JSON envelope is `kind:"api_http_error"` but exit code is **0**, while `cli_parse` errors (e.g. `--no-such-flag`) and `missing_credentials` errors correctly exit **1**. Exit-code parity between error envelopes is broken — automation that gates on `$?` will treat the 401-as-chat as success. 
**Required fix shape:** (a) reserve unknown top-level tokens that match no registered subcommand and emit `kind:"unknown_subcommand"` with `unknown:<token>` field and exit code 1, BEFORE the chat fallback path; (b) when a token is intended as a chat prompt, require an explicit verb (`prompt`, `chat`, `ask`) or `--prompt` flag; (c) ensure exit codes are non-zero for all `kind:*_error` envelopes; (d) regression test: `claw <bogus> --output-format json` with valid auth returns `kind:"unknown_subcommand"` exit 1, never reaches the API. **Why this matters:** automation that calls `claw <subcommand>` with a programmatically constructed verb (typo, version drift, refactored command) silently bills tokens and produces hallucinated output instead of a typed error. Cross-cluster with #108 (CLI fallthrough discovered earlier) — #422 is the post-#108 audit confirming the routing bug still bites with valid credentials. Source: Jobdori live dogfood, `b98b9a71`, 2026-05-11.
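The exit-code parity rule in #422's fix shape reduces to one predicate: any envelope whose `kind` denotes an error must exit non-zero. A hedged sketch (the kind names are illustrative, not the actual Claw taxonomy):

```python
# Error-kind to exit-code mapping for the #422 fix shape. A suffix match plus
# an explicit set covers kinds like "api_http_error" and "unknown_subcommand".
EXPLICIT_ERROR_KINDS = {"missing_credentials", "unknown_subcommand", "cli_parse"}

def exit_code_for(kind: str) -> int:
    if kind in EXPLICIT_ERROR_KINDS or kind.endswith("_error"):
        return 1  # automation gating on $? must see failure
    return 0

print(exit_code_for("api_http_error"))      # 1 (the 401-as-chat case exited 0)
print(exit_code_for("unknown_subcommand"))  # 1
print(exit_code_for("status"))              # 0
```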
423. **`claw prompt` does not read prompt text from stdin when no positional prompt arg is provided — `echo "what is 2+2" | claw prompt --output-format json` returns `kind:"unknown" error:"prompt subcommand requires a prompt string"` instead of consuming stdin** — dogfooded 2026-05-11 by Jobdori on `3c563fa1` in response to Clawhip pinpoint nudge at `1503222644739276951`. Reproduction: `echo "what is 2+2" | claw prompt --output-format json` returns `{"error":"prompt subcommand requires a prompt string","hint":null,"kind":"unknown","type":"error"}` exit 1. Same for `claw prompt --output-format json` with stdin redirected from a file. The most common Unix automation pattern (`cmd | claw prompt`) is broken because the prompt subcommand only reads the positional argument, never falls through to stdin. **Sibling envelope-kind bug:** the error `kind` is `"unknown"` instead of a typed `"missing_argument"` or `"validation_error"`. The `unknown` discriminator is the catch-all bucket — automation that switches on `kind` to differentiate input-validation errors from runtime errors gets no signal here. **Required fix shape:** (a) when `prompt` subcommand has no positional prompt arg AND stdin is not a TTY (i.e., piped or redirected), read stdin to EOF and use that as the prompt; (b) emit `kind:"missing_argument"` (not `"unknown"`) when both positional arg and stdin are absent; (c) add `--prompt-stdin` or `--stdin` opt-in flag for explicit control; (d) regression tests: `echo X | claw prompt --output-format json` reaches the runtime with prompt=X, AND `claw prompt < /dev/null` returns `kind:"missing_argument"` exit 1. **Why this matters:** Unix pipelines are the foundation of CLI automation. Every other major CLI (curl, jq, gh, kubectl) accepts stdin as the primary input when no positional arg is given. Breaking this convention forces automation to either inline the prompt as a shell-quoted string (escaping nightmare for multiline/code) or write to a temp file first.
The `kind:"unknown"` error category compounds the problem by making the failure indistinguishable from a runtime crash. Source: Jobdori live dogfood, `3c563fa1`, 2026-05-11.
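Fix shapes (a) and (b) hinge on a single check: is stdin a TTY? A minimal Python sketch (envelope fields mirror the proposal above; `io.StringIO` stands in for a pipe because it reports `isatty() == False`):

```python
import io
import sys

def resolve_prompt(positional, stdin=None):
    """Fix shapes (a)/(b): the positional prompt wins; otherwise consume
    piped/redirected stdin; emit a typed missing_argument only when both
    sources are absent."""
    stdin = stdin if stdin is not None else sys.stdin
    if positional is not None:
        return positional, None
    if not stdin.isatty():          # piped or redirected, not interactive
        text = stdin.read().strip()
        if text:
            return text, None
    return None, {"kind": "missing_argument", "argument": "prompt"}
```

Usage: `resolve_prompt(None, io.StringIO("what is 2+2"))` models `echo "what is 2+2" | claw prompt`, while an empty stream models `claw prompt < /dev/null`.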
424. **`--model` rejects bare canonical Anthropic model names (`claude-opus-4-7`, `claude-opus-4-6`, `claude-sonnet-4-6`) as `invalid_model_syntax` — only short aliases (`opus`, `sonnet`, `haiku`) and full prefixed form (`anthropic/claude-opus-4-7`) work; sibling: error message stale-suggests `claude-opus-4-6` not `4-7`** — dogfooded 2026-05-11 by Jobdori on `6c0c305a` in response to Clawhip pinpoint nudge at `1503230194889134103`. Reproduction: `claw --model claude-opus-4-7 status --output-format json` → `{"error":"invalid model syntax: 'claude-opus-4-7'. Expected provider/model (e.g., anthropic/claude-opus-4-6) or known alias (opus, sonnet, haiku)","kind":"invalid_model_syntax"}`. Same for `claude-opus-4-6`, `claude-sonnet-4-6`. Forcing `--model anthropic/claude-opus-4-7` works (`model:"anthropic/claude-opus-4-7"`, `model_source:"flag"`). Three problems compounded: (a) Anthropic-canonical model names without provider prefix are rejected even though the `claude-` prefix unambiguously identifies the provider; (b) the error suggests `anthropic/claude-opus-4-6` as the example — `4-7` shipped 2026-04-16 and is the current production Anthropic frontier model, the suggestion is one model behind; (c) the alias list `opus, sonnet, haiku` doesn't disambiguate version (which `opus` does the alias resolve to — `opus-4-6` or `opus-4-7`?). **Required fix shape:** (a) accept bare `claude-*` and `gpt-*` model names as canonical-named-without-prefix and route via name-prefix detection (already implemented for prefix-routed mode); (b) update the example in `invalid_model_syntax` error to current frontier (`anthropic/claude-opus-4-7`); (c) document or expose `opus` → exact-version mapping in the error message and in `claw doctor`/`status` output (`model_alias_resolved_to: "claude-opus-4-7"`); (d) regression test: `claw --model claude-opus-4-7 status --output-format json` returns `model_source:"flag"`, not `kind:"invalid_model_syntax"`.
**Sibling bug observed in same probe:** `enabledPlugins` deprecation warning repeats 3 times in stderr for the same `~/.claw/settings.json` load — config file is being loaded/parsed 3 times during a single `status` invocation. **Why this matters:** every Anthropic doc, every CCAPI route, every internal tooling references models by their bare canonical name (`claude-opus-4-7`). Forcing the `anthropic/` prefix breaks copy-paste from Anthropic's own examples and adds a redundant token to every invocation. The stale `4-6` suggestion in the error message actively misdirects users away from the current model. Source: Jobdori live dogfood, `6c0c305a`, 2026-05-11.
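Fix shapes (a) and (c) can be sketched as prefix detection plus a surfaced alias map. A Python sketch with an illustrative alias table and provider-prefix list (claw's real registry may differ):

```python
# Illustrative tables; versions and providers are assumptions, not claw's registry.
ALIASES = {"opus": "claude-opus-4-7", "sonnet": "claude-sonnet-4-6"}
PROVIDER_PREFIXES = {"claude-": "anthropic", "gpt-": "openai"}

def resolve_model(value):
    """Fix shapes (a)/(c): accept bare canonical names via name-prefix
    detection and surface alias resolution instead of rejecting with
    invalid_model_syntax."""
    if "/" in value:                                  # already provider-qualified
        return {"model": value, "model_alias_resolved_to": None}, None
    if value in ALIASES:                              # alias -> exact version
        resolved = ALIASES[value]
        return {"model": f"anthropic/{resolved}",
                "model_alias_resolved_to": resolved}, None
    for prefix, provider in PROVIDER_PREFIXES.items():
        if value.startswith(prefix):                  # bare canonical name
            return {"model": f"{provider}/{value}",
                    "model_alias_resolved_to": None}, None
    return None, {"kind": "invalid_model_syntax", "value": value}
```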
425. **Config file precedence (`.claw/settings.json` always wins over `.claw.json`) is undocumented in user-facing surfaces — `config --output-format json` reports both files as `loaded:true` with no `precedence_rank` or `wins_for_keys` attribution; sibling: deprecation warning fires 4× per status invocation (was 3× in #424, regression upward)** — dogfooded 2026-05-11 by Jobdori on `d7dbe951` in response to Clawhip pinpoint nudge at `1503237744451649537`. Reproduction: create `.claw.json` with `{"model":"anthropic/claude-sonnet-4-6"}` and `.claw/settings.json` with `{"model":"anthropic/claude-opus-4-7"}` in the same workspace. `claw status --output-format json` returns `model:"anthropic/claude-opus-4-7", model_source:"config"`. Reverse the files (.claw.json=opus, settings.json=sonnet) → `model:"anthropic/claude-sonnet-4-6"`. Confirmed: `.claw/settings.json` **always** wins over `.claw.json` for conflicting keys, regardless of file mtime or alphabetical order. `claw config --output-format json` reports both as `loaded:true` with no `precedence_rank`, `effective_for_keys`, or `shadowed_keys` attribution. The only signal of precedence is the final merged value in `status` — automation cannot programmatically discover which file contributed which key without re-implementing the merge logic. **Sibling bug (regression from #424):** the `enabledPlugins` deprecation warning now fires **4 times** in stderr per single `status` invocation (was 3× in #424's probe at HEAD `6c0c305a`; current HEAD `d7dbe951` shows 4×). Config load count went up by 1. 
**Sibling bug observed in config-section probe:** `claw config model --output-format json` with a `.claw.json` that contains a benign unknown key (e.g., `"alpha":"x"`) returns `{"error":"/path/.claw.json: unknown key \"alpha\" (line 1)","kind":"unknown"}` — the entire config command fails with a generic `unknown` kind instead of (a) tolerating unrecognized keys with a warning, or (b) emitting a typed `kind:"unknown_key"` error scoped to the offending file/key. **Required fix shape:** (a) document precedence order in `USAGE.md` (`.claw/settings.local.json > .claw/settings.json > .claw.json` for project scope; `user`/`system` scope at each layer); (b) add `precedence_rank:int` and optional `wins_for_keys:[string]` / `shadowed_keys:[string]` to each entry in `config --output-format json` `files[]`; (c) dedupe the deprecation warning to fire **once per discovered file** instead of N× per load pass; (d) make `config <section> --output-format json` tolerate unknown keys with warnings, OR emit `kind:"unknown_key"` with `path:` and `key:` fields scoped to the offending file. **Why this matters:** users mixing legacy `.claw.json` with new `.claw/settings.json` have no way to verify which file is actually controlling their runtime. The undocumented precedence + missing per-key attribution forces trial-and-error to debug config drift. Cross-references #407 (config files no load_error) and #415 (config section returns merged_keys count not values). Source: Jobdori live dogfood, `d7dbe951`, 2026-05-11.
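Fix shape (b) is a straightforward attribution pass over the merge. A Python sketch, with layers ordered lowest to highest precedence (field names follow the proposal above, not any existing claw schema):

```python
def merge_with_attribution(layers):
    """Fix shape (b): merge config layers and report precedence_rank,
    wins_for_keys, and shadowed_keys per file, so automation does not have
    to re-implement the merge to discover which file controls which key."""
    effective, winner = {}, {}
    for path, data in layers:                 # later layers override earlier
        for key, value in data.items():
            effective[key] = value
            winner[key] = path
    files = []
    for rank, (path, data) in enumerate(layers):
        files.append({
            "path": path,
            "loaded": True,
            "precedence_rank": rank,
            "wins_for_keys": sorted(k for k in data if winner[k] == path),
            "shadowed_keys": sorted(k for k in data if winner[k] != path),
        })
    return effective, files
```

Modeling the probe above (`.claw.json` sets sonnet, `.claw/settings.json` sets opus), the attribution makes the winner explicit instead of observable only through the merged `status` value.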
426. **`ANTHROPIC_MODEL` env var bypasses the `invalid_model_syntax` validator that `--model` enforces — bogus model strings are accepted with `status:"ok"`, deferred-failing only when the first API call is made** — dogfooded 2026-05-11 by Jobdori on `3730b459` in response to Clawhip pinpoint nudge at `1503245298800136296`. Reproduction (asymmetric validation): `claw --model bogus-model-xyz status --output-format json` returns `kind:"invalid_model_syntax"` exit 1; `ANTHROPIC_MODEL=bogus-model-xyz claw status --output-format json` returns `model:"bogus-model-xyz", model_raw:"bogus-model-xyz", model_source:"env", status:"ok"` — the doctor surface lies that the configured model is valid when it is not. The bogus model only manifests as a failure when the first prompt fires and the API rejects it with 404/400. Three sibling discoveries in the same probe: (a) **alias indirection invisible**: `ANTHROPIC_MODEL=opus claw status --output-format json` returns `model:"claude-opus-4-6", model_raw:"opus", model_source:"env"` — the `opus` alias resolves to `claude-opus-4-6` (the *previous* frontier, not the current `claude-opus-4-7` released 2026-04-16). Users typing `opus` get yesterday's model with no warning. (b) **`CLAW_MODEL` env var silently ignored**: `CLAW_MODEL=opus claw status` shows `model:"claude-opus-4-6" model_source:"default"` — the `CLAW_MODEL` env var (the project-namespaced equivalent that users expect) does not exist; only `ANTHROPIC_MODEL` is honored. No warning when a `CLAW_*` env var that looks like it should work is set. (c) **`ANTHROPIC_DEFAULT_MODEL` also silently ignored**: the longer-named env var that some Anthropic SDKs use is not recognized. 
**Required fix shape:** (a) symmetric validation: `ANTHROPIC_MODEL` env value must pass the same `invalid_model_syntax` check that `--model` does, and `claw status` must return `kind:"invalid_model"` / `status:"warn"` (not `status:"ok"`) when the resolved model is unrecognized; (b) expose alias resolution in `status`: add `model_alias_resolved_to:string|null` field so automation can see `opus → claude-opus-4-6`; (c) bump the `opus` alias to `claude-opus-4-7` (current frontier) or document the alias-to-version mapping policy explicitly; (d) accept `CLAW_MODEL` and `ANTHROPIC_DEFAULT_MODEL` env vars with parity to `ANTHROPIC_MODEL`, OR emit a warning when those env vars are set but unrecognized. **Why this matters:** the most common automation pattern is `export ANTHROPIC_MODEL=...` in a shell rc file. Bogus values pass silently, alias indirection hides the actual model in use, and `CLAW_MODEL` looking like a working name but doing nothing is a footgun. Cross-references #424 (bare canonical names rejected at validator level) — together #424 + #426 make model selection inconsistent across CLI flag, env var, and alias paths. Source: Jobdori live dogfood, `3730b459`, 2026-05-11.
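Fix shapes (a) and (d) amount to funneling every source through one validator. A Python sketch; the fallback default model, the env-var parity list, and the validator are illustrative:

```python
def select_model(flag_value, env, is_valid):
    """Fix shapes (a)/(d): the env path runs the same validator as the flag
    path, and invalid values degrade status to "warn" instead of reporting
    "ok". CLAW_MODEL parity is the proposed behavior, not current claw."""
    candidates = (("flag", flag_value),
                  ("env", env.get("ANTHROPIC_MODEL")),
                  ("env", env.get("CLAW_MODEL")))
    for source, value in candidates:
        if value is None:
            continue
        status = "ok" if is_valid(value) else "warn"   # never "ok" for bogus
        return {"model": value, "model_source": source, "status": status}
    # Illustrative default; claw's actual default resolution differs.
    return {"model": "claude-opus-4-7", "model_source": "default", "status": "ok"}
```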
427. **Subcommand `--help` paths (`resume`, `session`, `compact`) hit the auth gate and trigger config validation before returning static help — `claw resume --help` with no credentials returns `missing_credentials` error instead of help text** — dogfooded 2026-05-11 by Jobdori on `1fecdf09` in response to Clawhip pinpoint nudge at `1503252843669491892`. Reproduction (no env vars, isolated `CLAW_CONFIG_HOME`): `claw resume --help` returns `{"error":"missing Anthropic credentials; export ANTHROPIC_AUTH_TOKEN or ANTHROPIC_API_KEY..."}` instead of usage text. Same for `claw session --help`, `claw compact --help`. By contrast, `claw prompt --help` and `claw --help` (top-level) return proper usage text without auth. Even worse: with a broken `.claw.json` discovered up the parent directory tree (e.g., `mcpServers.missing-command: missing string field command`), the subcommand `--help` paths fail with `[error-kind: unknown]` from config validation — config load is happening before `--help` is parsed. **Sibling exit-code bug:** `claw resume --help --output-format json` returns `kind:"missing_credentials"` but exits **0** (the exit-code parity bug from #422 reproduces on this path too — only `cli_parse` exits 1 consistently). **Sibling: `claw resume <bogus-id>` should be local-only** but also hits `missing_credentials`. A `resume` of a session that doesn't exist on disk should return `kind:"session_not_found"` from a local lookup, not require API credentials. Same class as ROADMAP #357 (session list requires creds) and #369 (session help/fork require credentials) — now confirmed for `resume`.
**Required fix shape:** (a) `--help` MUST short-circuit before any auth check, config load, or session resolution — emit static usage text from a compiled-in string table, no I/O; (b) `resume <id>` must check the local session store first; if the id is absent on disk, emit `kind:"session_not_found"` with `sessions_dir` field; only require auth when resuming a known-on-disk session that requires re-establishing API context; (c) ensure exit code 1 for all error envelopes including `missing_credentials` returned from a `--help` path that should never have reached the auth gate; (d) regression test: with empty `CLAW_CONFIG_HOME` and no env vars, every `claw <subcommand> --help` returns usage text on stdout, exit 0, no `kind:*_error` envelope. **Why this matters:** `--help` is the universal CLI discovery primitive. Failing `--help` because of missing API credentials or broken config files makes claw undiscoverable to users debugging an already-broken setup. Cross-references #357 (session list), #369 (session help/fork), #422 (exit code parity), #108 (subcommand fallthrough). Source: Jobdori live dogfood, `1fecdf09`, 2026-05-11.
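Fix shape (a) is an argv pre-scan that returns compiled-in text before any I/O. A Python sketch with a hypothetical usage table:

```python
# Illustrative compiled-in usage strings; real text would live in a static table.
USAGE = {
    "resume": "usage: claw resume <session-id>",
    None: "usage: claw <subcommand> [options]",
}

def dispatch(argv, require_auth):
    """Fix shape (a): --help short-circuits before the auth gate, config
    load, or session resolution. `require_auth` stands in for the credential
    check that only non-help paths may reach."""
    subcommand = next((a for a in argv if not a.startswith("-")), None)
    if "--help" in argv or "-h" in argv:
        return USAGE.get(subcommand, USAGE[None]), 0   # no I/O at all
    require_auth()
    return f"running {subcommand}", 0
```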
428. **Default `permission_mode` is `danger-full-access` — claw runs with FULL filesystem + network + tool access out of the box, with no opt-in flag and no warning from `doctor`** — dogfooded 2026-05-11 by Jobdori on `72048449` in response to Clawhip pinpoint nudge at `1503260393622212628`. Reproduction (no env vars, isolated `CLAW_CONFIG_HOME`, no config files, no CLI flags): `claw status --output-format json` returns `permission_mode:"danger-full-access"` as the default. The three supported modes per the validator error message are `read-only`, `workspace-write`, `danger-full-access` — and `danger-full-access` is chosen with zero user opt-in. `claw doctor --output-format json` produces a `sandbox` check with `status:"warn", summary:"sandbox was requested but is not currently active"` (because macOS lacks Linux `unshare`), but **emits no warning, info, or summary about the permission_mode itself being danger-full-access**. There is no `permissions` check in `doctor` output at all. **Required fix shape:** (a) change default `permission_mode` to `workspace-write` (safe-by-default: filesystem write limited to cwd, network limited to LLM endpoints, no arbitrary command exec); (b) require explicit `--permission-mode danger-full-access` or `--dangerously-skip-permissions` to opt into full access; (c) add a `permissions` check to `doctor --output-format json` that emits `status:"warn"` when `permission_mode == "danger-full-access"` without explicit source (flag/env/config), with details like `mode:"danger-full-access", source:"default", message:"running with full access without explicit opt-in"`; (d) document the three modes and the default in USAGE.md with one-paragraph descriptions of what each mode allows. **Sibling typed-error bug:** `claw --permission-mode bogus-mode status --output-format json` returns `kind:"unknown"` instead of `kind:"invalid_permission_mode"` — same catch-all problem as #424, #426. 
**Sibling flag-name asymmetry:** `--dangerously-skip-permissions` works but `--skip-permissions` (Claude Code's flag) returns `kind:"cli_parse"` `unknown option`. Users migrating from Claude Code lose the short flag name. **Why this matters:** every other security-conscious CLI (Docker, kubectl, terraform) requires explicit opt-in for dangerous modes. Defaulting to `danger-full-access` is a footgun for first-time users who pipe `curl install.sh | sh` and immediately get a tool with full filesystem write and arbitrary command exec. The doctor surface is the only diagnostic users consult before trusting the tool, and it stays silent about the most permissive setting. Cross-references #50, #87, #91, #94, #97, #101, #106, #115, #123 (permission-audit sweep) — those all cover permission *rule* and *list* surfaces; #428 covers the *mode default* itself. Source: Jobdori live dogfood, `72048449`, 2026-05-11.
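Fix shape (c) could look like this as one entry in doctor's check list (field names follow the proposal above, not an existing claw schema):

```python
def permissions_check(mode, source):
    """Fix shape (c): a doctor check that warns when danger-full-access is
    active without an explicit opt-in (source is "flag", "env", "config",
    or "default")."""
    if mode == "danger-full-access" and source == "default":
        return {"name": "permissions", "status": "warn",
                "mode": mode, "source": source,
                "message": "running with full access without explicit opt-in"}
    # Explicitly chosen modes (any source but "default") pass without warning.
    return {"name": "permissions", "status": "ok",
            "mode": mode, "source": source}
```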
429. **No global `--cwd`/`-C`/`--directory` flag — `claw` cannot be invoked against an arbitrary working directory without first `cd`-ing into it; `--cwd` only exists as a subcommand option for `system-prompt`, and the `cli_parse` "Did you mean --acp?" suggestion is misleading (the `--acp` flag is unrelated to directory selection)** — dogfooded 2026-05-11 by Jobdori on `ec882f4c` in response to Clawhip pinpoint nudge at `1503267943285264394`. Reproduction: `claw --cwd /tmp/claw-dog-cwd status --output-format json` → `{"error":"unknown option: --cwd","hint":"Did you mean --acp?\nRun `claw --help` for usage.","kind":"cli_parse"}`. Same error for `--cwd <relative>`, `--cwd <nonexistent>`, `--cwd <file-not-dir>`, `--cwd ""`. Inspecting `claw --help`: `--cwd PATH` appears ONLY in the usage line `claw system-prompt [--cwd PATH] [--date YYYY-MM-DD]` — it is not a global flag and is not accepted by `status`, `doctor`, `mcp list`, `init`, or any other subcommand. Users programmatically running claw against multiple workspaces must `cd` into each one before invoking, breaking the `subprocess.run(['claw', 'status', '--cwd', ws], cwd=other_dir)` pattern that every other major CLI (cargo `-C`, git `-C`, npm `--prefix`, gh `--repo` semantically, kubectl `--kubeconfig`+`--context`) supports. **Sibling misleading-suggestion bug:** the `cli_parse` error's `hint` field suggests `Did you mean --acp?` for `--cwd`. `--acp` is the alias for ACP/Zed editor integration (entirely unrelated to working directory). The Levenshtein-distance auto-complete is matching on first-character similarity without considering semantic relatedness. Users following the hint get a totally orthogonal feature.
**Required fix shape:** (a) add a global `--cwd PATH` / `-C PATH` flag accepted before any subcommand, parsed in the global flag pre-pass; (b) validate the path exists and is a directory; emit `kind:"invalid_cwd"` with `path:` and `reason:` (`"not_found"`/`"not_a_directory"`/`"empty"`) when validation fails; (c) document the precedence: `--cwd` flag > `$PWD` > `env::current_dir()`; (d) fix the "Did you mean" hint algorithm to filter suggestions by semantic category (don't suggest `--acp` for `--cwd`; suggest `claw system-prompt --cwd PATH` if the user clearly wants `cwd` override but used the wrong scope); (e) regression test: `claw --cwd /tmp status --output-format json` from any `$PWD` returns `workspace.cwd:"/private/tmp"` (or `cwd:"/tmp"` after #421 fix). **Why this matters:** every claw automation orchestrator runs claw against multiple workspaces from a single parent process. Forcing `cd` before each invocation breaks parallelism (can't use shared cwd across concurrent invocations), breaks subprocess wrappers that want to pass cwd explicitly, and breaks `xargs`/`parallel`-style pipelines. Cross-references #421 (cwd canonicalization leak — fix should canonicalize but report user-input via `--cwd`). Source: Jobdori live dogfood, `ec882f4c`, 2026-05-11.
430. **`dump-manifests` is documented as "emit every skill/agent/tool manifest the resolver would load for the current cwd" but actually requires the upstream Claude Code TypeScript source files (`src/commands.ts`, `src/tools.ts`, `src/entrypoints/cli.tsx`) — the command is unusable for any user who installed claw without cloning the original Claude Code repo** — dogfooded 2026-05-11 by Jobdori on `075c2144` in response to Clawhip pinpoint nudge at `1503275502046023690`. Reproduction: `claw dump-manifests --output-format json` returns `{"error":"Manifest source files are missing.","hint":"repo root: /private/tmp/claw-dog-0530\n missing: src/commands.ts, src/tools.ts, src/entrypoints/cli.tsx\n Hint: set CLAUDE_CODE_UPSTREAM=/path/to/upstream or pass \`claw dump-manifests --manifests-dir /path/to/upstream\`.","kind":"missing_manifests"}`. The fresh-main worktree at `/private/tmp/claw-dog-0530` does not contain these TypeScript files because the Rust port doesn't include the upstream TS source. The `--help` text says the command works against "the current cwd" but in practice it requires `CLAUDE_CODE_UPSTREAM=` pointing at an unshipped TS source tree. **Three sibling problems compounded:** (a) **derivative-work disclosure leak**: the error message exposes that `claw-code` is a port of Claude Code (`CLAUDE_CODE_UPSTREAM` env var name) — even if true, surfacing this in a casual diagnostic message couples user-facing behavior to upstream provenance details. (b) **kind drift**: `claw dump-manifests --manifests-dir /tmp/nonexistent --output-format json` returns `kind:"unknown"`, while `claw dump-manifests` (no override) returns `kind:"missing_manifests"`. Same root cause (no usable upstream), two different `kind` discriminators — automation cannot switch on a single error type. 
(c) **export-positional-arg silently dropped**: probed in the same run — `claw export <bogus-positional>` ignores the path and returns `kind:"no_managed_sessions"` regardless of what positional arg was passed. The `--help` advertises `[PATH]` as the output-file destination but the path is discarded before validation, indistinguishable from invocation with no args. **Required fix shape:** (a) make `dump-manifests` emit the manifests claw-code itself ships with (Rust-resolver-discovered skills/agents/tools), independent of any upstream TS source — that matches the `--help` description; (b) if upstream-comparison is genuinely needed for parity work, move it to a separate command like `parity dump-upstream-manifests` and remove the upstream dependency from `dump-manifests`; (c) standardize on one error `kind` for the manifest-missing failure mode (`missing_manifests` is more descriptive than `unknown`); (d) `claw export <PATH>` must validate the path positional arg before the session-discovery check, so users see `kind:"invalid_output_path"` (or similar) when the path is malformed instead of always seeing `kind:"no_managed_sessions"`. **Why this matters:** `dump-manifests` is the inventory surface a downstream automation lane would call to learn what claw can do in the current workspace. If it's broken without upstream TS source, downstream lanes can't introspect — they have to fall back to `agents list`/`skills list`/`mcp list` separately and re-aggregate. Cross-references #422 (kind:unknown for unknown_subcommand), #423 (kind:unknown for missing_argument), #428 (kind:unknown for invalid_permission_mode) — `kind:"unknown"` keeps appearing as the catch-all for surfaces that should have typed kinds. Source: Jobdori live dogfood, `075c2144`, 2026-05-11.
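Fix shape (d) for `export` is an ordering change: validate the path positional before the session-discovery check. A Python sketch with hypothetical envelope fields:

```python
import os

def export_session(path, sessions):
    """Fix shape (d): a malformed output path gets its own typed kind
    instead of being discarded and falling through to no_managed_sessions."""
    if path is not None:
        parent = os.path.dirname(path) or "."
        if not os.path.isdir(parent):          # path checked FIRST
            return {"kind": "invalid_output_path", "path": path,
                    "reason": "parent_not_found"}
    if not sessions:
        return {"kind": "no_managed_sessions"}
    return {"kind": "export", "path": path, "count": len(sessions)}
```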
431. **`skills uninstall <name>` requires Anthropic credentials despite being a local filesystem operation — `claw skills uninstall nonexistent-skill-xyz --output-format json` returns `kind:"missing_credentials"` instead of resolving locally that the skill doesn't exist** — dogfooded 2026-05-11 by Jobdori on `328fd114` in response to Clawhip pinpoint nudge at `1503275502046023690` (sibling probe to #430). Reproduction (no creds, isolated `CLAW_CONFIG_HOME`): `claw skills uninstall nonexistent-skill-xyz --output-format json` returns `{"error":"missing Anthropic credentials; export ANTHROPIC_AUTH_TOKEN or ANTHROPIC_API_KEY...","kind":"missing_credentials"}`. Uninstalling a skill is a pure local filesystem operation: read the skills directory, find the named skill, remove its files. There is no semantic reason to require API credentials. Same class of bug as #357 (`session list` requires creds), #369 (`session help/fork` require creds), and #427 (`resume <bogus-id>` requires creds). **Three sibling findings in same probe:** (a) `claw skills install <bogus-name>` returns `{"error":"No such file or directory (os error 2)","kind":"unknown"}` — leaks raw OS error string with no hint about expected install source format (path vs name vs URL?), and the catch-all `kind:"unknown"` again instead of typed `kind:"skill_install_source_not_found"`. (b) `claw skills install` (no args) returns `action:"help"` with `unexpected:"install"` — but `install` IS a documented subcommand. The handler treats it as "unknown action" instead of "missing required argument". Should emit `kind:"missing_argument"` with `argument:"install_source"`. (c) `claw agents create my-agent` returns `action:"help"` with `unexpected:"create my-agent"` — there is no agent-creation surface at all. Users must hand-craft `.claw/agents/<name>.md` files with no scaffolding command, while `claw init` only creates the top-level `.claw/` skeleton. 
**Required fix shape:** (a) `skills uninstall <name>` must be local-first: enumerate the local skills dir, return `kind:"skill_not_found"` (with `skills_dir:` and `available_names:[]` fields) for missing, or remove the files and return `kind:"skills"` with `action:"uninstall", removed:<name>` for present skills; (b) `skills install <source>` must distinguish source forms (`path:`, `name:`, `url:`) and emit `kind:"invalid_install_source"` with the parsed-and-failed reason; (c) `skills install` (no args) emits `kind:"missing_argument"` with `argument:"install_source"`; (d) add `claw agents create <name>` (or `claw init agent <name>`) that scaffolds `.claw/agents/<name>.md` with a stub frontmatter; or document explicitly that agents are user-authored only. **Why this matters:** lifecycle commands (`uninstall`, `install`, `create`) are the primary surface for managing claw's extension surface area. If `uninstall` requires API creds, an offline user who fat-fingered an install can't undo it. If `install` returns a raw OS error, automation can't programmatically recover. If `agents create` doesn't exist, agent authoring is undocumented file-touching only. Cross-references #357, #369, #427 (auth-gate-on-local-ops cluster), and #422/#423/#428/#430 (`kind:"unknown"` catch-all cluster). Source: Jobdori live dogfood, `328fd114`, 2026-05-11.
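Fix shape (a) needs only directory enumeration and removal, no network. A Python sketch using the envelope fields proposed above:

```python
import os
import shutil

def uninstall_skill(name, skills_dir):
    """Fix shape (a): resolve uninstall locally; no credentials are involved
    in enumerating or removing files under the skills directory."""
    available = sorted(os.listdir(skills_dir)) if os.path.isdir(skills_dir) else []
    if name not in available:
        return {"kind": "skill_not_found", "skills_dir": skills_dir,
                "available_names": available}, 1
    shutil.rmtree(os.path.join(skills_dir, name))   # pure filesystem operation
    return {"kind": "skills", "action": "uninstall", "removed": name}, 0
```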
432. **`--allowedTools` validator inconsistency: tool name list is half snake_case (`bash`, `read_file`, `write_file`, `edit_file`, `glob_search`, `grep_search`) and half PascalCase (`WebFetch`, `WebSearch`, `TodoWrite`, `Skill`, `Agent`, `Sleep`) with three UPPERCASE entries (`REPL`, `LSP`, `MCP`); accepts undocumented CamelCase aliases (`Read`, `Write`, `Edit`) and silently translates them to snake_case; argument parsing consumes the next positional when value is missing** — dogfooded 2026-05-11 by Jobdori on `fad53e2d` in response to Clawhip pinpoint nudge at `1503283046856655029`. Reproduction: `claw --allowedTools status --output-format json` → `{"error":"unsupported tool in --allowedTools: status (expected one of: bash, read_file, write_file, edit_file, glob_search, grep_search, WebFetch, WebSearch, TodoWrite, Skill, Agent, ToolSearch, NotebookEdit, Sleep, SendUserMessage, Config, EnterPlanMode, ExitPlanMode, StructuredOutput, REPL, PowerShell, AskUserQuestion, TaskCreate, RunTaskPacket, TaskGet, TaskList, TaskStop, TaskUpdate, TaskOutput, WorkerCreate, WorkerGet, WorkerObserve, WorkerResolveTrust, WorkerAwaitReady, WorkerSendPrompt, WorkerRestart, WorkerTerminate, WorkerObserveCompletion, TeamCreate, TeamDelete, CronCreate, CronDelete, CronList, LSP, ListMcpResources, ReadMcpResource, McpAuth, RemoteTrigger, MCP, TestingPermission)","kind":"unknown"}`. The `status` subcommand was consumed as the `--allowedTools` value because the flag parser doesn't distinguish missing-value from end-of-flag-args.
The error reveals **the supported tool list mixes naming conventions inconsistently within a single error message**: snake_case (`bash`, `read_file`, `write_file`, `edit_file`, `glob_search`, `grep_search`), PascalCase (`WebFetch`, `WebSearch`, `TodoWrite`, `Skill`, `Agent`, `Sleep`, `Config`, `PowerShell`, `AskUserQuestion`, `TaskCreate`, `WorkerCreate`, `TeamCreate`, `CronCreate`), UPPERCASE (`REPL`, `LSP`, `MCP`), and CamelCase compounds (`McpAuth`, `RemoteTrigger`). **Hidden alias mapping:** `claw --allowedTools Read,Write,Edit status --output-format json` is accepted and returns `allowed_tools.entries:["edit_file","read_file","write_file"]` — proving the validator has an undocumented CamelCase→snake_case alias map (`Read` → `read_file`, `Write` → `write_file`, `Edit` → `edit_file`) that is not surfaced in the error message. Users who copy-paste tool names from Claude Code documentation work, users who copy from the validator error don't. **Sibling missing-value bug:** `claw --allowedTools status` with `status` as a positional subcommand is interpreted as `--allowedTools=status`, swallowing the subcommand. The flag parser must require a value for `--allowedTools` and emit `kind:"missing_argument"` when followed by a recognized subcommand or `--`-prefixed flag instead of silently treating the next arg as a tool name. **Sibling typed-kind bug:** both errors use `kind:"unknown"` instead of typed `kind:"invalid_tool_name"` / `kind:"missing_argument"` — the catch-all keeps appearing (#422/#423/#424/#428/#430/#431/#432).
**Required fix shape:** (a) standardize the canonical tool-name registry on one casing convention (snake_case is most CLI-ergonomic) and update both the registry and all CamelCase aliases; (b) document and expose the alias map (`tool_aliases:{Read:"read_file",...}`) in `claw doctor`/`status` and in the validator error; (c) flag parser must require a value for `--allowedTools` and refuse to consume a recognized subcommand or `-`/`--`-prefixed token as the value, emit `kind:"missing_argument"` with `argument:"--allowedTools"`; (d) emit `kind:"invalid_tool_name"` with `tool_name:` and `available:[]` fields instead of `kind:"unknown"`; (e) regression test that `claw --allowedTools <subcommand>` rejects with `missing_argument`, and that the canonical name list in errors uses the same casing as the alias map. **Why this matters:** `--allowedTools` is the primary surface for restricting claw's tool surface area (security-relevant). Inconsistent naming between the validator error and the alias map means users following the error message guidance pick names that work in some places and fail in others. The missing-value bug silently swallows a subcommand, leading to confusing "unsupported tool: status" errors when the user actually wanted to run `claw status`. Cross-references #94/#97/#101/#106/#115/#123 (permission-rule audit), #428 (default permission_mode), #422/#423/#424/#428/#430/#431 (`kind:"unknown"` catch-all). Source: Jobdori live dogfood, `fad53e2d`, 2026-05-11.
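Fix shapes (b), (c), and (d) can be sketched together: a value-requiring parser, a surfaced alias map, and typed kinds. The subcommand set, alias map, and (truncated) canonical registry below are illustrative:

```python
# Illustrative tables; claw's real registry is far larger.
SUBCOMMANDS = {"status", "doctor", "prompt", "config"}
TOOL_ALIASES = {"Read": "read_file", "Write": "write_file", "Edit": "edit_file"}
CANONICAL = {"bash", "read_file", "write_file", "edit_file",
             "glob_search", "grep_search"}

def parse_allowed_tools(argv):
    """Fix shapes (b)/(c)/(d): require a real value (never consume a
    subcommand or dash-prefixed token), normalize documented aliases, and
    emit typed kinds instead of kind:"unknown"."""
    i = argv.index("--allowedTools")          # caller guarantees presence
    value = argv[i + 1] if i + 1 < len(argv) else None
    if value is None or value in SUBCOMMANDS or value.startswith("-"):
        return None, {"kind": "missing_argument", "argument": "--allowedTools"}
    tools = []
    for name in value.split(","):
        name = TOOL_ALIASES.get(name, name)   # surface-able alias map
        if name not in CANONICAL:
            return None, {"kind": "invalid_tool_name", "tool_name": name,
                          "available": sorted(CANONICAL)}
        tools.append(name)
    return sorted(tools), None
```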
433. **Repeated `--output-format` flag silently takes the last value without warning — `claw --output-format json --output-format text status` produces text output, no signal that the prior `json` was overridden; sibling: `--output-format` value is case-sensitive (`JSON` rejected as `kind:"unknown"`); sibling: no `CLAW_OUTPUT_FORMAT` env var for default format override** — dogfooded 2026-05-11 by Jobdori on `ce39d5c5` in response to Clawhip pinpoint nudge at `1503290592556220488`. Reproduction: `claw --output-format json --output-format text status` returns the text-format `Status\n Model claude-opus-4-6...` table — the first `--output-format json` was silently overridden. No warning, no `format_overridden:true` field, no stderr message. Scripts that compose flag arrays from multiple sources (`flags=("${BASE_FLAGS[@]}" --output-format json)` while `BASE_FLAGS` already contains `--output-format text`) silently get the wrong format. **Three sibling findings in same probe:** (a) **case-sensitivity drift**: `claw --output-format JSON status` returns `{"error":"unsupported value for --output-format: JSON (expected text or json)","kind":"unknown"}` — error message tells user to use lowercase `json` but doesn't accept the uppercase form that users often type from muscle memory. Most CLI flag-value validators (cargo, kubectl, gh) are case-insensitive for enum values or accept both forms with normalization. (b) **`kind:"unknown"` for invalid format value**: same catch-all bucket bug as #422/#423/#424/#428/#430/#431/#432 — should be `kind:"invalid_output_format"` with `value:` and `expected:["text","json"]` fields. (c) **no env-var default for output format**: `CLAW_OUTPUT_FORMAT=json claw status` silently ignored — no env override for the global default, forcing scripts to repeat `--output-format json` on every invocation. Other major CLIs honor `KUBECTL_OUTPUT=`, `AWS_DEFAULT_OUTPUT=`, `GH_NO_PROMPT=` etc. 
(d) **silently-ignored env vars `CLAW_LOG`/`RUST_LOG`**: no env-based log level control surfaced in `claw doctor` — debug logging requires undocumented `RUST_LOG=` (Rust convention) but `claw --help` doesn't mention either. **Required fix shape:** (a) repeated `--output-format` (or any flag that takes a value, not a count flag) emits a warning to stderr (`warning: --output-format specified multiple times; using last value 'text'`) and adds a `format_source:"flag", format_overridden:[]` field to the JSON envelope; (b) accept case-insensitive enum values for `--output-format` (`JSON`, `Json`, `json` all work), document the canonical lowercase form in `--help`; (c) emit `kind:"invalid_output_format"` (not `kind:"unknown"`) when value is invalid; (d) accept `CLAW_OUTPUT_FORMAT` env var as the default for `--output-format`, with flag-overrides-env precedence documented; (e) document `RUST_LOG` / `CLAW_LOG` in `--help` or doctor output as the log-level env vars; (f) regression test: repeated flag emits stderr warning + JSON metadata field; case-insensitive enum accepts all three casings; env-var default is honored when flag is absent. **Why this matters:** scripts that compose flag arrays from multiple sources (CI envs + per-invocation flags) silently get the wrong output format. Case-sensitive enum values trip up users typing from muscle memory. Missing env-var defaults force per-invocation flag repetition. Cross-references #422/#423/#424/#428/#430/#431/#432 (`kind:"unknown"` catch-all cluster). Source: Jobdori live dogfood, `ce39d5c5`, 2026-05-11.
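Fix shapes (a), (b), and (d) combine into one resolution function. A Python sketch; `CLAW_OUTPUT_FORMAT` is the proposed (not existing) env var, and the override metadata fields follow the proposal above:

```python
def resolve_output_format(argv, env):
    """Fix shapes (a)/(b)/(d): last flag wins but the override is recorded,
    enum values are case-insensitive, and the env var supplies the default
    when no flag is present."""
    values = [argv[i + 1] for i, a in enumerate(argv)
              if a == "--output-format" and i + 1 < len(argv)]
    overridden = values[:-1]                            # earlier, shadowed values
    raw = values[-1] if values else env.get("CLAW_OUTPUT_FORMAT", "text")
    fmt = raw.lower()                                   # JSON / Json / json all work
    if fmt not in ("text", "json"):
        return None, {"kind": "invalid_output_format", "value": raw,
                      "expected": ["text", "json"]}
    source = "flag" if values else ("env" if "CLAW_OUTPUT_FORMAT" in env else "default")
    return {"format": fmt, "format_source": source,
            "format_overridden": overridden}, None
```

A real implementation would also print the stderr warning from fix shape (a) when `format_overridden` is non-empty; that side effect is omitted here.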
434. **POSIX `--` end-of-flags separator is not recognized — `claw -- "-prompt-with-dash"` returns `{"error":"unknown option: --","hint":"Did you mean -V?","kind":"cli_parse"}` instead of treating subsequent args as positional; shorthand prompt mode cannot accept dash-prefixed prompts at all** — dogfooded 2026-05-11 by Jobdori on `0e5f6958` in response to Clawhip pinpoint nudge at `1503298142286905484`. Reproduction: `claw -- "-prompt-with-dash" --output-format json` returns `{"error":"unknown option: --","hint":"Did you mean -V?\nRun \`claw --help\` for usage.","kind":"cli_parse"}`. The POSIX/GNU CLI convention — universally honored by cargo, git, npm, gh, kubectl, grep, ls, find, etc. — is that `--` terminates flag parsing and treats everything after it as positional arguments. claw rejects `--` itself as an unknown flag. **Sibling misleading-suggestion bug (recurring from #429):** the `cli_parse` hint suggests `Did you mean -V?` for `--`. `-V` is the version flag; `--` is the end-of-flags separator. They have no semantic relationship; the auto-complete is matching on prefix-character similarity only. **Sibling shorthand-prompt limitation:** `claw "-just a prompt" --output-format json` returns `{"error":"unknown option: -just a prompt","kind":"cli_parse"}` and `claw "--bogus-flag-like" --output-format json` returns the same. The shorthand non-interactive prompt mode (documented as `claw [--model MODEL] [--output-format text|json] TEXT`) cannot accept any TEXT that starts with `-` or `--`, even when the entire string is shell-quoted as a single token. Users must use the explicit `prompt` verb (`claw prompt "-prompt-with-dash"` works) to escape this, but the explicit verb is documented as alternative not required. 
**Required fix shape:** (a) accept POSIX `--` as the end-of-flags marker globally — every arg after `--` is positional; (b) shorthand prompt mode must distinguish "this looks like a flag" from "this is a quoted positional that happens to start with `-`" by looking at whether the token matches any registered flag name (`-h`, `-V`, `--help`, `--version`, etc.) — strings that don't match any flag should be treated as prompt text; (c) fix the "Did you mean" hint algorithm to filter by semantic category (don't suggest `-V` for `--`, suggest "use \`--\` to terminate flag parsing" if the user types just `--`); (d) regression test: `claw -- "-foo"` reaches the runtime with prompt=`-foo`; `claw "-not-a-flag"` is treated as shorthand prompt when no registered flag matches; canonical `--` is recognized. **Why this matters:** POSIX `--` is the universal mechanism for passing arbitrary text (filenames starting with `-`, prompts containing flag-like syntax, log lines, etc.) to a CLI. Failing on `--` makes claw fundamentally unergonomic in shell pipelines (`echo "-q for quiet" | xargs claw` fails). The shorthand-prompt limitation forces users to remember the `prompt` verb specifically when their prompt happens to start with `-`. Cross-references #422 (unknown subcommand fallthrough), #423 (stdin not consumed by prompt), #429 ("Did you mean --acp" misleading suggestion). Source: Jobdori live dogfood, `0e5f6958`, 2026-05-11.
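Fix shapes (a) and (b) amount to one pass over the argument list. A sketch under the stated assumptions (the registered-flag list and function names here are illustrative; flag-value consumption is omitted for brevity):

```rust
/// Split args into flags and positionals: `--` ends flag parsing, and a
/// dash-prefixed token that matches no registered flag is treated as prompt
/// text instead of being rejected.
fn split_args(args: &[&str], registered: &[&str]) -> (Vec<String>, Vec<String>) {
    let mut flags = Vec::new();
    let mut positionals = Vec::new();
    let mut no_more_flags = false;
    for &arg in args {
        if no_more_flags {
            positionals.push(arg.to_string());
        } else if arg == "--" {
            // POSIX end-of-flags marker: everything after is positional.
            no_more_flags = true;
        } else if arg.starts_with('-') && registered.contains(&arg) {
            flags.push(arg.to_string());
        } else {
            // Dash-prefixed but unregistered: quoted prompt text, not an error.
            positionals.push(arg.to_string());
        }
    }
    (flags, positionals)
}

fn main() {
    let registered = ["-h", "-V", "--help", "--version", "--output-format"];
    let (flags, positionals) = split_args(&["--", "-prompt-with-dash"], &registered);
    println!("flags={flags:?} positionals={positionals:?}");
}
```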
435. **`claw --resume latest` exits 0 in text mode but 1 in JSON mode on a fresh workspace (text mode lies about success); sibling: failed `--resume` creates the `.claw/sessions/<fingerprint>/` directory tree as a filesystem side effect of the failure** — dogfooded 2026-05-11 by Jobdori on `e29010ed` in response to Clawhip pinpoint nudge at `1503305692566655096`. Reproduction (fresh empty dir, no `.claw/`, no sessions): `claw --resume latest` (text mode) prints `failed to restore session: no managed sessions found in .claw/sessions/0ead448127a2de44/` and exits **0**. Same invocation with `--output-format json` correctly exits **1** with `kind:"session_load_failed"`. Exit-code parity is broken on the same input depending on the format flag. **Sibling filesystem-side-effect bug:** after the failed `--resume latest` on a fresh empty workspace, the directory `.claw/sessions/0ead448127a2de44/` (the workspace-fingerprint partition) is created on disk despite the operation failing. The user did not opt into creating workspace metadata — they asked to resume an existing session, the resume failed, and now there's a partition directory hanging around. The fingerprint directory ought to be created lazily on first successful session save, not as a side effect of every resume attempt. **Three sibling findings in the same probe:** (a) **`claw --compact` alone (no other args) drops into the interactive REPL with the ANSI welcome banner** — `--compact` is documented as a modifier that strips tool call details in text mode for piping (`--compact ... useful for piping`), not as a verb that activates the REPL. Running `claw --compact` with no positional should be a no-op or an error explaining the flag needs a subcommand or prompt; entering the REPL is the wrong default.
(b) **`claw --compact "hello"` (shorthand prompt) returns `{"error":"unknown subcommand: hello.","hint":"Did you mean help","kind":"unknown"}`: `--compact` disables shorthand prompt mode entirely**, treating the positional as a subcommand instead of as prompt text. Users must use the explicit `prompt` verb (`claw --compact prompt "hello"`), which contradicts the `claw [flags] TEXT` usage line in `--help`. (c) `kind:"unknown"` again for the unknown-subcommand error in the `--compact` path — same catch-all bucket bug appearing for the 11th time across pinpoints. **Required fix shape:** (a) exit code 1 for all `failed_to_restore` / `session_load_failed` text-mode failures; text mode should print to stderr and exit non-zero, not print to stdout and exit 0; (b) defer `.claw/sessions/<fingerprint>/` creation to first successful save; failed `--resume` must not leave filesystem droppings; (c) `claw --compact` alone (no positional, no subcommand, stdin is a TTY) should emit `kind:"missing_argument"` with `argument:"prompt or subcommand"` rather than activating the REPL; (d) `--compact` must be transparent to shorthand prompt mode parsing — `claw --compact "hello"` is equivalent to `claw --compact prompt "hello"`; both should reach the prompt path; (e) emit typed `kind:"unknown_subcommand"`, not `kind:"unknown"`, for fallthrough cases. **Why this matters:** scripts that gate on `$?` after `claw --resume latest` see success in text mode and failure in JSON mode — the same operation, two outcomes. The filesystem side effect pollutes a user's worktree with workspace partitions they didn't ask for, and CI pipelines that snapshot `.claw/` size silently grow on every failed `--resume`. Cross-references #422 (exit-code parity across error envelopes), #423 (`kind:"unknown"` for `missing_argument`), #434 (shorthand prompt limitations). Source: Jobdori live dogfood, `e29010ed`, 2026-05-11.
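The exit-code-parity half of the fix is structural: compute the outcome once and let every renderer derive the same exit code from it. A sketch with illustrative names (these are not claw's internals):

```rust
/// One outcome type feeds both renderers, so text and JSON modes cannot
/// disagree on the exit code.
#[derive(Debug)]
enum ResumeOutcome {
    Restored,
    NoSessions,
}

fn exit_code(outcome: &ResumeOutcome) -> i32 {
    match outcome {
        ResumeOutcome::Restored => 0,
        // Non-zero regardless of --output-format.
        ResumeOutcome::NoSessions => 1,
    }
}

fn render(outcome: &ResumeOutcome, json: bool) -> String {
    match (outcome, json) {
        (ResumeOutcome::Restored, false) => "session restored".to_string(),
        (ResumeOutcome::Restored, true) => r#"{"kind":"session"}"#.to_string(),
        (ResumeOutcome::NoSessions, false) => {
            "failed to restore session: no managed sessions found".to_string()
        }
        (ResumeOutcome::NoSessions, true) => r#"{"kind":"session_load_failed"}"#.to_string(),
    }
}

fn main() {
    let outcome = ResumeOutcome::NoSessions;
    // Failure prose goes to stderr in text mode; exit code is 1 either way.
    eprintln!("{}", render(&outcome, false));
    println!("text exit={} json exit={}", exit_code(&outcome), exit_code(&outcome));
}
```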
436. **`claw init` shipped `.claw.json` template explicitly sets `permissions.defaultMode:"dontAsk"` — every user who runs `claw init` gets a config file that disables permission prompts by default; sibling: `init` creates an empty `.claw/` directory with no settings.json template inside, and when `.claw/` already exists it skips the whole artifact (no settings template materialized)** — dogfooded 2026-05-11 by Jobdori on `b8f989b6` in response to Clawhip pinpoint nudge at `1503313241751949335`. Reproduction: `mkdir /tmp/probe && cd /tmp/probe && claw init --output-format json` returns `artifacts:[{name:".claw/",status:"created"},{name:".claw.json",status:"created"},...]`. Inspecting the created `.claw.json`: `{"permissions":{"defaultMode":"dontAsk"}}`. This is the polar opposite of safe-by-default: every user who follows the documented onboarding flow (`claw init` after `curl install.sh`) ships their workspace with permission prompts disabled. Compounds with **#428** (default runtime permission_mode is `danger-full-access`) — between the runtime default and the init template, a fresh claw setup has zero user-facing safety friction. **Sibling: `.claw/` artifact is an empty directory.** After `claw init`, `find .claw -type f` returns nothing. No `settings.json`, no template, no scaffolding — just `mkdir .claw`. The `--help` description implies init produces a usable workspace, but `.claw/settings.json` (the project-scope counterpart of `~/.claw/settings.json`) is never templated. **Sibling: `.claw/` skip-on-exists drops the entire artifact.** If `.claw/` already exists (e.g., from a partial setup, a `--resume` failure side effect per #435, or manual creation), `claw init` returns `.claw/: skipped` and does not materialize any expected sub-content. The other artifacts (`.claw.json`, `.gitignore`, `CLAUDE.md`) are still created, but a future `claw skills install` or `claw plugins enable` may expect `.claw/` to contain template files that are now missing. 
**Required fix shape:** (a) the shipped `.claw.json` template must default to `permissions.defaultMode:"acceptEdits"` or `"plan"` (safe-by-default modes per #428 spec) — `"dontAsk"` requires explicit opt-in; (b) `claw init` must materialize `.claw/settings.json` with documented schema defaults inside `.claw/` so the directory is useful on its own; (c) when `.claw/` already exists, `init` must report `partial` status (not `skipped`) and still try to create missing sub-files like `.claw/settings.json` without overwriting existing files; (d) emit per-sub-file artifact entries for `.claw/settings.json` and `.claw/sessions/` (skipped status if absent, deferred-to-first-save acceptable) so automation knows what's present; (e) regression test: `claw init` produces a `.claw.json` whose `permissions.defaultMode` is NOT `dontAsk`; `.claw/` contains at least one templated file. **Why this matters:** init is the primary onboarding surface. Every first-time user piping `curl install.sh | sh && claw init` gets a workspace pre-configured to skip permission prompts — and that workspace gets committed to the user's repo via the `init`-added entry. The `.claw/` empty-directory bug means feature discovery (skills, plugins) lacks the scaffolding it implies. Cross-references #428 (runtime default permission_mode), #50/#87/#91/#94/#97/#101/#106/#115/#123 (permission-rule audit), #435 (filesystem side effects on failed resume). Source: Jobdori live dogfood, `b8f989b6`, 2026-05-11.
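A safe-by-default template per fix (a) might look like the fragment below. This is a sketch of the proposed shape, not the shipped template; `acceptEdits` is the suggested default mode, with `dontAsk` reserved for explicit opt-in:

```json
{
  "permissions": {
    "defaultMode": "acceptEdits"
  }
}
```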
437. **`version --output-format json` omits build provenance fields — no `is_dirty`, `branch`, `commit_date`, `commit_timestamp`, `rustc_version`; `git_sha` is truncated to 7 chars instead of full 40-char hash; sibling: `executable_path` leaks the build host's path (`/tmp/claw-dog-0530/...`) into runtime output** — dogfooded 2026-05-11 by Jobdori on `8cf628a5` in response to Clawhip pinpoint nudge at `1503320791582900344`. Reproduction: `claw version --output-format json` returns `{"build_date":"2026-05-11","executable_path":"/tmp/claw-dog-0530/rust/target/release/claw","git_sha":"b98b9a7","kind":"version","message":"Claw Code\n Version 0.1.0\n Git SHA b98b9a7\n Target aarch64-apple-darwin\n Build date 2026-05-11","target":"aarch64-apple-darwin","version":"0.1.0"}`. Critical provenance fields missing: (a) **`is_dirty`** — was the working tree clean at build time? Automation that pins on build provenance cannot tell if the binary was built from a clean commit or includes uncommitted changes; (b) **`branch`** — was this built from `main`, `dev/rust`, a release tag, or a feature branch? The `git_sha` alone doesn't reveal the integration point; (c) **`commit_date` / `commit_timestamp`** — only `build_date` (when the binary was compiled) is exposed; the commit itself might be days/weeks older if the build happened later. Reproducibility audits need both; (d) **`rustc_version`** — what Rust compiler version produced this binary? Critical for security advisories (e.g., known regressions in specific rustc versions); (e) **`git_sha` truncated to 7 chars** ("b98b9a7" instead of full "b98b9a71..."): 7-char shas have known collision rates in large repos and prevent unambiguous git rev-parse round-trip. **Sibling: `executable_path` leaks build-host path.** The `executable_path` field returns `/tmp/claw-dog-0530/rust/target/release/claw` — the directory where the binary was compiled, embedded into the binary metadata. 
For a binary copied/installed/symlinked to a different location, this field still reports the build path, not the actual invocation path. Either the field should reflect the runtime path via `std::env::current_exe()` at runtime (not compile-time), or it should be dropped to avoid leaking compile-host filesystem layout. **Sibling: prose `message` field duplicates structured data.** The `message` field still contains the entire text-mode prose version block (`"Claw Code\n Version 0.1.0\n Git SHA b98b9a7\n..."`) — every field present as structured JSON (`version`, `git_sha`, `target`, `build_date`) is also embedded in the prose. Same issue as #391 (`version json includes prose message field`) which was closed as "fixed" — the prose remains. **Required fix shape:** (a) add `is_dirty:bool`, `branch:string|null`, `commit_date:string` (ISO-8601), `commit_timestamp:int` (Unix epoch), `rustc_version:string` to the JSON envelope; (b) preserve full 40-char `git_sha` and add `git_sha_short:string` as a derived field if 7-char form is needed for UX; (c) `executable_path` should be `std::env::current_exe()` at runtime, not the compile-time path; (d) drop the prose `message` field from JSON or rename it `human_readable:string` and make it explicitly secondary to the structured fields; (e) re-verify #391 closure — the prose `message` is still present, the fix didn't fully land. **Why this matters:** version surface is the canonical provenance probe for security audits, build reproducibility, and bug-report metadata. Missing `is_dirty` means automated triage cannot distinguish "issue against a clean main commit" from "issue against a developer's uncommitted hack". Truncated `git_sha` blocks unambiguous git lookup. Leaked `executable_path` exposes build-host layout. Cross-references #391 (version prose duplication — apparently not fully fixed), #334 (version json omits build_date — fixed, but partial scope), #100 (commit identity audit). 
Source: Jobdori live dogfood, `8cf628a5`, 2026-05-11.
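The derived-field parts of the fix shape are small pure helpers, assuming the build script captures the full 40-char sha and the `git status --porcelain` output at compile time. A sketch (helper names are assumptions):

```rust
/// `git status --porcelain` prints nothing at all for a clean tree, so
/// dirtiness reduces to a non-empty check on the captured output.
fn is_dirty(porcelain_output: &str) -> bool {
    !porcelain_output.trim().is_empty()
}

/// Keep the full sha canonical in the envelope; derive 7 chars for display.
fn short_sha(full_sha: &str) -> &str {
    &full_sha[..full_sha.len().min(7)]
}

fn main() {
    let full = "b98b9a7100000000000000000000000000000000";
    println!("git_sha={full} git_sha_short={}", short_sha(full));
    println!("is_dirty={}", is_dirty(" M rust/crates/runtime/src/lib.rs\n"));
    // Resolved at runtime, so a copied or symlinked binary reports where it
    // actually runs from rather than the build host's path.
    println!("executable_path={:?}", std::env::current_exe().ok());
}
```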
438. **Memory file discovery only recognizes `CLAUDE.md` — `AGENTS.md` (industry convention used by OpenCode/Codex/Aider/Cursor) and `CLAW.md` (project's own brand name) are silently ignored despite being present in the workspace** — dogfooded 2026-05-11 by Jobdori on `d3a982dd` in response to Clawhip pinpoint nudge at `1503328341422244012`. Reproduction (fresh empty dir, isolated `CLAW_CONFIG_HOME`): create three files in cwd — `CLAUDE.md` (marker `MARKER-FROM-CLAUDE-MD`), `AGENTS.md` (marker `MARKER-FROM-AGENTS-MD`), `CLAW.md` (marker `MARKER-FROM-CLAW-MD`). Run `claw status --output-format json` → `workspace.memory_file_count: 1`. Run `claw system-prompt --output-format json` and search the `message` field for each marker: only `MARKER-FROM-CLAUDE-MD` is found; `MARKER-FROM-AGENTS-MD` and `MARKER-FROM-CLAW-MD` are absent. `claw-code` exclusively recognizes the Claude-branded filename inherited from upstream Claude Code; the project's own `CLAW.md` brand name and the cross-tool industry convention `AGENTS.md` are both silently dropped. **Three sibling implications:** (a) **brand-consistency gap**: a project rebranded from Claude Code to Claw Code that introduces `CLAUDE.md` as its only memory file is internally inconsistent. Users naturally expect `claw <subcommand>` to read `CLAW.md`. (b) **industry-convention gap**: `AGENTS.md` is the convergent convention for OpenCode (oh-my-opencode/sisyphus), OpenAI Codex CLI, Aider, Cursor, Continue.dev, and most ACP harnesses. Users with mixed-tool workflows maintain a shared `AGENTS.md` and expect every AI coding tool to honor it. (c) **silent failure mode**: there is no warning when `AGENTS.md` or `CLAW.md` exist but are not loaded. Users who copy-paste `AGENTS.md` from another tool's docs see `memory_file_count` stay at 0 or 1 and have to guess why their instructions aren't applied.
**Required fix shape:** (a) discover and load **`CLAUDE.md`, `CLAW.md`, `AGENTS.md`** in that priority order (existing config-precedence pattern); (b) all three contribute to `memory_file_count` with `memory_files:[{path, source:"claude_md"|"claw_md"|"agents_md", chars}]` array exposed in `status --output-format json`; (c) when multiple files exist, merge or document the precedence: project-specific `CLAUDE.md`/`CLAW.md` overrides industry-shared `AGENTS.md`; (d) `claw doctor --output-format json` adds a `memory` check that warns when `AGENTS.md` exists but is not the loaded variant (alerting users that they may be relying on the wrong file); (e) regression test: workspace with all three files results in `memory_file_count >= 1` and the system prompt contains markers from at least the highest-precedence file. **Why this matters:** `AGENTS.md` is the lingua-franca instruction file for cross-tool AI coding workflows. A team using OpenCode for one project and Claw Code for another keeps their conventions in a shared `AGENTS.md`. Forcing them to also maintain a `CLAUDE.md` for claw-code (with identical content) is friction that breaks the value proposition of a fork. Cross-references #438 itself (the multi-file convention), and AGENTS.md ecosystem references in oh-my-opencode/sisyphus docs. Source: Jobdori live dogfood, `d3a982dd`, 2026-05-11.
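Fix (a) is a candidate list checked in priority order. A minimal sketch, assuming the priority order and `source` labels proposed above (this is not claw's actual discovery code):

```rust
use std::path::{Path, PathBuf};

/// Candidate memory files, highest priority first.
const MEMORY_CANDIDATES: [(&str, &str); 3] = [
    ("CLAUDE.md", "claude_md"),
    ("CLAW.md", "claw_md"),
    ("AGENTS.md", "agents_md"),
];

/// Return (path, source) for every memory file present, priority order kept.
fn discover_memory_files(dir: &Path) -> Vec<(PathBuf, &'static str)> {
    MEMORY_CANDIDATES
        .iter()
        .filter_map(|&(name, source)| {
            let path = dir.join(name);
            path.is_file().then_some((path, source))
        })
        .collect()
}

fn main() {
    let dir = std::env::temp_dir().join("claw-memory-demo");
    std::fs::create_dir_all(&dir).unwrap();
    std::fs::write(dir.join("CLAUDE.md"), "MARKER-FROM-CLAUDE-MD").unwrap();
    std::fs::write(dir.join("AGENTS.md"), "MARKER-FROM-AGENTS-MD").unwrap();
    for (path, source) in discover_memory_files(&dir) {
        println!("{} ({})", path.display(), source);
    }
}
```

The returned list maps directly onto the proposed `memory_files:[{path, source, chars}]` envelope array.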
439. **Memory file discovery walks ALL ancestor directories up to `$HOME` boundary, silently loading any `CLAUDE.md` it finds — `/tmp/CLAUDE.md` left from a previous test silently bleeds into every project under `/tmp/*/`; no `--no-parent-memory` flag, no `.no-claude-md-boundary` marker file to limit discovery scope** — dogfooded 2026-05-11 by Jobdori on `f4a96740` in response to Clawhip pinpoint nudge at `1503335892461293675`. Reproduction: create three nested `CLAUDE.md` files with unique markers — `/tmp/claw-nested-probe/CLAUDE.md` (`PARENT_CLAUDE`), `subproj/CLAUDE.md` (`CHILD_CLAUDE`), `subproj/deep/CLAUDE.md` (`DEEP_CLAUDE`). Run `claw system-prompt --output-format json` from `subproj/deep/nest/` (note: `nest` has no `CLAUDE.md`). The `message` field contains **all three markers** (PARENT + CHILD + DEEP) and `status --output-format json` reports `memory_file_count: 3`. Boundary tests: (a) `$HOME/CLAUDE.md` is NOT picked up from `/tmp/no-claude-dir` (discovery stops at `$HOME` boundary, good); (b) From `/tmp/deep` (no nested CLAUDE.md), `/tmp/CLAUDE.md` IS picked up (count: 1); (c) git-root is NOT a discovery boundary — running from a git subdir still walks above the git root. **Ambient-context-bleed footgun:** any stale `/tmp/CLAUDE.md` (or `/home/<user>/projects/CLAUDE.md`, or any ancestor-path CLAUDE.md left over from a previous experiment, copy-paste, or AI-generated example) silently bleeds into every workspace nested below it. The user has no signal in `status --output-format json` indicating which ancestor file is contributing — only the aggregate `memory_file_count`. 
**Three required fixes:** (a) **expose discovery list**: `status --output-format json` and `system-prompt --output-format json` must include `memory_files:[{path, source:"workspace"|"ancestor"|"parent_dir"|"home", chars, contributes:bool}]` so users can see what's leaking in; (b) **add `--no-parent-memory` flag** to limit discovery to cwd only (no ancestor walk), or add a boundary marker (`.claude-no-walk`, `.claw-root`, or honor `.git` as the boundary by default — most users expect repo-root scope); (c) **`doctor` warns** when ancestor `CLAUDE.md` files are loaded from outside the current git repo (suggests they may be unintentional). **Sibling discovery scope question:** discovery walks up to `$HOME` — but for a user with a project at `/Users/foo/work/proj`, that's `/Users/foo/work/CLAUDE.md` + `/Users/foo/CLAUDE.md` (if it exists) both load. The home boundary is exclusive, but the entire `/Users/foo` tree under home is in scope. **Why this matters:** test workspaces, scratch dirs, AI-generated example projects, and shared `/tmp` workdirs are full of stale `CLAUDE.md` files. The current discovery rule means every claw invocation can silently inherit context from arbitrary ancestor paths. Cross-references #438 (memory discovery only finds CLAUDE.md, not AGENTS.md or CLAW.md), #421 (cwd canonicalization leak — the canonicalized form determines which ancestor walk path is used). Source: Jobdori live dogfood, `f4a96740`, 2026-05-11.
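The git-root boundary from fix (b) is a small change to the ancestor walk: scan upward, but stop once the directory that contains `.git` has been visited. An illustrative sketch, not claw's actual code:

```rust
use std::path::{Path, PathBuf};

/// Collect CLAUDE.md from `start` upward, treating the repo root (the first
/// ancestor containing `.git`) as the outermost directory in scope, so stale
/// files above the repo never bleed in.
fn discover_with_boundary(start: &Path) -> Vec<PathBuf> {
    let mut found = Vec::new();
    for dir in start.ancestors() {
        let candidate = dir.join("CLAUDE.md");
        if candidate.is_file() {
            found.push(candidate);
        }
        // Stop after scanning the repo root itself.
        if dir.join(".git").exists() {
            break;
        }
    }
    found
}

fn main() {
    let root = std::env::temp_dir().join("claw-walk-demo");
    let deep = root.join("repo").join("sub");
    std::fs::create_dir_all(&deep).unwrap();
    std::fs::create_dir_all(root.join("repo").join(".git")).unwrap();
    std::fs::write(root.join("CLAUDE.md"), "OUTSIDE").unwrap();
    std::fs::write(root.join("repo").join("CLAUDE.md"), "INSIDE").unwrap();
    // Only the in-repo file is collected; the one above `.git` is ignored.
    for path in discover_with_boundary(&deep) {
        println!("{}", path.display());
    }
}
```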
440. **One invalid `mcpServers` entry blocks ALL OTHER valid MCP servers from loading — `mcp list --output-format json` returns `configured_servers: 0, servers: []` when even one server has a missing/invalid `command` field, despite other servers in the same config being well-formed; sibling: config parser halts on first invalid entry, never reports the remaining invalid entries** — dogfooded 2026-05-11 by Jobdori on `bd126905` in response to Clawhip pinpoint nudge at `1503343442904879156`. Reproduction: write `.claw.json` containing six `mcpServers` entries — one valid (`valid-server: {command:"/bin/echo", args:["hello"]}`) and five with progressive defects (missing-command, empty-command, null-command, wrong-type-command, extra-unknown-field). Run `claw mcp list --output-format json` → `{"action":"list","config_load_error":"/private/tmp/claw-mcp-probe/.claw.json: mcpServers.missing-command-server: missing string field command","configured_servers":0,"kind":"mcp","servers":[],"status":"degraded"}`. The error mentions only `missing-command-server` (the first invalid entry in JSON-object iteration order); the other four invalid entries are never surfaced. The valid `valid-server` entry is silently dropped because the parser bails on the first error. `status --output-format json` correctly propagates the same `config_load_error` and sets `status:"degraded"`, but no field tells automation which servers are valid vs broken — `servers:[]` is the only signal. **Three problems compounded:** (a) **all-or-nothing loading**: ROADMAP product principle #5 says "partial success is first-class," but mcp config loading is binary. One bad server kills the entire MCP plane; (b) **first-error-only reporting**: a `.claw.json` with five invalid entries surfaces only one error message — the user fixes that one and runs again, gets the next error, and so on.
Five iterations needed to discover all errors; (c) **no per-server status**: even with the partial-success fix, the JSON envelope needs `servers:[{name, valid:bool, error?, command?, args?}]` so automation can see which entries are usable. **Required fix shape:** (a) the MCP config parser must collect ALL invalid entries into an `invalid_servers:[{name, error_field, reason}]` array and load all valid ones into `servers:[]`; do not abort on first error; (b) `configured_servers` reflects the count of *valid* loaded servers (not zero) when there are valid entries alongside invalid ones; (c) expose `total_configured:int` (count of entries in source `.claw.json`) AND `valid_count:int` (loaded), AND `invalid_count:int` (rejected) — three distinct counts; (d) `doctor --output-format json` adds an `mcp_validation` check that lists each invalid entry with its error message; (e) regression test: `.claw.json` with one valid + one invalid entry results in `configured_servers: 1, invalid_servers: [{name:"...", reason:"..."}]`. **Why this matters:** users iterate on MCP server lists during onboarding — one typo kills the entire plane, including servers they got working previously. The first-error-only reporting forces N iterations through N invalid entries instead of a single fix-everything-at-once pass. Cross-references #407 (config files no load_error per-file), #415 (config section merged_keys count only), #416 (plugins list prose), #428 (default permission mode), and Product Principle #5. Source: Jobdori live dogfood, `bd126905`, 2026-05-11.
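The collect-all-errors loader from fixes (a) through (c) is straightforward once validation stops aborting. A sketch with a simplified entry shape — `(name, optional command)` stands in for the real config structs, and `LoadReport` is an illustrative name:

```rust
/// Validate every entry, keep the valid servers, and collect every error
/// instead of bailing on the first one.
struct LoadReport {
    valid: Vec<String>,
    invalid: Vec<(String, String)>, // (name, reason)
}

fn load_servers(entries: &[(&str, Option<&str>)]) -> LoadReport {
    let mut report = LoadReport { valid: Vec::new(), invalid: Vec::new() };
    for &(name, command) in entries {
        match command {
            Some(cmd) if !cmd.is_empty() => report.valid.push(name.to_string()),
            Some(_) => report
                .invalid
                .push((name.to_string(), "empty string field command".to_string())),
            None => report
                .invalid
                .push((name.to_string(), "missing string field command".to_string())),
        }
    }
    report
}

fn main() {
    let entries = [
        ("valid-server", Some("/bin/echo")),
        ("missing-command-server", None),
        ("empty-command-server", Some("")),
    ];
    let report = load_servers(&entries);
    // configured_servers counts valid entries; invalid ones are all reported.
    println!("configured_servers={} invalid_count={}", report.valid.len(), report.invalid.len());
}
```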
441. **`hooks` config schema diverges from Claude Code documented format — claw-code expects `{"hooks":{"PreToolUse":["command-string"]}}` (array of command strings) while Claude Code documentation specifies `{"hooks":{"PreToolUse":[{"matcher":"Read","hooks":[{"type":"command","command":"..."}]}]}}` (structured matcher objects); users copy-pasting from Claude Code docs see `field "hooks.PreToolUse" must be an array of strings`** — dogfooded 2026-05-11 by Jobdori on `86ff83c2` in response to Clawhip pinpoint nudge at `1503350990680887418`. Reproduction: write `.claw.json` with the Claude-Code-documented hook format `{"hooks":{"PreToolUse":[{"matcher":"Read","hooks":[{"type":"command","command":"/bin/echo pretool"}]}]}}`. Run `claw status --output-format json``config_load_error: "/private/tmp/claw-hook-probe/.claw.json: field \"hooks.PreToolUse\" must be an array of strings, got an array (line 3)"`, `status: "degraded"`. The error wording ("must be an array of strings, got an array") is confusingly tautological — the user did provide an array; the parser objects that the array contains objects instead of strings. Replacing with the claw-code-actual format `{"hooks":{"PreToolUse":["/bin/echo pretool"]}}` succeeds: `config_load_error: null, status: "ok"`. The two formats are fundamentally incompatible: claw-code drops the `matcher` field (no tool-specific filtering at the config layer), drops the `type:"command"` discriminator (no future expansion to other hook types), and treats each entry as a bare command string instead of a structured hook spec. **Sibling: PR #3000 (justcode049) was attempting to tolerate object-style hook entries** — that PR's title `fix: tolerate object-style hook entries in config parser` confirms this is a known user complaint, but the PR is still conflicting and unmerged. 
**Three sibling findings in same probe:** (a) **unknown event names reject entire hooks config**: `.claw.json` with `hooks.InvalidEvent` (not a real event name like `PreToolUse`/`PostToolUse`/`Stop`/`Notification`) triggers `config_load_error: "unknown key \"hooks.InvalidEvent\""` and rejects ALL hooks in the same file, even valid ones — same "one bad apple kills all" pattern as #440 (MCP servers). (b) **`kind:"unknown"` for the validation error** — should be `kind:"invalid_hooks_config"` or `kind:"unknown_hook_event"` (catch-all cluster #422/#423/#424/#428/#430/#431/#432/#433/#435 — 13th occurrence). (c) **first-error-only halting**: a `.claw.json` with `hooks.Stop:"not-an-array"` (type mismatch) AND `hooks.InvalidEvent` (unknown name) AND `hooks.Notification:[{}]` (empty entry) surfaces only the FIRST error in iteration order — user must fix one at a time across 3 iterations. **Required fix shape:** (a) **adopt Claude Code's structured hook format as the canonical**: support `{matcher, hooks:[{type, command}]}` natively, with `matcher` for tool-filtering, `type` for hook-type discriminator (future-proof for `inline`/`webhook`/etc beyond just `command`); (b) **keep backward compat for bare command strings**: legacy `["command-string"]` arrays still load, but emit a deprecation warning suggesting migration to the structured form; (c) **partial-success loading**: invalid hook entries surface in `invalid_hooks:[{event, index, reason}]` while valid ones load — same fix as #440 for MCP; (d) **typed `kind:"invalid_hooks_config"` envelope** instead of `kind:"unknown"`; (e) **rebase and merge PR #3000** which addresses this directly; (f) regression test: Claude-Code-documented hook config loads without error on claw-code. **Why this matters:** users migrating from Claude Code to Claw Code hit this on their first `.claw.json` write. 
The error message ("array of strings, got an array") is unhelpful; the documentation doesn't surface the schema divergence; and Claude Code's structured format is strictly more expressive (matchers, types) than claw-code's bare-string format. Cross-references #407 (config files no load_error), #410 (list-envelope schema drift), #428 (default permission mode), #440 (one invalid MCP entry blocks all), PR #3000 (justcode049's pending fix). Source: Jobdori live dogfood, `86ff83c2`, 2026-05-11.
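The backward-compat half of the fix (accept both shapes, normalize to one internal type) can be sketched as follows. `HookEntry` and `Hook` are illustrative names; real parsing would deserialize either JSON shape into `HookEntry` first:

```rust
/// Canonical internal hook representation.
#[derive(Debug, PartialEq)]
struct Hook {
    matcher: Option<String>,
    command: String,
}

/// The two accepted config shapes.
enum HookEntry {
    /// Legacy claw-code form: a bare command string.
    Bare(String),
    /// Claude Code form: { matcher, hooks: [{ type: "command", command }] },
    /// flattened here for brevity.
    Structured { matcher: Option<String>, command: String },
}

fn normalize(entry: HookEntry) -> Hook {
    match entry {
        // Legacy strings load with no matcher (apply to every tool).
        HookEntry::Bare(command) => Hook { matcher: None, command },
        HookEntry::Structured { matcher, command } => Hook { matcher, command },
    }
}

fn main() {
    let legacy = normalize(HookEntry::Bare("/bin/echo pretool".to_string()));
    let structured = normalize(HookEntry::Structured {
        matcher: Some("Read".to_string()),
        command: "/bin/echo pretool".to_string(),
    });
    println!("{legacy:?}\n{structured:?}");
}
```

Both forms end up in the same `Hook`, which is what lets the legacy array keep loading while the structured form becomes canonical.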
442. **`agents` discovery requires TOML format (`.toml` files) while Claude Code documents agents as Markdown with YAML frontmatter (`.md`) — claw-code silently ignores `.md` files in `.claw/agents/` without any warning; the help text lists `.claw/agents, ~/.claw/agents, $CLAW_CONFIG_HOME/agents` as sources but does not mention the `.toml` file format requirement** — dogfooded 2026-05-11 by Jobdori on `8499599b` in response to Clawhip pinpoint nudge at `1503358540230692876`. Reproduction: write `.claw/agents/valid-agent.md` with Claude-Code-format YAML frontmatter `---\nname: valid-agent\ndescription: A simple test agent\ntools: [bash, read_file]\n---\nYou are a helpful agent.` Run `claw agents list --output-format json` → `{"agents":[], "count":0, "summary":{"active":0,"shadowed":0,"total":0}}`. The valid `.md` agent is silently dropped. Replace with `.claw/agents/toml-agent.toml` containing TOML format `name = "toml-agent"\ndescription = "..."` → loads correctly with `count:1`. Source code confirms (`rust/crates/commands/src/lib.rs:3378`): `if entry.path().extension().is_none_or(|ext| ext != "toml") { continue; }` — only the `.toml` extension is recognized; all others (including `.md`) are skipped without warning. The help text `claw agents --help` documents the source paths but **omits the file-format requirement**. **Five sibling problems compounded:** (a) **schema divergence from Claude Code**: Claude Code's `agents` are documented as `.md` files with YAML frontmatter (matching the `CLAUDE.md`/`.claude/agents/` convention upstream). claw-code chose TOML for no documented reason. Users migrating from Claude Code or copy-pasting community agent definitions hit silent failure. (b) **silent file drop**: invalid agent files (wrong extension, broken frontmatter, missing required fields, file-name vs frontmatter-name mismatch) are all silently ignored with `count:0`. No `invalid_agents:[]` array, no warning, no `kind:"agent_load_failed"` envelope.
Same all-or-nothing pattern as #440 (MCP servers) and #441 (hooks). (c) **no documentation of the schema**: `claw agents --help --output-format json` (per #427, this hits the auth gate; without auth it doesn't return the schema either). The required TOML fields (`name`, `description`, `model`, `model_reasoning_effort` per source code) aren't documented in any user-facing surface. (d) **missing `.claude/agents/` discovery**: many existing projects have `.claude/agents/` from Claude Code installs. claw-code only looks at `.claw/agents/` — users have to copy/move their existing agents. (e) **no agent-scaffolding command**: cross-reference #431 — there's no `claw agents create <name>` to generate a valid `.toml` skeleton; users must hand-craft. **Required fix shape:** (a) accept BOTH `.md` (with YAML frontmatter) AND `.toml` formats in `.claw/agents/`; prefer YAML frontmatter for Claude Code parity, keep TOML for back-compat; (b) include `.claude/agents/` in the discovery sources alongside `.claw/agents/` with documented precedence; (c) expose `invalid_agents:[{path, reason}]` array in `agents list --output-format json` so users can see what was skipped and why; (d) document the agent schema (required + optional fields) in `claw agents --help` and in USAGE.md; (e) add `claw agents create <name>` scaffolding command per #431; (f) regression test: `.claw/agents/foo.md` with YAML frontmatter loads correctly. **Why this matters:** agents are the primary extension surface for custom workflows. A silent-drop on the wrong file format breaks the discoverability promise of CLI agents. Claude Code's `.md`-with-YAML convention is the lingua franca across AI coding tools; deviating to TOML breaks copy-paste compatibility. Cross-references #430 (dump-manifests needs upstream), #431 (skills/agents lifecycle), #440 (MCP all-or-nothing), #441 (hooks all-or-nothing), #438 (memory file discovery only CLAUDE.md). Source: Jobdori live dogfood, `8499599b`, 2026-05-11.
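Fix (a) needs only a widened extension check plus a frontmatter parse for `.md` files. A deliberately minimal sketch (a real implementation would use a YAML crate; function names are assumptions):

```rust
/// Accept both agent file formats in the discovery walk.
fn is_agent_file(path: &std::path::Path) -> bool {
    matches!(
        path.extension().and_then(|e| e.to_str()),
        Some("toml") | Some("md")
    )
}

/// Extract `name:` from a `---`-delimited YAML frontmatter block, if present.
fn frontmatter_name(contents: &str) -> Option<String> {
    let rest = contents.strip_prefix("---")?;
    let (frontmatter, _body) = rest.split_once("---")?;
    frontmatter
        .lines()
        .find_map(|line| line.trim().strip_prefix("name:"))
        .map(|value| value.trim().to_string())
}

fn main() {
    let md = "---\nname: valid-agent\ndescription: A simple test agent\n---\nYou are a helpful agent.";
    println!("{:?}", frontmatter_name(md));
    println!("{}", is_agent_file(std::path::Path::new(".claw/agents/valid-agent.md")));
}
```

Files passing `is_agent_file` but failing the parse would land in the proposed `invalid_agents:[{path, reason}]` array instead of being dropped.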
443. **`claw acp serve` exits 0 with `status:"discoverability_only", supported:false` instead of failing — automation pipelines see "success" from a command that explicitly says "not implemented"; ROADMAP #413's internal-tracking leak (`discoverability_tracking:"ROADMAP #64a"`, `tracking:"ROADMAP #76"`) still present despite being filed 2026-04-30** — dogfooded 2026-05-11 by Jobdori on `19aaf9d0` in response to Clawhip pinpoint nudge at `1503366101533200435`. Reproduction: `claw acp serve --output-format json` returns exit code **0** with envelope `{aliases:["acp","--acp","-acp"], discoverability_tracking:"ROADMAP #64a", kind:"acp", launch_command:null, message:"ACP/Zed editor integration is not implemented in claw-code yet. \`claw acp serve\` is only a discoverability alias today; it does not launch a daemon or Zed-specific protocol endpoint. Use the normal terminal surfaces for now and track ROADMAP #76 for real ACP support.", recommended_workflows:["claw prompt TEXT","claw","claw doctor"], serve_alias_only:true, status:"discoverability_only", supported:false, tracking:"ROADMAP #76"}`. The exit code is 0 (success) but the command explicitly states it is not implemented. Pipeline like `claw acp serve && zed --connect localhost:12345` will proceed to the zed connect step despite `acp serve` being a no-op. The only signal of no-op is `supported:false` in the JSON body — easy to miss for automation gating on `$?`. **ROADMAP #413 reproduction confirmed unfixed:** #413 (filed 2026-04-30) called out `discoverability_tracking:"ROADMAP #64a"` and `tracking:"ROADMAP #76"` as internal ticket references leaked into public JSON. **11 days later, both fields are still present in the envelope.** The fix was prescribed but never landed. Also `recommended_workflows:["claw prompt TEXT","claw","claw doctor"]` is internal scaffolding (curated suggestion list) exposed as a top-level public field — not normally part of an "ACP status" public contract. 
**Sibling unknown-subcommand bug:** `claw acp status --output-format json` (a reasonable next-thing-to-try) returns `{"error":"unsupported ACP invocation. Use \`claw acp\`, \`claw acp serve\`, \`claw --acp\`, or \`claw -acp\`.","kind":"unknown"}` exit 0 — the `kind:"unknown"` catch-all yet again (#422/#423/#424/#428/#430/#431/#432/#433/#435/#440/#441/#442 — **14th occurrence**), should be `kind:"unsupported_acp_invocation"`. **Required fix shape:** (a) `claw acp serve` exits **non-zero** (exit code 2 = "not implemented" is conventional) so automation `$?`-gating detects the no-op; (b) deliver #413's fix: remove `discoverability_tracking` and `tracking` top-level fields, OR move them under an optional `_meta` sub-object gated on a debug flag; (c) replace `message` prose with a typed `reason:"not_implemented"` enum + optional `detail` string for downstream pipelines that need a stable signal; (d) drop `recommended_workflows` from the ACP envelope OR move it under `_meta`; (e) the `status:"discoverability_only"` value is non-standard — replace with `status:"not_implemented"` (matching the `supported:false` boolean); (f) typed `kind:"unsupported_acp_invocation"` for the bad-arg path. **Why this matters:** ACP/Zed integration is the integration point for IDE-based AI workflows. A "success" exit code on a "not implemented" stub breaks the contract for any wrapper script that tries to detect ACP availability via `claw acp serve && ...`. The internal-tracking-ID leak (#413) being unfixed for 11 days suggests the JSON envelope audit isn't being executed against the ROADMAP backlog. Cross-references #413 (internal tracking leak — unfixed), #422 (exit-code parity), `kind:"unknown"` catch-all cluster. Source: Jobdori live dogfood, `19aaf9d0`, 2026-05-11.
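The `$?`-gating contract from fix-shape items (a) and (c) can be sketched as a typed reason mapped to an exit code. Exit code 2 for "not implemented" is the convention proposed above, not current claw behavior; the enum and function names are hypothetical:

```rust
/// Hypothetical typed replacement for the prose `message` field.
#[derive(Debug, PartialEq)]
enum Reason {
    Ok,
    NotImplemented,
    Error,
}

/// Sketch: derive the process exit code from the typed reason so wrapper
/// scripts like `claw acp serve && zed ...` stop on the stub path.
fn exit_code(reason: &Reason) -> i32 {
    match reason {
        Reason::Ok => 0,
        Reason::NotImplemented => 2, // conventional "not implemented"; an assumption here
        Reason::Error => 1,
    }
}

fn main() {
    assert_eq!(exit_code(&Reason::NotImplemented), 2);
    assert_eq!(exit_code(&Reason::Ok), 0);
    assert_eq!(exit_code(&Reason::Error), 1);
    println!("not_implemented maps to exit 2");
}
```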
444. **No broad-cwd safety guard for `--resume` — `claw --resume latest` from `/` attempts to `mkdir /.claw/sessions/<fingerprint>/` and is only stopped by the read-only filesystem at root; from any writable system directory (`/tmp`, `/var/tmp`, `$HOME` itself) it silently creates `.claw/sessions/<fingerprint>/` droppings; exit code is 0 (success) on the read-only filesystem error path** — dogfooded 2026-05-11 by Jobdori on `b2048856` in response to Clawhip pinpoint nudge at `1503373639884607629`. Reproduction: `cd / && claw --resume latest --output-format json` returns `{"error":"failed to restore session: Read-only file system (os error 30)","hint":null,"kind":"session_load_failed","type":"error"}` exit **0**. The OS permission denial is the only thing preventing claw from creating `/.claw/sessions/<fingerprint>/` in the root filesystem. Compare with `cd /tmp && claw --resume latest --output-format json`: silently creates `/tmp/.claw/sessions/<fingerprint>/` partition (confirmed by `ls /tmp/.claw` showing a directory from a prior dogfood session at `13:31` — the May 11 11:00 pinpoint #435 dropping is still there 10+ hours later, despite documented cleanup). Same dogfood session: `cd $HOME && claw --resume latest` would silently create `~/.claw/sessions/<fingerprint>/` (the user's home claw config dir). The shorthand prompt path has a broad-cwd guard (`claw is running from a very broad directory (/). The agent can read and search everything under this path. Use --allow-broad-cwd to proceed anyway`) — but the guard does NOT fire on `--resume`, `--status`, or `claw status` invocations. Inconsistent safety surface: the dangerous path (LLM prompt with full tool access) has a guard, but session-management paths that create filesystem artifacts in broad locations have none. 
**Three sibling findings in same probe:** (a) **exit-code 0 on filesystem error** (`session_load_failed` envelope returns exit code 0): the read-only-filesystem error from `/.claw` creation path is an unrecoverable failure but the process exits 0 — same exit-parity bug as #422/#435; (b) **stale filesystem droppings**: `/tmp/.claw/` from a 13:31 dogfood session at HEAD `6c0c305a` is still present at 21:30 (10 hours later, 6+ HEADs later). The "deferred cleanup" or "lazy creation" fix prescribed in #435 hasn't landed; (c) **broad-cwd guard misfires on resume**: the existing guard from `run` path (visible in `claw --help` as "Use --allow-broad-cwd to proceed anyway") never fires on `--resume`. Either both paths should guard, or the guard should be promoted to a global pre-check. **Required fix shape:** (a) extend the broad-cwd guard to `--resume`, `claw status`, `claw doctor`, and every command that may create filesystem artifacts; `cd / && claw --resume latest` must fail fast with `kind:"broad_cwd_blocked"` before any filesystem operation; (b) `cd $HOME && claw` should warn that the workspace is your home directory and ask for `--allow-broad-cwd` (the LLM with full filesystem access in `$HOME` is the same blast radius as in `/`); (c) exit code 1 for `session_load_failed` regardless of underlying cause; (d) deliver #435's "defer fingerprint directory creation to first successful save" fix — failed `--resume` must not leave filesystem droppings; (e) cleanup `/tmp/.claw/` style scratch-dir artifacts via a `claw doctor --cleanup` or similar opt-in mechanism; (f) regression test: failed `--resume` does not create any directories under cwd. **Why this matters:** users running claw as part of CI/cron from system directories silently accumulate `.claw/sessions/<fingerprint>/` artifacts in /tmp, /var, /opt, $HOME, etc. Running as root from / would (with a writable root) silently pollute the root filesystem. The broad-cwd guard exists but only covers one entry point. 
Cross-references #427 (whose note that the broad-cwd guard fires on resume was inaccurate; it does not), #428 (default permission_mode danger-full-access — compounds with this: full access + no broad-cwd guard = serious blast radius), #435 (filesystem side effects on failed resume), #422 (exit-code parity). Source: Jobdori live dogfood, `b2048856`, 2026-05-11.
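The global pre-check from fix-shape items (a) and (b) can be sketched as a single guard run before any filesystem artifact is created. The deny list below (`/`, `/tmp`, `$HOME`) is an illustrative assumption matching the directories probed in this entry, not claw-code's actual guard:

```rust
use std::path::{Path, PathBuf};

/// Sketch of a promoted broad-cwd guard: one check shared by `run`,
/// `--resume`, `status`, and `doctor`, run before any `mkdir`.
fn is_broad_cwd(cwd: &Path, home: &Path) -> bool {
    let broad = [PathBuf::from("/"), PathBuf::from("/tmp"), home.to_path_buf()];
    broad.iter().any(|b| b.as_path() == cwd)
}

fn main() {
    let home = Path::new("/home/user");
    // The probed cases: /, /tmp, and $HOME itself must all trip the guard.
    assert!(is_broad_cwd(Path::new("/"), home));
    assert!(is_broad_cwd(Path::new("/tmp"), home));
    assert!(is_broad_cwd(home, home));
    // A normal project directory must not.
    assert!(!is_broad_cwd(Path::new("/home/user/project"), home));
    println!("broad-cwd guard blocks /, /tmp, and $HOME");
}
```

A command that trips the guard would then fail fast with `kind:"broad_cwd_blocked"` (or proceed under `--allow-broad-cwd`) instead of creating `.claw/sessions/<fingerprint>/` droppings.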
445. **Skill name-vs-directory mismatch is silently accepted — `.claw/skills/wrong-name/SKILL.md` with frontmatter `name: actually-different-name` loads as "actually-different-name" without any warning; users who reference the skill by directory name (`claw skills run wrong-name`) get `skill_not_found` while `skills list` shows it under the frontmatter name; sibling: loose `.md` files at the skills-dir root and subdirs without `SKILL.md` are silently dropped** — dogfooded 2026-05-11 by Jobdori on `9e1eafd0` in response to Clawhip pinpoint nudge at `1503381189539528897`. Reproduction: create `.claw/skills/wrong-name/SKILL.md` with frontmatter `---\nname: actually-different-name\ndescription: Skill where dir name and frontmatter name disagree\n---`. Run `claw skills list --output-format json` → the skill is listed with `name: "actually-different-name"` (the frontmatter value), no warning about the dir-vs-name mismatch. Users who type `claw skills run wrong-name` (the dirname they know from `ls`) get a `skill_not_found` error; `claw skills run actually-different-name` works. The two names are decoupled with no surfaced relationship. **Three sibling silent-drop bugs in same probe:** (a) **subdir without SKILL.md silently skipped**: `.claw/skills/no-skill-md/` containing only `README.md` (no `SKILL.md`) is silently skipped from `skills list`. No `invalid_skills:[{path, reason:"missing_SKILL.md"}]` array, no warning, just absent from output. (b) **Loose `.md` at skills dir root silently dropped**: `.claw/skills/loose-skill.md` (not inside a per-skill subdirectory) is silently ignored. Discovery only walks `.claw/skills/*/SKILL.md` — no support for flat `.claw/skills/<name>.md`. (c) **Workspace + user skills merged without per-source filter**: `skills list` returns 74 entries including all `~/.claw/skills/*` user-home skills alongside the project skills. 
There's no `--scope workspace` flag to limit output to just project-local skills; automation has to filter by `source.id == "project_claw"` post-hoc. **Required fix shape:** (a) when SKILL.md frontmatter `name` differs from the parent directory name, emit a `skills_metadata_drift:[{dir_name, frontmatter_name, path}]` array OR enforce `name = dir_name` as a hard rule; if neither, at minimum a stderr warning on each invocation; (b) skill subdirectories without `SKILL.md` should surface as `invalid_skills:[{path, reason}]` in `skills list --output-format json` (same pattern as #440 MCP servers, #441 hooks, #442 agents); (c) support loose `.md` files at skills-dir root OR document explicitly that only subdirectories with `SKILL.md` are discovered; (d) add `--scope workspace|user|all` flag to `skills list` for filtering; (e) regression test: dir/frontmatter mismatch triggers a deterministic warning or error; subdirs without SKILL.md show in invalid array. **Why this matters:** skill discovery is a security-relevant surface — a user's `claw skills run X` could end up running a different skill than they thought if dir-name and frontmatter-name diverge. The silent drops mean users can't tell why their skill files aren't recognized, leading to "I copied the example and it doesn't work" forum questions. Cross-references #440 (MCP all-or-nothing), #441 (hooks all-or-nothing), #442 (agents need TOML, .md dropped), #431 (skills install raw OS error). Source: Jobdori live dogfood, `9e1eafd0`, 2026-05-11.
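The `skills_metadata_drift` check from fix-shape item (a) reduces to comparing the `SKILL.md` parent directory name against the frontmatter `name`. The struct and function names below are hypothetical illustrations of that fix shape:

```rust
use std::path::Path;

/// Hypothetical record for the proposed `skills_metadata_drift:[{dir_name, frontmatter_name, path}]` array.
#[derive(Debug)]
struct Drift {
    dir_name: String,
    frontmatter_name: String,
}

/// Sketch: returns a drift record when the SKILL.md parent directory name
/// and the frontmatter `name` disagree, and None when they match.
fn detect_drift(skill_md: &Path, frontmatter_name: &str) -> Option<Drift> {
    let dir_name = skill_md.parent()?.file_name()?.to_str()?.to_string();
    (dir_name != frontmatter_name).then(|| Drift {
        dir_name,
        frontmatter_name: frontmatter_name.to_string(),
    })
}

fn main() {
    // The repro case from this entry: dir `wrong-name` vs frontmatter name.
    let drift = detect_drift(
        Path::new(".claw/skills/wrong-name/SKILL.md"),
        "actually-different-name",
    );
    assert!(drift.is_some());
    // Matching names produce no drift record.
    assert!(detect_drift(Path::new(".claw/skills/fmt/SKILL.md"), "fmt").is_none());
    println!("drift detected for wrong-name vs actually-different-name");
}
```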
446. **Config is loaded 2-3 times per command invocation; each load re-emits identical deprecation warnings without deduplication — `status` triggers 3× `enabledPlugins` warning, `doctor`/`mcp` trigger 2× each, only `version` (config-free) emits 0** — dogfooded 2026-05-11 by Jobdori on `5a4cc506` in response to Clawhip pinpoint nudge at `1503388740595224717`. Reproduction: with a `~/.claw/settings.json` containing the deprecated `enabledPlugins` key, run each command from a fresh empty cwd and count `warning: ... is deprecated` lines on stderr — `claw status 2>&1 >/dev/null | grep -c deprecated` returns **3**, `claw doctor` returns **2**, `claw mcp` returns **2**, `claw version` returns **0**. Each duplicate is byte-identical (same file path, same line number, same field name). The pattern proves the config-load pipeline is invoked 2-3 times within a single command process; warnings are emitted at each load without checking a `warned_files: HashSet<PathBuf>` deduplication set. **Three sibling implications:** (a) **load-count varies by command** — status:3, doctor:2, mcp:2, version:0 — suggesting each command implements its own config-load call rather than going through a shared cached loader; (b) **noise pollution**: users running `claw status` once see the same 64-character warning 3 times in their terminal scrollback, so real warnings (other config errors, real deprecations) get lost in the duplicate noise; (c) **performance signal**: 3× config load means 3× JSON parsing of `~/.claw/settings.json`, `~/.claw.json`, `$CLAW_CONFIG_HOME/settings.json`, and the project-local `.claw.json` / `.claw/settings.json` / `.claw/settings.local.json`. For a workspace with 5 config files, that's 15 disk reads per status invocation, 10 of them redundant. Earlier roadmap entries observed 3× (#424) and 4× (#425) warning counts at different HEADs; the count keeps fluctuating, suggesting the underlying issue is config-load fan-out that nobody has refactored. 
**Required fix shape:** (a) introduce a `ConfigLoader` cache scoped to the command-process lifetime: first load reads files and emits warnings; subsequent calls hit the cache and emit zero warnings; (b) move config validation/warnings to a single canonical entry point (`ConfigLoader::load_with_diagnostics()` returns `(RuntimeConfig, Vec<Warning>)` exactly once); (c) every command that needs config goes through the cached loader instead of re-reading from disk; (d) `doctor --output-format json` exposes `config_load_count:int` field so we can regression-test that loads are deduplicated; (e) regression test: any single command invocation emits each deprecation warning at most once. **Why this matters:** repeated identical warnings train users to ignore stderr noise. Real warnings (a new deprecation, a config error from a different file, an MCP server failure) get drowned out by 3-4 copies of the same notice. The 15-disk-read worst case is wasted I/O that adds startup latency. The fact that count fluctuates between HEADs (3 at `6c0c305a`, 4 at `d7dbe951`, back to 3 at `5a4cc506`) suggests dev velocity is moving config loads around without an architectural fix. Cross-references #424 (deprecation warning 3×), #425 (deprecation warning 4×), #421 (cwd canonicalization — possibly tied to per-load symlink resolution), #428 (default permission_mode loaded from same config files). Source: Jobdori live dogfood, `5a4cc506`, 2026-05-11.
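Fix-shape items (a) and (e) can be sketched as a loader that keys warnings on (file, field) and emits each at most once per process, however many times commands trigger a load. The `ConfigLoader` shape and field names below are hypothetical; real code would also cache the parsed config itself:

```rust
use std::collections::HashSet;
use std::path::PathBuf;

/// Sketch of the proposed warn-once loader: repeated loads within one
/// command process re-emit nothing already warned about.
struct ConfigLoader {
    warned: HashSet<(PathBuf, String)>, // (file, deprecated field) pairs already reported
    loads: u32,                         // would back the proposed `config_load_count` field
}

impl ConfigLoader {
    fn new() -> Self {
        Self { warned: HashSet::new(), loads: 0 }
    }

    /// Returns only the warnings that should actually reach stderr on this load.
    fn load(&mut self, file: &str, deprecated_fields: &[&str]) -> Vec<String> {
        self.loads += 1;
        deprecated_fields
            .iter()
            .filter_map(|f| {
                let key = (PathBuf::from(file), f.to_string());
                // HashSet::insert returns false for duplicates: warn only on first sight.
                self.warned
                    .insert(key)
                    .then(|| format!("warning: `{f}` in {file} is deprecated"))
            })
            .collect()
    }
}

fn main() {
    let mut loader = ConfigLoader::new();
    // `status` currently fans out to three loads; only the first should warn.
    assert_eq!(loader.load("~/.claw/settings.json", &["enabledPlugins"]).len(), 1);
    assert_eq!(loader.load("~/.claw/settings.json", &["enabledPlugins"]).len(), 0);
    assert_eq!(loader.load("~/.claw/settings.json", &["enabledPlugins"]).len(), 0);
    assert_eq!(loader.loads, 3);
    println!("3 loads, 1 warning emitted");
}
```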
447. **All JSON error envelopes go to STDERR not STDOUT; stdout is empty (0 bytes) on every `--output-format json` failure — breaks the standard automation pattern `output=$(claw cmd --output-format json)` which captures nothing on error and forces ugly `2>&1` redirects to even see the JSON** — dogfooded 2026-05-11 by Jobdori on `5ab969e7` in response to Clawhip pinpoint nudge at `1503396289071808523`. Reproduction (stderr-vs-stdout discipline audit): `claw --no-such-flag --output-format json >stdout.txt 2>stderr.txt` → stdout = **0 bytes**, stderr = 115 bytes containing `{"error":"unknown option: --no-such-flag","hint":"Run \`claw --help\` for usage.","kind":"cli_parse","type":"error"}`. Same pattern across four error envelopes probed: (a) `cli_parse` → stdout 0 / stderr 115; (b) `missing_credentials` → stdout 0 / stderr 853 (includes deprecation warnings ahead of envelope); (c) `session_load_failed` → stdout 0 / stderr 322; (d) `invalid_model_syntax` → stdout 0 / stderr 199. Success paths route correctly: `claw status --output-format json` → stdout 1496 / stderr 0. **The asymmetry is wrong on two axes:** (a) **JSON-format outputs should always go to stdout regardless of success/failure**: every major CLI in this class (kubectl, gh, aws, jq, terraform `-json`, `npm --json`) emits JSON on stdout for both ok and error paths; consumers parse `stdout | jq .kind` and switch on the kind to detect errors. claw's split forces consumers to capture both streams or use `2>&1` which then includes deprecation prose alongside the JSON envelope and breaks parsing. (b) **Deprecation/info warnings leak into the JSON error envelope on stderr**: when stderr is the only path to get the JSON, the deprecation warning prefix (`warning: ... enabledPlugins ... is deprecated`) precedes the JSON, making `tail -1 stderr.txt | jq .` fragile. **Three sibling problems:** (i) **breaks the canonical Bash idiom** `if ! 
output=$(cmd --output-format json); then echo "$output" | jq .error; fi` — `$output` is empty on error so the `jq` call sees nothing. (ii) **forces N-line stderr parsing**: to get the JSON envelope from stderr, automation must read until EOF, then skip leading `warning:` lines, then parse only the last `{...}` JSON. This is a brittle heuristic that breaks if more warnings are added. (iii) **inconsistent with text mode**: text-mode error output ALSO goes to stderr (e.g., `claw --no-such-flag` → stderr `[error-kind: cli_parse]\nerror: ...`) — that's correct for text mode (stderr is the diagnostic channel). The bug is JSON mode inheriting the same routing. **Required fix shape:** (a) JSON error envelopes go to STDOUT when `--output-format json` is active; (b) keep text-mode error output on stderr (no change for text path); (c) deprecation/info warnings should ALSO go to stderr in JSON mode (they're diagnostic prose, not part of the JSON contract) — separate channels: JSON envelope on stdout, prose warnings on stderr; (d) add `--quiet` / `--no-warn` flag to fully suppress stderr warnings for clean automation; (e) regression test: every `--output-format json` failure path emits the JSON envelope on stdout, exit non-zero, no JSON ever on stderr. **Why this matters:** the entire point of `--output-format json` is enabling automation. Splitting JSON success vs error across stdout vs stderr defeats the purpose — automation must capture both, dedupe sources, and parse mixed streams. Cross-references #422 (exit-code parity across error envelopes), #424 (deprecation warnings noise), #428 (envelope vs prose tension), #446 (multi-load deprecation duplication). Source: Jobdori live dogfood, `5ab969e7`, 2026-05-11.
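The channel split from fix-shape items (a)-(c) can be sketched as a routing rule: JSON envelopes always on stdout when JSON mode is active, prose warnings always on stderr, text-mode errors unchanged on stderr. Modeled below as returned line vectors so the rule is testable; the function name is hypothetical:

```rust
/// Sketch of the proposed stream discipline for `--output-format json`.
/// Returns (stdout_lines, stderr_lines) rather than printing.
fn route(json_mode: bool, envelope: &str, warnings: &[&str]) -> (Vec<String>, Vec<String>) {
    // Diagnostics are never part of the JSON contract: always stderr.
    let mut stderr: Vec<String> = warnings.iter().map(|w| format!("warning: {w}")).collect();
    let mut stdout = Vec::new();
    if json_mode {
        stdout.push(envelope.to_string()); // machine-readable channel, success or failure
    } else {
        stderr.push(envelope.to_string()); // text mode keeps error output on stderr
    }
    (stdout, stderr)
}

fn main() {
    let envelope = r#"{"error":"unknown option","kind":"cli_parse","type":"error"}"#;
    let (out, err) = route(true, envelope, &["`enabledPlugins` is deprecated"]);
    // `output=$(claw cmd --output-format json)` now captures the envelope ...
    assert_eq!(out, vec![envelope.to_string()]);
    // ... and stderr carries only the prose warning, never JSON.
    assert_eq!(err.len(), 1);
    let (out_text, err_text) = route(false, envelope, &[]);
    assert!(out_text.is_empty());
    assert_eq!(err_text, vec![envelope.to_string()]);
    println!("json envelope on stdout; warnings on stderr");
}
```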
448. **`sandbox --output-format json` has contradictory state flags — `enabled:true, supported:false, active:false, filesystem_active:true, allowed_mounts:[]`: claim that sandbox is "enabled" while OS doesn't support namespace isolation and `allowed_mounts:[]` is empty contradicts `filesystem_active:true filesystem_mode:"workspace-only"`** — dogfooded 2026-05-11 by Jobdori on `7244a82b` in response to Clawhip pinpoint nudge at `1503403842920779917` (using fresh-current-main runner at `/tmp/claw-dog-1430` per gajae's 14:00 protocol switch). Reproduction: `claw sandbox --output-format json` on macOS (where `unshare` is unavailable) returns `{"active":false,"active_namespace":false,"active_network":false,"allowed_mounts":[],"enabled":true,"fallback_reason":"namespace isolation unavailable (requires Linux with \`unshare\`)","filesystem_active":true,"filesystem_mode":"workspace-only","in_container":false,"kind":"sandbox","markers":[],"requested_namespace":true,"requested_network":false,"supported":false}`. **Three contradictions in the same envelope:** (a) `enabled:true` AND `supported:false`: what does "enabled" mean if the OS doesn't support sandboxing? Read literally, sandbox is *enabled but unsupported* — semantic nonsense. The likely intent is "user requested sandbox in config" but the field name `enabled` says "is ON". A better name would be `requested:true` or `config_intent:true`, with `enabled` reserved for the actually-active state. (b) `filesystem_active:true, filesystem_mode:"workspace-only"` AND `allowed_mounts:[]`: if the filesystem fence is active in workspace-only mode, the workspace directory itself MUST be an allowed mount. 
An empty `allowed_mounts:[]` array combined with `filesystem_active:true` means either (i) the fence is being misreported (it's not really active), (ii) the workspace is implicit and `allowed_mounts` only lists *additional* mounts, or (iii) the fence has no allowed paths and nothing is readable — all three are inconsistent with the user-facing summary. (c) `active:false` AND `filesystem_active:true`: the top-level `active` field is a single boolean summary, but it disagrees with `filesystem_active:true` (one component is active). Either `active` is "all components active" (then it should be `false` when any component is off) or "any component active" (then it should be `true` when filesystem is). The current value is `false` despite filesystem being active. **Sibling: no `claw sandbox --help`**: `claw sandbox status` and `claw sandbox --help` go to LLM-prompt fallback or hang (gajae confirmed at 13:00 that `sandbox status` returns typed `cli_parse` but `sandbox --help` is bounded — schema is non-uniform across help paths). **Required fix shape:** (a) rename `enabled` to `requested` or `config_intent` to disambiguate from "currently active"; (b) make `allowed_mounts` explicitly include the workspace when filesystem_mode is "workspace-only" (`allowed_mounts:[{path:"<cwd>",writable:true,reason:"workspace_root"}]`); (c) document the `active` aggregate semantics: pick either "all" or "any" composition rule and document the choice; (d) add `active_components:["filesystem"]` array as a richer alternative to the single boolean — surfaces exactly which sandbox subsystems are live; (e) regression test: when `filesystem_mode == "workspace-only"`, `allowed_mounts` MUST contain the cwd and `active` must agree with the documented composition rule. 
**Why this matters:** sandbox is the trust surface — automation that checks `sandbox.active == true` before running a risky LLM prompt sees `false` (no namespace, no network) and assumes no isolation, but `filesystem_active:true` means there IS partial isolation. The mixed signal forces consumers to OR all `*_active` fields together. Cross-references #428 (default permission_mode=danger-full-access — paired with sandbox-not-active means zero isolation), #444 (no broad-cwd guard — sandbox is the only safety net and its status is unclear). Source: Jobdori live dogfood, `7244a82b`, 2026-05-11.
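Fix-shape items (c) and (d) can be sketched by deriving the `active` aggregate from an `active_components` list under a documented "any" rule, so the summary boolean can never contradict `filesystem_active`. The struct and the "any" choice below are illustrative assumptions:

```rust
/// Sketch of a sandbox status whose aggregate is derived, not stored.
struct SandboxStatus {
    namespace: bool,
    network: bool,
    filesystem: bool,
}

impl SandboxStatus {
    /// Would back the proposed `active_components:[...]` array.
    fn active_components(&self) -> Vec<&'static str> {
        let mut c = Vec::new();
        if self.namespace { c.push("namespace"); }
        if self.network { c.push("network"); }
        if self.filesystem { c.push("filesystem"); }
        c
    }

    /// Documented "any" composition rule: true if at least one subsystem is live.
    fn active(&self) -> bool {
        !self.active_components().is_empty()
    }
}

fn main() {
    // The macOS envelope from this entry: no namespace, no network, fs fence on.
    let s = SandboxStatus { namespace: false, network: false, filesystem: true };
    assert_eq!(s.active_components(), vec!["filesystem"]);
    assert!(s.active()); // can no longer disagree with filesystem_active
    println!("active=true because the filesystem fence is live");
}
```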

# G002 alpha security map and verification plan
Generated by `worker-4` for OMX team task 5 on 2026-05-14.
## Scope and coordination
- Active goal context: `G002-alpha-security` / Stream 6 day-one security and permissions gate.
- Worker ownership: `worker-1` owns minimal implementation changes for workspace/path enforcement. `worker-4` owns this repository map, integration verification plan, changed-file/commit report, and exact verification evidence.
- Boundary: this report does not mutate `.omx/ultragoal` and does not edit shared security/path tests.
- Parallel probe status: three native subagents were spawned for repository map, test probe, and change-slice probe, but all failed with `429 Too Many Requests` before returning findings; the mapping below is based on direct repository inspection.
## Current permission and path enforcement map
### Runtime permission policy and enforcer
- `rust/crates/runtime/src/permissions.rs`
- Owns the `PermissionMode` ordering and `PermissionPolicy` authorization contract.
- Existing tests cover read-only denial, workspace-write escalation, prompt approvals/denials, danger-full-access allowance, override recording, and required-mode reporting.
- Integration risk: any new dynamic file/path rule must preserve the existing `PermissionPolicy::authorize` semantics so prompt/override audit events remain stable.
- `rust/crates/runtime/src/permission_enforcer.rs`
- `PermissionEnforcer::check`, `check_with_required_mode`, `check_file_write`, and `check_bash` convert policy outcomes into structured `EnforcementResult` payloads.
- `check_file_write` currently has the direct write gate for workspace-write mode.
- `is_within_workspace` is a string-prefix boundary check after simple relative-path joining; it does not canonicalize symlinks, `..`, Windows drive prefixes, or case variants.
- Existing tests cover read-only denial, workspace-write inside/outside paths, trailing slashes, root equality, bash read-only heuristics, prompt-mode denial payloads, and structured denied fields.
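The string-prefix hazard called out for `is_within_workspace` can be sketched with the regression case it misses. This is an illustrative assumption about the fix, not the current enforcer code: compare paths component-wise (after canonicalizing both in real code, which `Path::starts_with` alone does not do):

```rust
use std::path::Path;

/// Sketch of a component-wise workspace boundary check. `Path::starts_with`
/// compares whole path components, not bytes, so `/workspacex` no longer
/// passes for a `/workspace` root the way a string-prefix check allows.
/// Real code must canonicalize both paths first to handle symlinks and `..`.
fn is_within_workspace(root: &Path, candidate: &Path) -> bool {
    candidate.starts_with(root)
}

fn main() {
    let root = Path::new("/workspace");
    assert!(is_within_workspace(root, Path::new("/workspace/src/main.rs")));
    assert!(is_within_workspace(root, Path::new("/workspace"))); // root equality
    // The root-prefix collision case from the recommended regression list:
    assert!(!is_within_workspace(root, Path::new("/workspacex/evil")));
    println!("component-wise boundary check rejects /workspacex");
}
```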
### File tool path handling
- `rust/crates/runtime/src/file_ops.rs`
- `read_file`, `write_file`, and `edit_file` normalize paths before filesystem operations but do not themselves require a workspace root.
- `read_file_in_workspace`, `write_file_in_workspace`, and `edit_file_in_workspace` exist as boundary-enforced wrappers.
- `validate_workspace_boundary` canonicalizes through the caller-provided resolved path and checks `starts_with(workspace_root)`.
- `is_symlink_escape` detects direct symlink escapes by comparing canonical target to canonical workspace root.
- Search tools (`glob_search`, `grep_search`) derive walk roots and prune heavy directories, but they are separate from the write enforcement path.
- Existing tests cover oversized/binary reads, workspace-boundary read rejection, symlink escape detection, glob brace expansion, ignored directories, and grep/glob behavior.
### Bash command validation
- `rust/crates/runtime/src/bash_validation.rs`
- `validate_command` runs mode validation, sed validation, destructive warning checks, then path validation.
- `validate_read_only` blocks write-like commands, state-modifying commands, write redirects, and mutating git subcommands in read-only mode.
- `validate_mode` warns when workspace-write commands appear to target hard-coded system paths.
- `validate_paths` warns for `../`, `~/`, and `$HOME` references; it is intentionally heuristic and does not resolve shell expansion or canonical targets.
- Existing tests cover read-only blockers, destructive warnings, sed in-place blocking, path traversal/home warnings, command classification, and full pipeline allow/block/warn outcomes.
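The warning/classification character of this layer can be sketched as a deliberately heuristic read-only gate. This is an assumption-labeled illustration in the spirit of `validate_read_only`, not the module's actual rules; as the map notes, it cannot parse shell grammar or resolve expansion:

```rust
/// Sketch of a heuristic read-only gate: block obvious write commands,
/// in-place sed, and write redirects; allow everything else. A classifier,
/// not hard enforcement.
fn blocked_in_read_only(cmd: &str) -> bool {
    let write_cmds = ["rm", "mv", "touch", "mkdir"];
    let first_word = cmd.split_whitespace().next().unwrap_or("");
    write_cmds.contains(&first_word)
        || cmd.starts_with("sed -i")
        || cmd.contains('>') // write redirect; catches both `>` and `>>`
}

fn main() {
    assert!(blocked_in_read_only("rm -rf build"));
    assert!(blocked_in_read_only("echo hi > out.txt"));
    assert!(blocked_in_read_only("sed -i s/a/b/ file"));
    assert!(!blocked_in_read_only("grep -r TODO src"));
    assert!(!blocked_in_read_only("ls -la"));
    println!("read-only gate blocks writes, allows grep/ls");
}
```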
### Sandbox and diagnostics surfaces
- `rust/crates/runtime/src/sandbox.rs`
- Owns container/sandbox status detection and workspace-only sandbox command construction.
- Relevant for day-one security because sandbox status must not overstate filesystem isolation.
- `rust/crates/rusty-claude-cli/src/main.rs`
- Owns CLI permission-mode parsing, direct JSON/text diagnostic output, `/permissions`, `/status`, `/doctor`, and command dispatch paths.
- Existing CLI integration tests under `rust/crates/rusty-claude-cli/tests/` cover permission prompt scenarios and output-format contracts.
- `rust/crates/rusty-claude-cli/tests/mock_parity_harness.rs`
- End-to-end harness includes `bash_permission_prompt_approved`, `bash_permission_prompt_denied`, read/write file allow/deny, and plugin workspace-write scenarios.
## Existing G002-adjacent coverage
- Unit-level permission coverage:
- `cargo test -p runtime permissions::tests`
- `cargo test -p runtime permission_enforcer::tests`
- `cargo test -p runtime bash_validation::tests`
- `cargo test -p runtime file_ops::tests`
- CLI and integration coverage:
- `cargo test -p rusty-claude-cli --test mock_parity_harness`
- `cargo test -p rusty-claude-cli --test output_format_contract`
- `cargo test -p rusty-claude-cli --test cli_flags_and_config_defaults`
- Board/report validation coverage:
- `python3 scripts/validate_cc2_board.py --board .omx/cc2/board.json`
- `python3 .omx/cc2/validate_issue_parity_intake.py .omx/cc2/issue-parity-intake.json`
## Recommended safe work slices
### Implementation lane (owned by worker-1 unless re-scoped)
1. Replace string-prefix workspace boundary checks with canonical path comparison in the runtime enforcement path.
- Primary files: `rust/crates/runtime/src/permission_enforcer.rs`, possibly shared helper extraction from `rust/crates/runtime/src/file_ops.rs`.
- Regression cases: `../` traversal, symlink escape, root prefix collision (`/workspace` vs `/workspacex`), relative paths, trailing slash root equality.
2. Ensure direct file tools call workspace-aware wrappers when active permission mode is `workspace-write`.
- Primary files: likely `rust/crates/runtime/src/mcp_tool_bridge.rs` and/or the runtime tool execution bridge that calls `file_ops`.
- Regression cases: direct read/write paths, missing parent creation, symlink parent escape, and error payload stability.
3. Keep bash validation as a warning/classification layer unless a real shell-expansion resolver is introduced.
- Primary files: `rust/crates/runtime/src/bash_validation.rs`, `rust/crates/runtime/src/bash.rs`.
- Risk: heuristic parsing cannot faithfully resolve shell expansion, globs, aliases, or platform-specific path rules; avoid claiming hard enforcement unless execution sandbox or command resolver proves it.
### Test lane (coordinate with worker-3/worker-1 before editing)
1. Add unit regressions close to each enforcement function before changing behavior.
- `permission_enforcer.rs`: canonical path boundary and Windows-shaped path cases.
- `file_ops.rs`: write/edit workspace wrappers with symlink parent escapes and missing file parent canonicalization.
- `bash_validation.rs`: shell expansion/glob/path warnings remain warnings unless a resolver is introduced.
2. Add at least one integration test proving the runtime bridge actually routes file tools through workspace enforcement, not only helper functions.
- Candidate: `rust/crates/rusty-claude-cli/tests/mock_parity_harness.rs` for direct write denial and no file created outside workspace.
3. Preserve existing prompt/event visibility tests.
- Candidate surfaces: permission prompt scenarios in `mock_parity_harness.rs`, status/doctor JSON in `output_format_contract.rs`.
### Docs/reporting lane (owned by worker-4)
1. Keep this file as the integration handoff artifact for G002 mapping and verification.
2. Report changed files and commits relative to `origin/main` so the leader can integrate worker branches deterministically.
3. Include exact command evidence in the task lifecycle result.
## Changed files relative to `origin/main` at map time
The worktree currently contains these files added relative to `origin/main` before this task report:
- `.omx/cc2/board.json`
- `.omx/cc2/board.md`
- `.omx/cc2/issue-parity-intake.json`
- `.omx/cc2/issue-parity-intake.md`
- `.omx/cc2/render_board_md.py`
- `.omx/cc2/validate_issue_parity_intake.py`
- `scripts/cc2_board.py`
- `scripts/generate_cc2_board.py`
- `scripts/validate_cc2_board.py`
This task adds:
- `docs/g002-security-verification-map.md`
## Commits relative to `origin/main` at map time
- `8311655`: `omx(team): auto-checkpoint worker-1 [1]`
- `c6e2a7d`: `omx(team): merge worker-1`
- `481585f`: `omx(team): auto-checkpoint worker-1 [1]`
- `74bbf4b`: `omx(team): auto-checkpoint worker-4 [unknown]`
- `5c77896`: `omx(team): auto-checkpoint worker-1 [1]`
- `07dad88`: `Classify issue and parity intake for CC2 board integration`
- `424825f`: `task: G001 human board and docs rendering`
- `d15268e`: `Create a canonical CC2 board so every frozen ROADMAP heading is verifiably mapped`
- `45b43b5`: `Make the CC2 board schema executable for G001`
## Verification checklist for leader integration
Run these from the repository root unless noted:
1. Python board/schema validation:
- `python3 scripts/validate_cc2_board.py --board .omx/cc2/board.json`
- `python3 .omx/cc2/validate_issue_parity_intake.py .omx/cc2/issue-parity-intake.json`
2. Rust formatting and lint/type checks:
- `scripts/fmt.sh --check`
- `(cd rust && cargo check --workspace)`
- `(cd rust && cargo clippy --workspace --all-targets -- -D warnings)`
3. Targeted G002 security tests:
- `(cd rust && cargo test -p runtime permissions::tests permission_enforcer::tests bash_validation::tests file_ops::tests)`
- `(cd rust && cargo test -p rusty-claude-cli --test mock_parity_harness)`
4. Full regression:
- `(cd rust && cargo test --workspace)`
## Worker-4 verification evidence (2026-05-14)
PASS:
- `python3 scripts/validate_cc2_board.py --board .omx/cc2/board.json` → `PASS cc2 board validation`; 729 items; ROADMAP headings `124/124`; ROADMAP actions `542/542`.
- `python3 .omx/cc2/validate_issue_parity_intake.py .omx/cc2/issue-parity-intake.json` → `PASS issue/parity intake: 19 issue rows, 9 parity rows`.
- `scripts/fmt.sh --check` → no output and zero exit before Rust checks continued.
- `(cd rust && cargo check --workspace)` → `Finished dev profile` successfully.
- `(cd rust && cargo test -p runtime permissions::tests)` → 9 passed.
- `(cd rust && cargo test -p runtime permission_enforcer::tests)` → 21 passed.
- `(cd rust && cargo test -p runtime bash_validation::tests)` → 32 passed.
- `(cd rust && cargo test -p runtime file_ops::tests)` → 14 passed.
- `(cd rust && cargo test -p rusty-claude-cli --test mock_parity_harness)` → 1 passed.
FAIL / integration blockers observed on this worktree:
- `(cd rust && cargo clippy --workspace --all-targets -- -D warnings)` failed in existing runtime code, not this docs-only task:
- `rust/crates/runtime/src/compact.rs:215` / `:216`: `clippy::match_same_arms`.
- `rust/crates/runtime/src/policy_engine.rs:5`: `clippy::duration_suboptimal_units`.
- `rust/crates/runtime/src/sandbox.rs:295-302`: `clippy::map_unwrap_or`.
- `(cd rust && cargo test --workspace)` failed after broad success in API/commands/plugins/runtime tests because `rusty-claude-cli` unit test `tests::session_lifecycle_prefers_running_process_over_idle_shell` asserted `RunningProcess` but observed `IdleShell`.
- Rerun of the specific failing test confirmed deterministic failure: `(cd rust && cargo test -p rusty-claude-cli --bin claw tests::session_lifecycle_prefers_running_process_over_idle_shell -- --exact --nocapture)` → 0 passed, 1 failed with the same `IdleShell` vs `RunningProcess` assertion.
Recommended owner for failures: not `worker-4` unless re-scoped. These failures are outside the docs/report artifact and touch shared runtime/CLI implementation files.


@@ -0,0 +1,96 @@
# G003 boot/session/preflight verification map
Generated by `worker-1` for OMX team task 2 on 2026-05-14.
## Scope and coordination
- Active goal context: `G003-boot-session` / Stream 1 reliable worker boot and session control.
- Boundary: this artifact is an audit/integration map only. It does not mutate `.omx/ultragoal` and it does not change shared implementation or tests.
- Current worker split from leader mailbox:
- `worker-1`: task 1 worker boot / prompt SLA plus this task 2 audit map.
- `worker-2`: default trusted roots / trust resolver.
- `worker-3`: startup-no-evidence classifier.
- `worker-4`: session control plus preflight/doctor JSON surfaces.
- Native subagent probes were attempted for Task 2 (`test probe` and `debug/root-cause probe`) but both failed before returning findings with `429 Too Many Requests`; the map below is based on direct repository inspection.
## Implementation surface map
### Worker boot lifecycle and prompt SLA
- `rust/crates/runtime/src/worker_boot.rs`
- Core state types: `WorkerStatus`, `WorkerFailureKind`, `WorkerEventKind`, `WorkerEventPayload`, `StartupFailureClassification`, `StartupEvidenceBundle`, `WorkerTaskReceipt`, and `WorkerReadySnapshot`.
- Control plane: `WorkerRegistry::{create,get,observe,resolve_trust,send_prompt,await_ready,restart,terminate,observe_completion,observe_startup_timeout}`.
- Lifecycle states currently covered in code: `spawning`, `trust_required`, `tool_permission_required`, `ready_for_prompt`, `running`, `finished`, and `failed`.
- Prompt delivery semantics currently use `Running` events and fields `prompt_in_flight`, `last_prompt`, `expected_receipt`, `replay_prompt`, and `prompt_delivery_attempts`.
- Startup-no-evidence surface: `observe_startup_timeout` builds `StartupEvidenceBundle` and classifies trust, tool permission, prompt acceptance timeout, prompt misdelivery, transport death, worker crash, or unknown.
- File observability surface: `emit_state_file` writes `.claw/worker-state.json` with status, readiness, trust state, prompt-in-flight flag, last event, and update age.
- `rust/crates/tools/src/lib.rs`
- Tool APIs expose the worker control plane through `WorkerCreate`, `WorkerGet`, `WorkerObserve`, `WorkerResolveTrust`, `WorkerAwaitReady`, `WorkerSendPrompt`, `WorkerRestart`, `WorkerTerminate`, and `WorkerObserveCompletion`.
- `WorkerCreate` merges `ConfigLoader::trusted_roots()` with per-call `trusted_roots` before calling `WorkerRegistry::create`.
- Tool-level tests exercise worker create/observe/send/restart/terminate/completion and state-file transitions.
### Trust resolver and default trusted roots
- `rust/crates/runtime/src/trust_resolver.rs`
- `TrustConfig`, `TrustAllowlistEntry`, and `TrustResolver` model trust prompts, allowlist/denylist policy, auto-trust, manual approval, and emitted trust events.
- `path_matches_trusted_root` and internal `path_matches` canonicalize paths when possible.
- Hazard: prefix matching must avoid accidental sibling matches such as `/tmp/work` matching `/tmp/work-evil`; worker-2 owns any changes here.
- `rust/crates/runtime/src/config.rs`
- `trustedRoots` is parsed by `parse_optional_trusted_roots` and exposed through `RuntimeConfig::trusted_roots()` / feature config accessors.
- Current default is empty when unset; any project default roots work belongs to worker-2.
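The sibling-match hazard above comes down to comparing whole path components rather than raw string prefixes. A minimal sketch under that assumption (the `is_under_trusted_root` name is illustrative; the authoritative logic is `path_matches_trusted_root` in `trust_resolver.rs`, which additionally canonicalizes paths):

```rust
use std::path::Path;

/// True only when `candidate` is `root` itself or a descendant of it.
/// `Path::starts_with` compares whole path components, so `/tmp/work`
/// can never match the sibling directory `/tmp/work-evil`.
fn is_under_trusted_root(root: &str, candidate: &str) -> bool {
    Path::new(candidate).starts_with(Path::new(root))
}
```

A naive `str::starts_with("/tmp/work")` would accept `/tmp/work-evil`; the component-wise check rejects it while still accepting `/tmp/work/crate/src`.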
### Session control
- `rust/crates/runtime/src/session_control.rs`
- `SessionStore` namespaces sessions by canonical workspace fingerprint.
- Key API: `from_cwd`, `from_data_dir`, `create_handle`, `resolve_reference`, `resolve_managed_path`, `list_sessions`, `latest_session`, `load_session`, and `fork_session`.
- Guardrail: `validate_loaded_session` rejects cross-workspace sessions and allows legacy sessions only when their path remains inside the current workspace.
- Worker-4 owns changes to this lane.
### CLI doctor/status/preflight and bootstrap-adjacent surfaces
- `rust/crates/commands/src/lib.rs`
- Slash command definitions include `/status`, `/sandbox`, and `/doctor`.
- JSON rendering for command surfaces exists through handler functions and tests in the same module.
- `rust/crates/tools/src/lib.rs`
- Bash and PowerShell tool runners include `workspace_test_branch_preflight`, which returns structured output with `return_code_interpretation: preflight_blocked:branch_divergence` for broad workspace tests on stale branches.
- Tests around `bash_workspace_tests_are_blocked_when_branch_is_behind_main` and targeted-test skipping protect this preflight behavior.
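The preflight decision reduces to two inputs: how broad the test run is and whether the branch has fallen behind main. A hypothetical reduction of that rule (the real implementation is `workspace_test_branch_preflight` in `tools/src/lib.rs`; this function name and signature are illustrative only):

```rust
/// Hypothetical reduction of the branch preflight: broad workspace test
/// runs on a branch that is behind main are blocked with the structured
/// interpretation string; targeted test runs skip the preflight entirely.
fn return_code_interpretation(commits_behind_main: u32, workspace_wide: bool) -> &'static str {
    if workspace_wide && commits_behind_main > 0 {
        "preflight_blocked:branch_divergence"
    } else {
        "ok"
    }
}
```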
## Existing focused verification commands
Run from `rust/` unless noted.
- Worker boot runtime contract:
- `cargo test -p runtime worker_boot -- --nocapture`
- Worker tool API contract:
- `cargo test -p tools worker_ -- --nocapture`
- Session control contract:
- `cargo test -p runtime session_control -- --nocapture`
- Trust resolver/config trusted roots:
- `cargo test -p runtime trust_resolver -- --nocapture`
- `cargo test -p runtime config::tests::parses_trusted_roots_from_settings config::tests::trusted_roots_default_is_empty_when_unset -- --nocapture`
- Preflight/tool branch guardrails:
- `cargo test -p tools bash_workspace_tests_are_blocked_when_branch_is_behind_main bash_targeted_tests_skip_branch_preflight -- --nocapture`
- Formatting/type/lint baseline:
- `../scripts/fmt.sh --check`
- `cargo check -p runtime -p tools -p commands`
- `cargo clippy -p runtime -p tools -p commands --all-targets --no-deps -- -D warnings`
## Gaps and hazards for leader integration
- Prompt SLA event naming is partially implicit: `send_prompt` emits `WorkerEventKind::Running`; it does not expose separate `prompt.sent`, `prompt.accepted`, `prompt.acceptance_delayed`, or `prompt.acceptance_timeout` event names. The current equivalent evidence is `prompt_in_flight`, `Running`, `observe_completion`, and startup-timeout classification.
- `StartupFailureClassification::PromptAcceptanceTimeout` is covered in `worker_boot` tests; full terminal/transport integration should still be verified by the leader or worker-3 if a real pane watcher exists outside the in-memory registry.
- Default trusted roots are parsed and merged into `WorkerCreate`, but unset config currently means no default roots. Worker-2 owns any change to default root selection.
- Session control protects workspace fingerprints at load/fork time; worker-4 owns CLI/doctor/preflight JSON contract changes.
- Full-workspace clippy currently has known unrelated runtime findings observed during task 1 verification; do not block this docs-only map on those unless leader re-scopes cleanup.
## Recommended safe integration order
1. Integrate worker boot / prompt SLA changes first and run `cargo test -p runtime worker_boot -- --nocapture` plus `cargo test -p tools worker_ -- --nocapture`.
2. Integrate trust-root changes and rerun trust/config tests plus the worker create config merge test.
3. Integrate startup-no-evidence classifier changes and rerun `cargo test -p runtime worker_boot -- --nocapture`.
4. Integrate session control / preflight / doctor JSON changes and rerun session-control, commands JSON, and preflight tests.
5. Run final formatting, targeted cargo check/clippy, then broader workspace tests with known full-workspace failures documented separately.


@@ -0,0 +1,67 @@
# G004 event and report contract guidance
Captured: 2026-05-14 during the Stream 2 `G004-events-reports` team run.
Purpose: keep the user/developer-facing contract guidance for ROADMAP Phase 2 in one tracked source that points back to the code and roadmap anchors. This document is intentionally not the implementation map for task 5; it describes the interoperability contract consumers should rely on as the lane-event, report-schema, approval-token, and capability-negotiation lanes land.
## Source-of-truth anchors
| Contract family | Roadmap anchor | Current implementation / owner-facing anchor | Consumer guidance |
| --- | --- | --- | --- |
| Canonical lane events | `ROADMAP.md` Phase 2 §4, §4.5, §4.6, §4.7 | `rust/crates/runtime/src/lane_events.rs` (`LaneEventName`, `LaneEventStatus`, `LaneEventMetadata`, terminal reconciliation helpers) | Consume `event`, `status`, `emittedAt`, and `metadata` fields as the canonical state stream; do not infer lane state from terminal text when a structured event is present. |
| Report schema v1 and projections | `ROADMAP.md` §4.25-§4.34 | Stream 2 report-schema lane / fixtures as they land | Treat a report as a versioned canonical payload plus derived projections. A projection may omit or transform fields only with explicit provenance: compatibility downgrade, redaction policy, truncation, or source absence. |
| Policy-blocked handoff and approval-token chain | `ROADMAP.md` §4.37-§4.39 | Stream 2 approval-token lane as it lands | Treat policy blocks and owner approvals as typed artifacts, not prose. Execute an exception only when the approval token matches actor, policy, action, repo/branch/commit scope, expiry, and one-time-use state. |
| Capability negotiation | `ROADMAP.md` §4.25, §4.26, §4.32, §4.34 | Report-schema/projection fixtures and consumer conformance cases as they land | Consumers must advertise supported schema versions, optional field families, projection views, redaction semantics, and downgrade handling before relying on reduced payloads. |
## Lane event contract
The lane-event stream is the first machine-trustworthy surface for Stream 2. Consumers should expect these invariants when reading `LaneEvent` payloads:
- `event` is a typed event name, currently including the core lane lifecycle (`lane.started`, `lane.ready`, `lane.blocked`, `lane.red`, `lane.green`, `lane.finished`, `lane.failed`), branch health (`branch.stale_against_main`, `branch.workspace_mismatch`), reconciliation (`lane.reconciled`, `lane.superseded`, `lane.closed`), and ship provenance (`ship.prepared`, `ship.commits_selected`, `ship.merged`, `ship.pushed_main`).
- `status` is the normalized state for the event; consumers should prefer it over freeform `detail` text for automation.
- `metadata.seq`, `metadata.timestamp_ms`, and terminal fingerprints are the ordering/deduplication hooks. Consumers should use terminal reconciliation output rather than double-reporting contradictory terminal bursts.
- `metadata.provenance`, `metadata.environment_label`, `metadata.emitter_identity`, and `metadata.confidence_level` tell consumers whether an event is live lane truth, test traffic, healthcheck/replay output, or transport-layer evidence.
- `metadata.session_identity` and `metadata.ownership` bind a lane event to the session, workspace, workflow scope, owner, and watcher action. A watcher should not act on events whose ownership says `observe` or `ignore`.
Minimal consumer rule: if a structured event exists, pane text is supporting evidence only. Pane scraping must not override a higher-confidence typed event with matching session/workflow ownership.
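The dedup hooks named above can be sketched as a keyed filter over a terminal burst. The real helpers are `dedupe_terminal_events` / `reconcile_terminal_events` in `lane_events.rs`; the struct shape below is a hypothetical simplification of `LaneEvent` for illustration:

```rust
use std::collections::HashSet;

/// Hypothetical slimmed-down lane event carrying only the dedup hooks
/// the contract names: `metadata.seq` plus the event fingerprint.
#[derive(Debug, Clone, PartialEq)]
struct LaneEventLite {
    seq: u64,
    fingerprint: String,
    event: String,
}

/// Keep the first occurrence of each (seq, fingerprint) pair so a
/// contradictory terminal burst is not double-reported downstream.
fn dedupe_terminal_burst(events: Vec<LaneEventLite>) -> Vec<LaneEventLite> {
    let mut seen = HashSet::new();
    events
        .into_iter()
        .filter(|e| seen.insert((e.seq, e.fingerprint.clone())))
        .collect()
}
```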
## Report schema v1 contract
A Stream 2 report should be treated as a canonical fact record with optional projections. Consumers should preserve these semantics even when they receive only a downgraded view:
- Every report payload declares a schema version and a stable report identity/content hash for the full-fidelity canonical payload.
- Assertions are labeled as `fact`, `hypothesis`, or another declared evidence class, with confidence and source references. Negative evidence is first-class: `not observed`, `checked and absent`, and `redacted` are distinct states.
- Field deltas name the field, previous value/state, new value/state, attribution, and whether the delta came from source content, projection, downgrade, or redaction policy.
- Projections carry lineage back to the canonical report id/content hash and name the projection view, capability set, schema version, redaction policy, and deterministic rendering inputs.
- Redaction provenance is explicit. A missing field without a redaction/downgrade/source-absence reason is not enough evidence for an automated consumer to conclude the underlying fact is absent.
Minimal consumer rule: store the canonical identity and projection metadata together. Do not compare two projections as state changes unless their canonical content hash or declared projection inputs differ.
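That consumer rule can be stated in a few lines. The record below is a hypothetical reduction of a projection to just its lineage fields, not the `ReportProjectionV1` shape itself:

```rust
/// Hypothetical minimal projection record: only the lineage fields
/// needed to apply the state-change rule.
#[derive(Debug, Clone, PartialEq)]
struct ProjectionLite {
    canonical_content_hash: String, // identity of the full-fidelity report
    view: String,                   // audience-specific rendering
}

/// Two projections witness a state change only when their canonical
/// content hashes differ; different views over the same hash are just
/// re-renderings of the same underlying report.
fn is_state_change(old: &ProjectionLite, new: &ProjectionLite) -> bool {
    old.canonical_content_hash != new.canonical_content_hash
}
```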
## Approval-token and policy-blocked contract
Policy-blocked actions and owner-approved exceptions belong in the same structured event/report family:
- A policy block names the typed reason, policy source, actor scope, blocked action, and safe fallback path.
- An approval token names the approving actor, policy exception, action, repository/worktree/branch/commit scope, expiry, and allowed use count.
- Token consumption records the exact action and scope that spent the token. Replays, scope expansion, expired tokens, and revoked tokens should surface typed policy errors.
- Delegation traceability stays attached when another worker/lane executes the approved action; the executor must be able to prove which approval artifact authorized the exception.
Minimal consumer rule: prose such as "approved" is not an executable approval. Require the structured token and verify that it is unconsumed and scoped to the exact action before proceeding.
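A sketch of the verification order the contract implies, under the assumption of a much-reduced token shape (field names, the `spend` helper, and the error variants here are hypothetical; the real approval-token lane is owned by worker-2):

```rust
/// Hypothetical reduced approval token: actor, action, and scope must
/// match exactly; expiry and use count gate consumption.
#[derive(Debug, Clone, PartialEq)]
struct ApprovalToken {
    actor: String,
    action: String,
    commit_scope: String,
    expires_at_ms: u64,
    uses_remaining: u32,
}

#[derive(Debug, PartialEq)]
enum PolicyError {
    ScopeMismatch,
    Expired,
    Consumed,
}

/// Spend a token only when actor/action/scope match exactly, the token
/// has not expired, and at least one use remains; every failure is a
/// typed policy error rather than prose.
fn spend(
    token: &mut ApprovalToken,
    actor: &str,
    action: &str,
    commit: &str,
    now_ms: u64,
) -> Result<(), PolicyError> {
    if token.actor != actor || token.action != action || token.commit_scope != commit {
        return Err(PolicyError::ScopeMismatch);
    }
    if now_ms >= token.expires_at_ms {
        return Err(PolicyError::Expired);
    }
    if token.uses_remaining == 0 {
        return Err(PolicyError::Consumed);
    }
    token.uses_remaining -= 1;
    Ok(())
}
```

Note that a replay of a one-time token surfaces `Consumed` rather than silently succeeding, matching the replay rule above.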
## Capability negotiation and conformance
Mixed-version consumers are expected during Stream 2 rollout. Producers and consumers should negotiate instead of silently dropping fields:
- Consumers advertise supported report schema versions, field families, projection views, redaction states, downgrade semantics, and fixture/conformance suite version.
- Producers preserve one canonical full-fidelity report and emit downgraded projections only with `downgraded_for_compatibility` metadata.
- Deterministic projection inputs include schema version, consumer capability set, projection policy version, redaction policy version, and canonical content hash.
- Consumer conformance should distinguish syntax acceptance from semantic correctness, especially for `redacted` vs `missing`, stale vs current projections, negative evidence, and approval-token replay states.
Minimal consumer rule: an older consumer may accept a downgraded projection, but it must surface the downgrade as a capability limitation rather than treating omitted fields as canonical absence.
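The distinction between a dropped field and an absent fact can be encoded directly in the projection type. A sketch (the enum variants and `project_field` helper are hypothetical names, not the `report_schema.rs` API):

```rust
/// Hypothetical per-field projection states. Keeping downgrade and
/// absence as distinct variants means an older consumer can never
/// mistake a capability limitation for canonical absence.
#[derive(Debug, PartialEq)]
enum FieldState {
    Present(String),
    DowngradedForCompatibility,
    CheckedAndAbsent,
}

/// Project one canonical field for a consumer: unsupported field
/// families are dropped loudly with downgrade provenance, never silently.
fn project_field(canonical_value: Option<&str>, consumer_supports_family: bool) -> FieldState {
    match (canonical_value, consumer_supports_family) {
        (Some(v), true) => FieldState::Present(v.to_string()),
        (Some(_), false) => FieldState::DowngradedForCompatibility,
        (None, _) => FieldState::CheckedAndAbsent,
    }
}
```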
## Documentation maintenance rules
- Keep ROADMAP Phase 2 as the product requirement source and this file as the contract-reading guide.
- Keep Rust type names and event names aligned with `rust/crates/runtime/src/lane_events.rs`; update this document in the same change when public event names or metadata semantics change.
- Keep report-schema examples/fixtures aligned with this guide once the schema lane lands; fixture updates should explain intentional schema or projection changes.
- Do not mutate `.omx/ultragoal` from worker lanes. Leader-owned Ultragoal checkpointing consumes commits and verification evidence from task results.


@@ -0,0 +1,57 @@
# G004 events/reports verification map
Scope source: OMX team `g004-events-reports-u-e61d2271`, worker-1 tasks 1, 2, 4, 5. Workers must not mutate `.omx/ultragoal`; leader owns aggregate checkpoints.
## Ownership boundaries
- **Lane events / event identity / terminal reconciliation** — `rust/crates/runtime/src/lane_events.rs`, exported through `rust/crates/runtime/src/lib.rs`; tool-manifest consumers in `rust/crates/tools/src/lib.rs` write `LaneEvent` vectors.
- **Report schema v1 / projection / redaction / capability negotiation** — `rust/crates/runtime/src/report_schema.rs`, exported through `rust/crates/runtime/src/lib.rs`; fixture note at `rust/crates/runtime/tests/fixtures/report_schema_v1/README.md`.
- **Approval-token chain** — ROADMAP §§4.38-4.40; owned by worker-2 for this team split. Worker-1 did not edit it.
- **Pinpoint closure batch** — runtime hygiene across compact/search-parser/policy/sandbox/integration-test surfaces: `rust/crates/runtime/src/compact.rs`, `rust/crates/runtime/src/file_ops.rs`, `rust/crates/runtime/src/policy_engine.rs`, `rust/crates/runtime/src/sandbox.rs`, `rust/crates/runtime/tests/integration_tests.rs`.
- **Regression harness / docs alignment** — worker-3/worker-4 lanes per leader split. Coordinate before editing shared docs/tests.
## Relevant symbols and files
- `LaneEventName`, `LaneEventStatus`, `LaneEventMetadata`, `LaneEventBuilder`, `compute_event_fingerprint`, `dedupe_terminal_events`, `reconcile_terminal_events` in `runtime/src/lane_events.rs`.
- `CanonicalReportV1`, `ReportClaim`, `NegativeEvidence`, `FieldDelta`, `ConsumerCapabilities`, `ReportProjectionV1`, `canonicalize_report`, `project_report`, `report_schema_v1_registry` in `runtime/src/report_schema.rs`.
- `AgentOutput.lane_events`, `persist_agent_terminal_state`, `write_agent_manifest`, `maybe_commit_provenance` in `tools/src/lib.rs`.
- Search/parser closure helpers: `summarize_messages` in `compact.rs`, `grep_search_impl` / `build_grep_content_output` in `file_ops.rs`.
## Completed worker-1 commits
- `f45f05e` / task 1 auto-checkpoint — terminal event fingerprints use stable SHA-256-derived canonical JSON, and production convenience terminal events attach/refresh fingerprints after payload changes.
- `3989fc0` — report schema v1 contract, deterministic projection/redaction provenance, capability negotiation, and fixture note.
- `7fff4c4` / task 4 auto-checkpoint — strict runtime clippy closure batch across compact/file_ops/policy/sandbox/integration tests.
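The stability property behind the fingerprint commit can be shown in a few lines: hash a canonical (sorted-key) rendering so field insertion order never changes the result. The real `compute_event_fingerprint` derives from SHA-256 over canonical JSON; std's `DefaultHasher` stands in below for a dependency-free sketch, so only the determinism property, not the exact digest, carries over:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::BTreeMap;
use std::hash::{Hash, Hasher};

/// Hash a canonical rendering of the payload. BTreeMap iteration is
/// sorted by key, so two payloads with the same fields produce the same
/// fingerprint regardless of insertion order, and any payload change
/// refreshes the fingerprint.
fn fingerprint(payload: &BTreeMap<&str, &str>) -> u64 {
    let mut h = DefaultHasher::new();
    for (k, v) in payload {
        k.hash(&mut h);
        v.hash(&mut h);
    }
    h.finish()
}
```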
## Current verification evidence
Run from `rust/` unless noted:
- `cargo test -p runtime lane_events -- --nocapture` — PASS, 46 lane-event tests.
- `cargo test -p runtime report_schema -- --nocapture` — PASS, 4 report-schema tests.
- `cargo check -p runtime` — PASS.
- `cargo clippy -p runtime --all-targets -- -D warnings` — PASS after task 4 closure batch.
- `cargo test -p runtime -- --nocapture` — PASS, 531 unit tests, 12 integration tests, doc-tests pass.
- `cargo test -p tools lane_event_schema_serializes_to_canonical_names -- --nocapture` — PASS, 1 targeted tools contract test.
## Leader integration verification plan
1. Inspect worker commits: `git log --oneline --decorate --max-count=8`.
2. Re-run focused contracts:
- `cd rust && cargo test -p runtime lane_events -- --nocapture`
- `cd rust && cargo test -p runtime report_schema -- --nocapture`
- `cd rust && cargo test -p tools lane_event_schema_serializes_to_canonical_names -- --nocapture`
3. Re-run runtime quality gate:
- `cd rust && cargo check -p runtime`
- `cd rust && cargo clippy -p runtime --all-targets -- -D warnings`
- `cd rust && cargo test -p runtime -- --nocapture`
4. If merging with worker-2 approval-token work, additionally run the worker-2 focused approval-token tests and check for export conflicts in `runtime/src/lib.rs`.
5. If merging with worker-3/4 docs or harness work, re-run their named regression harnesses plus `git diff --check`.
## Integration hazards
- `runtime/src/lib.rs` export blocks are shared; resolve conflicts by keeping both lane-event and report-schema exports sorted enough to remain readable.
- `tools/src/lib.rs` serializes lane events into agent manifests; terminal fingerprint changes intentionally affect `metadata.event_fingerprint` for finished/failed/superseded/merged/closed events with payloads.
- `report_schema.rs` currently defines the reusable contract and in-code deterministic fixtures; it does not yet wire report emission into CLI/status surfaces.
- ROADMAP approval-token §§4.38-4.40 remain a separate lane; do not treat worker-1 report schema as an approval artifact.
- Full workspace checks may include unrelated slow/provider-dependent tests; the verified local gate for this stream is runtime + targeted tools tests above.


@@ -0,0 +1,42 @@
# Claw Code 2.0 PR and Issue Resolution Gate
This gate was added to the Claw Code 2.0 Ultragoal after the explicit requirement:
> all PRs should be merged and all issues should be resolved if resolvable and correct.
## Scope
Before the Claw Code 2.0 Ultragoal can be marked complete:
1. Every open GitHub PR at the current final-gate snapshot must be triaged.
2. PRs that are correct, compatible with Claw Code 2.0 direction, and pass required verification must be merged.
3. PRs that are stale, incorrect, duplicative, unsafe, spam, or outside Claw Code scope must not be merged; each needs a recorded rationale.
4. Every open GitHub issue at the current final-gate snapshot must be triaged.
5. Issues that are resolvable and correct must be fixed or explicitly linked to a merged fix.
6. Issues that are spam, duplicates, incorrect, unactionable, externally blocked, or not Claw Code work must be closed or labeled/commented with rationale when repository policy allows.
7. The final completion audit must use a fresh GitHub snapshot, not only the planning snapshot.
## Current live snapshot
A live snapshot was captured locally during G002 execution:
- PR snapshot: `.omx/research/github-live/open-prs.json`
- Issue snapshot: `.omx/research/github-live/open-issues.json`
- Captured on: 2026-05-14 during the active Ultragoal run.
- Observed counts: 50 open PR records and 1000 open issue records from GitHub CLI list calls.
These local `.omx/research/github-live/*` files are evidence inputs, not final proof. The final gate must refresh them and compare deltas.
## Required final evidence
The final report must include:
- Fresh `gh pr list --state open` and `gh issue list --state open` snapshots.
- A PR ledger with one row per PR: merge / reject / defer, reason, verification, commit/merge reference.
- An issue ledger with one row per issue: fixed / duplicate / spam / invalid / deferred-with-rationale / externally-blocked, reason, and linked evidence.
- Verification that no correct, mergeable PR remains unmerged without rationale.
- Verification that no resolvable, correct issue remains open without a fix or rationale.
## Non-goals
This gate does not require merging unsafe, unverified, incompatible, spam, or incorrect contributions. It requires explicit evidence-backed triage and action for everything that is correct and resolvable.

New file: `docs/roadmap-pr-goals.md` (58 lines)

@@ -0,0 +1,58 @@
# Roadmap PR goal intake
Captured: 2026-05-14 (Asia/Seoul) during the Claw Code 2.0 Ultragoal run.
Purpose: make the user's follow-up requirement durable: all roadmap PRs should be merged when correct/resolvable, and unresolved roadmap deltas should become Ultragoal work rather than being lost. This file is a tracked companion to the leader-owned `.omx/ultragoal/goals.json` and `.omx/ultragoal/ledger.jsonl` artifacts.
## Merge policy
- Merge only PRs that are still relevant to Claw Code 2.0, are non-draft, target `main`, and are conflict-free after a fresh mergeability refresh.
- Prefer squash merges with a Lore-style body when GitHub allows a direct PR merge.
- If a PR is documentation-only but adds a real roadmap gap, merging it is acceptable once checks/conflicts are clean.
- If a PR is stale, duplicated by already-landed work, or not product-aligned, do not force-merge; record the rationale and map any still-correct requirement into G011/G012.
- After merging roadmap PRs, refresh generated board artifacts (`.omx/cc2/board.json`, `.omx/cc2/board.md`) so Stream 0 coverage stays current.
## Open roadmap PRs with green historical checks
These are first-pass merge candidates, pending fresh mergeability and conflict checks against current `main`.
| PR | Title | Branch | Checks | Mergeable | URL |
| --- | --- | --- | --- | --- | --- |
| #2848 | docs(roadmap): add #333 — no in-session settings inspect command | `docs/roadmap-333-no-settings-inspect-command` -> `main` | 4/4 checks successful | UNKNOWN | https://github.com/ultraworkers/claw-code/pull/2848 |
| #2846 | docs(roadmap): add #331 — export silently overwrites on repeated invocations | `docs/roadmap-331-export-filename-collision` -> `main` | 4/4 checks successful | UNKNOWN | https://github.com/ultraworkers/claw-code/pull/2846 |
| #2869 | docs(roadmap): add #358 — history entries missing role field, no pagination | `docs/roadmap-348-history-entries-missing-role` -> `main` | 4/4 checks successful | UNKNOWN | https://github.com/ultraworkers/claw-code/pull/2869 |
| #2850 | docs(roadmap): add #335 — session list omits created_at_ms field | `docs/roadmap-335-session-list-no-created-at` -> `main` | 4/4 checks successful | UNKNOWN | https://github.com/ultraworkers/claw-code/pull/2850 |
| #2868 | docs(roadmap): add #356 — session list title always null; no rename command | `docs/roadmap-347-session-list-title-always-null` -> `main` | 4/4 checks successful | UNKNOWN | https://github.com/ultraworkers/claw-code/pull/2868 |
| #2865 | docs(roadmap): add #362 — doctor auth false-positive: misses CLI session tokens | `docs/roadmap-345-doctor-auth-check-incomplete` -> `main` | 4/4 checks successful | UNKNOWN | https://github.com/ultraworkers/claw-code/pull/2865 |
| #2864 | docs(roadmap): add #364 — /cost returns no cost_usd; identical to /stats | `docs/roadmap-344-cost-command-no-dollar-amount` -> `main` | 4/4 checks successful | UNKNOWN | https://github.com/ultraworkers/claw-code/pull/2864 |
| #2867 | docs(roadmap): add #368 — export always appends .txt; response.file reflects mangled path | `docs/roadmap-346-export-forces-txt-extension` -> `main` | 4/4 checks successful | UNKNOWN | https://github.com/ultraworkers/claw-code/pull/2867 |
| #2862 | docs(roadmap): add #342 — status json omits active session ID, workspace counters ambiguous | `docs/roadmap-342-v2` -> `main` | 4/4 checks successful | UNKNOWN | https://github.com/ultraworkers/claw-code/pull/2862 |
| #2876 | docs(roadmap): add #354 — /cwd suggests itself in did-you-mean; self-referential loop | `docs/roadmap-354-cwd-self-referential-suggestion` -> `main` | 4/4 checks successful | UNKNOWN | https://github.com/ultraworkers/claw-code/pull/2876 |
| #2872 | docs(roadmap): add #360 — /tokens, /stats, /cost identical output; no context-window or cost_usd | `docs/roadmap-349-tokens-stats-cost-identical` -> `main` | 4/4 checks successful | UNKNOWN | https://github.com/ultraworkers/claw-code/pull/2872 |
## Open roadmap PRs needing local validation or CI refresh
These have no check rollup in the live snapshot; validate locally or refresh CI before merging.
| PR | Title | Branch | Checks | Mergeable | URL |
| --- | --- | --- | --- | --- | --- |
| #2858 | docs(roadmap): add #343 — session subcommand resume-safety inconsistently enforced | `docs/roadmap-340-session-resume-safe-inconsistent` -> `main` | no checks reported | UNKNOWN | https://github.com/ultraworkers/claw-code/pull/2858 |
| #2839 | docs(roadmap): add #330 — resume mode stats/cost always zero | `docs/roadmap-324-resume-stats-zero` -> `main` | no checks reported | UNKNOWN | https://github.com/ultraworkers/claw-code/pull/2839 |
| #2841 | docs(roadmap): add #332 — doctor json missing top-level status field | `docs/roadmap-325-doctor-no-status-field` -> `main` | no checks reported | UNKNOWN | https://github.com/ultraworkers/claw-code/pull/2841 |
| #2844 | docs(roadmap): add #336 — session subcommand resume inconsistency and type/kind error mismatch | `docs/roadmap-329-session-subcommand-resume-inconsistency` -> `main` | no checks reported | UNKNOWN | https://github.com/ultraworkers/claw-code/pull/2844 |
| #2842 | docs(roadmap): add #334 — version json omits build_date and uses short sha only | `docs/roadmap-328-version-json-incomplete` -> `main` | no checks reported | UNKNOWN | https://github.com/ultraworkers/claw-code/pull/2842 |
## Product-fit review before merge
These may be broader than the Claw Code 2.0 roadmap scope and need a product-fit decision before merge.
| PR | Title | Branch | Checks | Mergeable | URL |
| --- | --- | --- | --- | --- | --- |
| #2824 | docs: personal assistant roadmap | `pr/docs-personal-assistant-roadmap` -> `main` | no checks reported | UNKNOWN | https://github.com/ultraworkers/claw-code/pull/2824 |
## Ultragoal mapping
- G003-G010: close implementation gaps that overlap a roadmap PR title if the requirement belongs to the active stream.
- G011: reconcile ecosystem/ops/UX roadmap PRs and unresolved correct issues that do not fit earlier streams.
- G012: final release gate must prove that every open roadmap PR was merged, closed as duplicate/obsolete, or converted into an explicit remaining goal with evidence.


@@ -21,11 +21,12 @@ pub use prompt_cache::{
 pub use providers::anthropic::{AnthropicClient, AnthropicClient as ApiClient, AuthSource};
 pub use providers::openai_compat::{
     build_chat_completion_request, flatten_tool_result_content, is_reasoning_model,
-    model_rejects_is_error_field, translate_message, OpenAiCompatClient, OpenAiCompatConfig,
+    model_rejects_is_error_field, model_requires_reasoning_content_in_history, translate_message,
+    OpenAiCompatClient, OpenAiCompatConfig,
 };
 pub use providers::{
     detect_provider_kind, max_tokens_for_model, max_tokens_for_model_with_override,
-    resolve_model_alias, ProviderKind,
+    model_family_identity_for, model_family_identity_for_kind, resolve_model_alias, ProviderKind,
 };
 pub use sse::{parse_frame, SseParser};
 pub use types::{


@@ -250,6 +250,19 @@ pub fn detect_provider_kind(model: &str) -> ProviderKind {
ProviderKind::Anthropic ProviderKind::Anthropic
} }
#[must_use]
pub const fn model_family_identity_for_kind(kind: ProviderKind) -> runtime::ModelFamilyIdentity {
match kind {
ProviderKind::Anthropic => runtime::ModelFamilyIdentity::Claude,
ProviderKind::Xai | ProviderKind::OpenAi => runtime::ModelFamilyIdentity::Generic,
}
}
#[must_use]
pub fn model_family_identity_for(model: &str) -> runtime::ModelFamilyIdentity {
model_family_identity_for_kind(detect_provider_kind(model))
}
#[must_use]
pub fn max_tokens_for_model(model: &str) -> u32 {
let canonical = resolve_model_alias(model);
@@ -484,8 +497,8 @@ mod tests {
use super::{
anthropic_missing_credentials, anthropic_missing_credentials_hint, detect_provider_kind,
load_dotenv_file, max_tokens_for_model, max_tokens_for_model_with_override,
- model_token_limit, parse_dotenv, preflight_message_request, resolve_model_alias,
- ProviderKind,
+ model_family_identity_for, model_family_identity_for_kind, model_token_limit, parse_dotenv,
+ preflight_message_request, resolve_model_alias, ProviderKind,
};
/// Serializes every test in this module that mutates process-wide
@@ -544,6 +557,42 @@ mod tests {
);
}
#[test]
fn maps_provider_kind_to_model_family_identity() {
// given: each supported provider kind
let anthropic = ProviderKind::Anthropic;
let openai = ProviderKind::OpenAi;
let xai = ProviderKind::Xai;
// when: converting provider kinds to prompt model family identities
let anthropic_identity = model_family_identity_for_kind(anthropic);
let openai_identity = model_family_identity_for_kind(openai);
let xai_identity = model_family_identity_for_kind(xai);
// then: Anthropic stays Claude and OpenAI-compatible providers are generic
assert_eq!(anthropic_identity, runtime::ModelFamilyIdentity::Claude);
assert_eq!(openai_identity, runtime::ModelFamilyIdentity::Generic);
assert_eq!(xai_identity, runtime::ModelFamilyIdentity::Generic);
}
#[test]
fn maps_model_name_to_model_family_identity() {
// given: Anthropic, OpenAI-compatible, and xAI model names
let claude_model = "claude-opus-4-6";
let openai_model = "openai/gpt-4.1-mini";
let xai_model = "grok-3";
// when: detecting prompt model family identities from model names
let claude_identity = model_family_identity_for(claude_model);
let openai_identity = model_family_identity_for(openai_model);
let xai_identity = model_family_identity_for(xai_model);
// then: Anthropic stays Claude and OpenAI-compatible providers are generic
assert_eq!(claude_identity, runtime::ModelFamilyIdentity::Claude);
assert_eq!(openai_identity, runtime::ModelFamilyIdentity::Generic);
assert_eq!(xai_identity, runtime::ModelFamilyIdentity::Generic);
}
#[test]
fn openai_namespaced_model_routes_to_openai_not_anthropic() {
// Regression: "openai/gpt-4.1-mini" was misrouted to Anthropic when
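The mapping these tests exercise is small enough to distill standalone. The sketch below uses simplified stand-in enums, not the crate's `ProviderKind` or `runtime::ModelFamilyIdentity` types:

```rust
// Standalone sketch of the provider-kind -> family-identity mapping above.
// ProviderKind and ModelFamilyIdentity are simplified stand-ins for the crate's types.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum ProviderKind {
    Anthropic,
    OpenAi,
    Xai,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum ModelFamilyIdentity {
    Claude,
    Generic,
}

// Anthropic keeps the Claude prompt identity; OpenAI-compatible providers are generic.
const fn model_family_identity_for_kind(kind: ProviderKind) -> ModelFamilyIdentity {
    match kind {
        ProviderKind::Anthropic => ModelFamilyIdentity::Claude,
        ProviderKind::Xai | ProviderKind::OpenAi => ModelFamilyIdentity::Generic,
    }
}

fn main() {
    assert_eq!(
        model_family_identity_for_kind(ProviderKind::Anthropic),
        ModelFamilyIdentity::Claude
    );
    assert_eq!(
        model_family_identity_for_kind(ProviderKind::OpenAi),
        ModelFamilyIdentity::Generic
    );
    assert_eq!(
        model_family_identity_for_kind(ProviderKind::Xai),
        ModelFamilyIdentity::Generic
    );
}
```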

View File

@@ -443,6 +443,8 @@ struct StreamState {
stop_reason: Option<String>,
usage: Option<Usage>,
tool_calls: BTreeMap<u32, ToolCallState>,
thinking_started: bool,
thinking_finished: bool,
}
impl StreamState {
@@ -456,6 +458,8 @@ impl StreamState {
stop_reason: None,
usage: None,
tool_calls: BTreeMap::new(),
thinking_started: false,
thinking_finished: false,
}
}
@@ -493,35 +497,61 @@ impl StreamState {
}
for choice in chunk.choices {
if let Some(reasoning) = choice
.delta
.reasoning_content
.filter(|value| !value.is_empty())
{
if !self.thinking_started {
self.thinking_started = true;
events.push(StreamEvent::ContentBlockStart(ContentBlockStartEvent {
index: 0,
content_block: OutputContentBlock::Thinking {
thinking: String::new(),
signature: None,
},
}));
}
events.push(StreamEvent::ContentBlockDelta(ContentBlockDeltaEvent {
index: 0,
delta: ContentBlockDelta::ThinkingDelta {
thinking: reasoning,
},
}));
}
if let Some(content) = choice.delta.content.filter(|value| !value.is_empty()) {
self.close_thinking(&mut events);
if !self.text_started {
self.text_started = true;
events.push(StreamEvent::ContentBlockStart(ContentBlockStartEvent {
- index: 0,
+ index: self.text_block_index(),
content_block: OutputContentBlock::Text {
text: String::new(),
},
}));
}
events.push(StreamEvent::ContentBlockDelta(ContentBlockDeltaEvent {
- index: 0,
+ index: self.text_block_index(),
delta: ContentBlockDelta::TextDelta { text: content },
}));
}
for tool_call in choice.delta.tool_calls {
self.close_thinking(&mut events);
let tool_index_offset = self.tool_index_offset();
let state = self.tool_calls.entry(tool_call.index).or_default();
state.apply(tool_call);
- let block_index = state.block_index();
+ let block_index = state.block_index(tool_index_offset);
if !state.started {
- if let Some(start_event) = state.start_event()? {
+ if let Some(start_event) = state.start_event(tool_index_offset)? {
state.started = true;
events.push(StreamEvent::ContentBlockStart(start_event));
} else {
continue;
}
}
- if let Some(delta_event) = state.delta_event() {
+ if let Some(delta_event) = state.delta_event(tool_index_offset) {
events.push(StreamEvent::ContentBlockDelta(delta_event));
}
if choice.finish_reason.as_deref() == Some("tool_calls") && !state.stopped {
@@ -535,11 +565,12 @@ impl StreamState {
if let Some(finish_reason) = choice.finish_reason {
self.stop_reason = Some(normalize_finish_reason(&finish_reason));
if finish_reason == "tool_calls" {
let tool_index_offset = self.tool_index_offset();
for state in self.tool_calls.values_mut() {
if state.started && !state.stopped {
state.stopped = true;
events.push(StreamEvent::ContentBlockStop(ContentBlockStopEvent {
- index: state.block_index(),
+ index: state.block_index(tool_index_offset),
}));
}
}
@@ -557,19 +588,21 @@ impl StreamState {
self.finished = true;
let mut events = Vec::new();
self.close_thinking(&mut events);
if self.text_started && !self.text_finished {
self.text_finished = true;
events.push(StreamEvent::ContentBlockStop(ContentBlockStopEvent {
- index: 0,
+ index: self.text_block_index(),
}));
}
let tool_index_offset = self.tool_index_offset();
for state in self.tool_calls.values_mut() {
if !state.started {
- if let Some(start_event) = state.start_event()? {
+ if let Some(start_event) = state.start_event(tool_index_offset)? {
state.started = true;
events.push(StreamEvent::ContentBlockStart(start_event));
- if let Some(delta_event) = state.delta_event() {
+ if let Some(delta_event) = state.delta_event(tool_index_offset) {
events.push(StreamEvent::ContentBlockDelta(delta_event));
}
}
@@ -577,7 +610,7 @@ impl StreamState {
if state.started && !state.stopped {
state.stopped = true;
events.push(StreamEvent::ContentBlockStop(ContentBlockStopEvent {
- index: state.block_index(),
+ index: state.block_index(tool_index_offset),
}));
}
}
@@ -603,6 +636,31 @@ impl StreamState {
}
Ok(events)
}
fn close_thinking(&mut self, events: &mut Vec<StreamEvent>) {
if self.thinking_started && !self.thinking_finished {
self.thinking_finished = true;
events.push(StreamEvent::ContentBlockStop(ContentBlockStopEvent {
index: 0,
}));
}
}
const fn text_block_index(&self) -> u32 {
if self.thinking_started {
1
} else {
0
}
}
const fn tool_index_offset(&self) -> u32 {
if self.thinking_started {
2
} else {
1
}
}
}
#[derive(Debug, Default)]
@@ -630,12 +688,12 @@ impl ToolCallState {
}
}
- const fn block_index(&self) -> u32 {
- self.openai_index + 1
+ const fn block_index(&self, offset: u32) -> u32 {
+ self.openai_index + offset
}
#[allow(clippy::unnecessary_wraps)]
- fn start_event(&self) -> Result<Option<ContentBlockStartEvent>, ApiError> {
+ fn start_event(&self, offset: u32) -> Result<Option<ContentBlockStartEvent>, ApiError> {
let Some(name) = self.name.clone() else {
return Ok(None);
};
@@ -644,7 +702,7 @@ impl ToolCallState {
.clone()
.unwrap_or_else(|| format!("tool_call_{}", self.openai_index));
Ok(Some(ContentBlockStartEvent {
- index: self.block_index(),
+ index: self.block_index(offset),
content_block: OutputContentBlock::ToolUse {
id,
name,
@@ -653,14 +711,14 @@ impl ToolCallState {
}))
}
- fn delta_event(&mut self) -> Option<ContentBlockDeltaEvent> {
+ fn delta_event(&mut self, offset: u32) -> Option<ContentBlockDeltaEvent> {
if self.emitted_len >= self.arguments.len() {
return None;
}
let delta = self.arguments[self.emitted_len..].to_string();
self.emitted_len = self.arguments.len();
Some(ContentBlockDeltaEvent {
- index: self.block_index(),
+ index: self.block_index(offset),
delta: ContentBlockDelta::InputJsonDelta {
partial_json: delta,
},
@@ -690,6 +748,8 @@ struct ChatMessage {
#[serde(default)]
content: Option<String>,
#[serde(default)]
reasoning_content: Option<String>,
#[serde(default)]
tool_calls: Vec<ResponseToolCall>,
}
@@ -735,6 +795,8 @@ struct ChunkChoice {
struct ChunkDelta {
#[serde(default)]
content: Option<String>,
#[serde(default)]
reasoning_content: Option<String>,
#[serde(default, deserialize_with = "deserialize_null_as_empty_vec")]
tool_calls: Vec<DeltaToolCall>,
}
@@ -793,6 +855,15 @@ pub fn is_reasoning_model(model: &str) -> bool {
|| canonical.contains("thinking")
}
/// Returns true for OpenAI-compatible DeepSeek V4 models that require prior
/// assistant reasoning to be echoed back as `reasoning_content` in history.
#[must_use]
pub fn model_requires_reasoning_content_in_history(model: &str) -> bool {
let lowered = model.to_ascii_lowercase();
let canonical = lowered.rsplit('/').next().unwrap_or(lowered.as_str());
canonical.starts_with("deepseek-v4")
}
/// Strip routing prefix (e.g., "openai/gpt-4" → "gpt-4") for the wire.
/// The prefix is used only to select transport; the backend expects the
/// bare model id.
@@ -948,10 +1019,14 @@ pub fn translate_message(message: &InputMessage, model: &str) -> Vec<Value> {
match message.role.as_str() {
"assistant" => {
let mut text = String::new();
let mut reasoning = String::new();
let mut tool_calls = Vec::new();
for block in &message.content {
match block {
InputContentBlock::Text { text: value } => text.push_str(value),
InputContentBlock::Thinking {
thinking: value, ..
} => reasoning.push_str(value),
InputContentBlock::ToolUse { id, name, input } => tool_calls.push(json!({
"id": id,
"type": "function",
@@ -963,13 +1038,18 @@ pub fn translate_message(message: &InputMessage, model: &str) -> Vec<Value> {
InputContentBlock::ToolResult { .. } => {}
}
}
- if text.is_empty() && tool_calls.is_empty() {
+ let include_reasoning =
+ model_requires_reasoning_content_in_history(model) && !reasoning.is_empty();
+ if text.is_empty() && tool_calls.is_empty() && !include_reasoning {
Vec::new()
} else {
let mut msg = serde_json::json!({
"role": "assistant",
"content": (!text.is_empty()).then_some(text),
});
if include_reasoning {
msg["reasoning_content"] = json!(reasoning);
}
// Only include tool_calls when non-empty: some providers reject
// assistant messages with an explicit empty tool_calls array.
if !tool_calls.is_empty() {
@@ -1003,6 +1083,7 @@ pub fn translate_message(message: &InputMessage, model: &str) -> Vec<Value> {
}
Some(msg)
}
InputContentBlock::Thinking { .. } => None,
InputContentBlock::ToolUse { .. } => None,
})
.collect(),
@@ -1182,6 +1263,16 @@ fn normalize_response(
"chat completion response missing choices",
))?;
let mut content = Vec::new();
if let Some(thinking) = choice
.message
.reasoning_content
.filter(|value| !value.is_empty())
{
content.push(OutputContentBlock::Thinking {
thinking,
signature: None,
});
}
if let Some(text) = choice.message.content.filter(|value| !value.is_empty()) {
content.push(OutputContentBlock::Text { text });
}
@@ -1413,13 +1504,15 @@ impl StringExt for String {
mod tests {
use super::{
build_chat_completion_request, chat_completions_endpoint, is_reasoning_model,
- normalize_finish_reason, openai_tool_choice, parse_tool_arguments, OpenAiCompatClient,
- OpenAiCompatConfig,
+ model_requires_reasoning_content_in_history, normalize_finish_reason, normalize_response,
+ openai_tool_choice, parse_tool_arguments, OpenAiCompatClient, OpenAiCompatConfig,
+ StreamState,
};
use crate::error::ApiError;
use crate::types::{
- InputContentBlock, InputMessage, MessageRequest, ToolChoice, ToolDefinition,
- ToolResultContentBlock,
+ ContentBlockDelta, ContentBlockDeltaEvent, ContentBlockStartEvent, ContentBlockStopEvent,
+ InputContentBlock, InputMessage, MessageRequest, OutputContentBlock, StreamEvent,
+ ToolChoice, ToolDefinition, ToolResultContentBlock,
};
use serde_json::json;
use std::sync::{Mutex, OnceLock};
@@ -1465,6 +1558,188 @@ mod tests {
assert_eq!(payload["tool_choice"], json!("auto"));
}
#[test]
fn model_requires_reasoning_content_in_history_detects_deepseek_v4_models() {
// Given DeepSeek V4 and non-V4 model names.
let positive = [
"deepseek-v4-flash",
"deepseek-v4-pro",
"openai/deepseek-v4-pro",
"deepseek/deepseek-v4-flash",
];
let negative = [
"deepseek-reasoner",
"deepseek-chat",
"gpt-4o",
"claude-sonnet-4-6",
];
// When checking whether history reasoning_content is required.
// Then only DeepSeek V4 variants require it.
for model in positive {
assert!(model_requires_reasoning_content_in_history(model));
}
for model in negative {
assert!(!model_requires_reasoning_content_in_history(model));
}
}
#[test]
fn legacy_deepseek_reasoner_request_omits_reasoning_content_for_assistant_history() {
// Given an assistant history turn containing thinking.
let request = assistant_history_with_thinking_request("deepseek-reasoner");
// When serializing for legacy deepseek-reasoner.
let payload = build_chat_completion_request(&request, OpenAiCompatConfig::openai());
// Then reasoning_content is omitted.
let assistant = &payload["messages"][0];
assert_eq!(assistant["role"], json!("assistant"));
assert!(assistant.get("reasoning_content").is_none());
}
#[test]
fn deepseek_v4_pro_request_includes_reasoning_content_for_assistant_history() {
// Given an assistant history turn containing thinking.
let request = assistant_history_with_thinking_request("openai/deepseek-v4-pro");
// When serializing for DeepSeek V4 Pro.
let payload = build_chat_completion_request(&request, OpenAiCompatConfig::openai());
// Then reasoning_content is included on the assistant message.
let assistant = &payload["messages"][0];
assert_eq!(assistant["reasoning_content"], json!("prior reasoning"));
assert_eq!(assistant["content"], json!("answer"));
}
#[test]
fn deepseek_v4_flash_request_includes_reasoning_content_for_assistant_history() {
// Given an assistant history turn containing thinking.
let request = assistant_history_with_thinking_request("deepseek-v4-flash");
// When serializing for DeepSeek V4 Flash.
let payload = build_chat_completion_request(&request, OpenAiCompatConfig::openai());
// Then reasoning_content is included on the assistant message.
let assistant = &payload["messages"][0];
assert_eq!(assistant["reasoning_content"], json!("prior reasoning"));
}
#[test]
fn non_streaming_response_with_reasoning_content_emits_thinking_block_first() {
// Given a non-streaming OpenAI-compatible response with reasoning_content.
let response = super::ChatCompletionResponse {
id: "chatcmpl_reasoning".to_string(),
model: "deepseek-v4-pro".to_string(),
choices: vec![super::ChatChoice {
message: super::ChatMessage {
role: "assistant".to_string(),
content: Some("final answer".to_string()),
reasoning_content: Some("hidden thought".to_string()),
tool_calls: Vec::new(),
},
finish_reason: Some("stop".to_string()),
}],
usage: None,
};
// When normalizing the provider response.
let normalized = normalize_response("deepseek-v4-pro", response).expect("normalized");
// Then Thinking is the first content block, before text.
assert_eq!(
normalized.content,
vec![
OutputContentBlock::Thinking {
thinking: "hidden thought".to_string(),
signature: None,
},
OutputContentBlock::Text {
text: "final answer".to_string(),
},
]
);
}
#[test]
fn streaming_chunks_with_reasoning_content_emit_thinking_block_events_before_text() {
// Given streaming chunks with reasoning_content followed by text.
let mut state = StreamState::new("deepseek-v4-pro".to_string());
let mut events = state
.ingest_chunk(super::ChatCompletionChunk {
id: "chatcmpl_stream_reasoning".to_string(),
model: Some("deepseek-v4-pro".to_string()),
choices: vec![super::ChunkChoice {
delta: super::ChunkDelta {
content: None,
reasoning_content: Some("think".to_string()),
tool_calls: Vec::new(),
},
finish_reason: None,
}],
usage: None,
})
.expect("reasoning chunk");
events.extend(
state
.ingest_chunk(super::ChatCompletionChunk {
id: "chatcmpl_stream_reasoning".to_string(),
model: None,
choices: vec![super::ChunkChoice {
delta: super::ChunkDelta {
content: Some(" answer".to_string()),
reasoning_content: None,
tool_calls: Vec::new(),
},
finish_reason: Some("stop".to_string()),
}],
usage: None,
})
.expect("text chunk"),
);
events.extend(state.finish().expect("finish"));
// When reading normalized stream events.
// Then Thinking starts at index 0, text is offset to index 1.
assert!(matches!(events[0], StreamEvent::MessageStart(_)));
assert!(matches!(
events[1],
StreamEvent::ContentBlockStart(ContentBlockStartEvent {
index: 0,
content_block: OutputContentBlock::Thinking { .. },
})
));
assert!(matches!(
events[2],
StreamEvent::ContentBlockDelta(ContentBlockDeltaEvent {
index: 0,
delta: ContentBlockDelta::ThinkingDelta { .. },
})
));
assert!(matches!(
events[3],
StreamEvent::ContentBlockStop(ContentBlockStopEvent { index: 0 })
));
assert!(matches!(
events[4],
StreamEvent::ContentBlockStart(ContentBlockStartEvent {
index: 1,
content_block: OutputContentBlock::Text { .. },
})
));
assert!(matches!(
events[5],
StreamEvent::ContentBlockDelta(ContentBlockDeltaEvent {
index: 1,
delta: ContentBlockDelta::TextDelta { .. },
})
));
assert!(matches!(
events[6],
StreamEvent::ContentBlockStop(ContentBlockStopEvent { index: 1 })
));
}
#[test]
fn tool_schema_object_gets_strict_fields_for_responses_endpoint() {
// OpenAI /responses endpoint rejects object schemas missing
@@ -1624,6 +1899,27 @@ mod tests {
);
}
fn assistant_history_with_thinking_request(model: &str) -> MessageRequest {
MessageRequest {
model: model.to_string(),
max_tokens: 100,
messages: vec![InputMessage {
role: "assistant".to_string(),
content: vec![
InputContentBlock::Thinking {
thinking: "prior reasoning".to_string(),
signature: None,
},
InputContentBlock::Text {
text: "answer".to_string(),
},
],
}],
stream: false,
..Default::default()
}
}
fn env_lock() -> std::sync::MutexGuard<'static, ()> {
static LOCK: OnceLock<Mutex<()>> = OnceLock::new();
LOCK.get_or_init(|| Mutex::new(()))
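The index arithmetic behind `text_block_index` and `tool_index_offset` in this file can be distilled into a standalone sketch (simplified stand-in types, not the crate's `StreamState`): once a thinking block claims index 0, text shifts to index 1 and tool blocks start at `openai_index + 2`; without thinking, text stays at 0 and tools start at `openai_index + 1`.

```rust
// Standalone sketch of the stream block-index arithmetic above.
// Indexer is a simplified stand-in for StreamState, tracking only the
// one flag that shifts every later block index.
struct Indexer {
    thinking_started: bool,
}

impl Indexer {
    // Text block follows the thinking block when one was opened.
    const fn text_block_index(&self) -> u32 {
        if self.thinking_started {
            1
        } else {
            0
        }
    }

    // Tool blocks follow the text block (and the thinking block, if any).
    const fn tool_index_offset(&self) -> u32 {
        if self.thinking_started {
            2
        } else {
            1
        }
    }

    const fn tool_block_index(&self, openai_index: u32) -> u32 {
        openai_index + self.tool_index_offset()
    }
}

fn main() {
    let with_thinking = Indexer {
        thinking_started: true,
    };
    // thinking occupies 0, text moves to 1, the first tool call lands at 2
    assert_eq!(with_thinking.text_block_index(), 1);
    assert_eq!(with_thinking.tool_block_index(0), 2);

    let without_thinking = Indexer {
        thinking_started: false,
    };
    // no thinking: text keeps 0, the first tool call lands at 1
    assert_eq!(without_thinking.text_block_index(), 0);
    assert_eq!(without_thinking.tool_block_index(0), 1);
}
```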

View File

@@ -81,6 +81,11 @@ pub enum InputContentBlock {
Text {
text: String,
},
Thinking {
thinking: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
signature: Option<String>,
},
ToolUse {
id: String,
name: String,
@@ -268,8 +273,9 @@ pub enum StreamEvent {
#[cfg(test)]
mod tests {
use runtime::format_usd;
use serde_json::json;
- use super::{MessageResponse, Usage};
+ use super::{InputContentBlock, MessageResponse, Usage};
#[test]
fn usage_total_tokens_includes_cache_tokens() {
@@ -307,4 +313,33 @@ mod tests {
assert_eq!(format_usd(cost.total_cost_usd()), "$54.6750");
assert_eq!(response.total_tokens(), 1_800_000);
}
#[test]
fn input_content_block_thinking_serializes_with_snake_case_type() {
// given
let block = InputContentBlock::Thinking {
thinking: "pondering".to_string(),
signature: Some("sig_123".to_string()),
};
// when
let serialized = serde_json::to_value(&block).unwrap();
let deserialized: InputContentBlock = serde_json::from_value(json!({
"type": "thinking",
"thinking": "pondering",
"signature": "sig_123"
}))
.unwrap();
// then
assert_eq!(
serialized,
json!({
"type": "thinking",
"thinking": "pondering",
"signature": "sig_123"
})
);
assert_eq!(deserialized, block);
}
}

View File

@@ -63,6 +63,50 @@ async fn send_message_uses_openai_compatible_endpoint_and_auth() {
assert_eq!(body["tools"][0]["type"], json!("function"));
}
#[tokio::test]
async fn send_message_preserves_deepseek_reasoning_content_before_text() {
let state = Arc::new(Mutex::new(Vec::<CapturedRequest>::new()));
let body = concat!(
"{",
"\"id\":\"chatcmpl_deepseek_reasoning\",",
"\"model\":\"deepseek-v4-pro\",",
"\"choices\":[{",
"\"message\":{\"role\":\"assistant\",\"reasoning_content\":\"Think first\",\"content\":\"Answer second\",\"tool_calls\":[]},",
"\"finish_reason\":\"stop\"",
"}],",
"\"usage\":{\"prompt_tokens\":11,\"completion_tokens\":5}",
"}"
);
let server = spawn_server(
state.clone(),
vec![http_response("200 OK", "application/json", body)],
)
.await;
let client = OpenAiCompatClient::new("openai-test-key", OpenAiCompatConfig::openai())
.with_base_url(server.base_url());
let response = client
.send_message(&MessageRequest {
model: "openai/deepseek-v4-pro".to_string(),
..sample_request(false)
})
.await
.expect("request should succeed");
assert_eq!(
response.content,
vec![
OutputContentBlock::Thinking {
thinking: "Think first".to_string(),
signature: None,
},
OutputContentBlock::Text {
text: "Answer second".to_string(),
},
]
);
}
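The history rule this test exercises is a pure model-name check; it can be restated standalone. The function below mirrors the logic of `model_requires_reasoning_content_in_history` from the diff above as a simplified sketch, not the crate's export:

```rust
// Standalone restatement of the DeepSeek V4 history rule from the diff above:
// only DeepSeek V4 variants need prior assistant reasoning echoed back as
// reasoning_content; routing prefixes like "openai/" are stripped first.
fn requires_reasoning_content_in_history(model: &str) -> bool {
    let lowered = model.to_ascii_lowercase();
    // Keep only the segment after the last '/' (the bare model id).
    let canonical = lowered.rsplit('/').next().unwrap_or(lowered.as_str());
    canonical.starts_with("deepseek-v4")
}

fn main() {
    assert!(requires_reasoning_content_in_history("deepseek-v4-pro"));
    assert!(requires_reasoning_content_in_history("openai/deepseek-v4-flash"));
    assert!(!requires_reasoning_content_in_history("deepseek-reasoner"));
    assert!(!requires_reasoning_content_in_history("gpt-4o"));
}
```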
#[tokio::test]
async fn send_message_blocks_oversized_xai_requests_before_the_http_call() {
let state = Arc::new(Mutex::new(Vec::<CapturedRequest>::new()));

View File

@@ -2490,6 +2490,13 @@ pub fn classify_skills_slash_command(args: Option<&str>) -> SkillSlashDispatch {
None | Some("list" | "help" | "-h" | "--help" | "show" | "info" | "describe") => {
SkillSlashDispatch::Local
}
Some(args)
if args
.split_whitespace()
.any(|part| matches!(part, "-h" | "--help")) =>
{
SkillSlashDispatch::Local
}
Some(args) if args == "install" || args.starts_with("install ") => {
SkillSlashDispatch::Local
}

View File

@@ -248,6 +248,7 @@ fn detect_scenario(request: &MessageRequest) -> Option<Scenario> {
.split_whitespace()
.find_map(|token| token.strip_prefix(SCENARIO_PREFIX))
.and_then(Scenario::parse),
InputContentBlock::Thinking { .. } => None,
_ => None,
})
})

View File

@@ -0,0 +1,502 @@
use std::collections::BTreeMap;
/// Machine-readable policy exception scope that an approval token may override.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct ApprovalScope {
pub policy: String,
pub action: String,
pub repository: Option<String>,
pub branch: Option<String>,
}
impl ApprovalScope {
#[must_use]
pub fn new(policy: impl Into<String>, action: impl Into<String>) -> Self {
Self {
policy: policy.into(),
action: action.into(),
repository: None,
branch: None,
}
}
#[must_use]
pub fn with_repository(mut self, repository: impl Into<String>) -> Self {
self.repository = Some(repository.into());
self
}
#[must_use]
pub fn with_branch(mut self, branch: impl Into<String>) -> Self {
self.branch = Some(branch.into());
self
}
}
/// Actor/session hop recorded when an approval is delegated or consumed.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct ApprovalDelegationHop {
pub actor: String,
pub session_id: Option<String>,
pub reason: String,
}
impl ApprovalDelegationHop {
#[must_use]
pub fn new(actor: impl Into<String>, reason: impl Into<String>) -> Self {
Self {
actor: actor.into(),
session_id: None,
reason: reason.into(),
}
}
#[must_use]
pub fn with_session_id(mut self, session_id: impl Into<String>) -> Self {
self.session_id = Some(session_id.into());
self
}
}
/// Current lifecycle state for a policy-exception approval token.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ApprovalTokenStatus {
Pending,
Granted,
Consumed,
Expired,
Revoked,
}
impl ApprovalTokenStatus {
#[must_use]
pub fn as_str(self) -> &'static str {
match self {
Self::Pending => "approval_pending",
Self::Granted => "approval_granted",
Self::Consumed => "approval_consumed",
Self::Expired => "approval_expired",
Self::Revoked => "approval_revoked",
}
}
}
/// Typed policy errors returned when a token cannot authorize a blocked action.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum ApprovalTokenError {
NoApproval,
ApprovalPending,
ApprovalExpired,
ApprovalRevoked,
ApprovalAlreadyConsumed,
ScopeMismatch {
expected: Box<ApprovalScope>,
actual: Box<ApprovalScope>,
},
UnauthorizedDelegate {
expected: String,
actual: String,
},
}
impl ApprovalTokenError {
#[must_use]
pub fn as_str(&self) -> &'static str {
match self {
Self::NoApproval => "no_approval",
Self::ApprovalPending => "approval_pending",
Self::ApprovalExpired => "approval_expired",
Self::ApprovalRevoked => "approval_revoked",
Self::ApprovalAlreadyConsumed => "approval_already_consumed",
Self::ScopeMismatch { .. } => "approval_scope_mismatch",
Self::UnauthorizedDelegate { .. } => "approval_unauthorized_delegate",
}
}
}
/// Approval grant bound to a policy/action scope, approving owner, and executor.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct ApprovalTokenGrant {
pub token: String,
pub scope: ApprovalScope,
pub approving_actor: String,
pub approved_executor: String,
pub status: ApprovalTokenStatus,
pub expires_at_epoch_seconds: Option<u64>,
pub max_uses: u32,
pub uses: u32,
delegation_chain: Vec<ApprovalDelegationHop>,
}
impl ApprovalTokenGrant {
#[must_use]
pub fn pending(
token: impl Into<String>,
scope: ApprovalScope,
approving_actor: impl Into<String>,
approved_executor: impl Into<String>,
) -> Self {
Self {
token: token.into(),
scope,
approving_actor: approving_actor.into(),
approved_executor: approved_executor.into(),
status: ApprovalTokenStatus::Pending,
expires_at_epoch_seconds: None,
max_uses: 1,
uses: 0,
delegation_chain: Vec::new(),
}
}
#[must_use]
pub fn granted(
token: impl Into<String>,
scope: ApprovalScope,
approving_actor: impl Into<String>,
approved_executor: impl Into<String>,
) -> Self {
Self::pending(token, scope, approving_actor, approved_executor).approve()
}
#[must_use]
pub fn approve(mut self) -> Self {
self.status = ApprovalTokenStatus::Granted;
self
}
#[must_use]
pub fn expires_at(mut self, epoch_seconds: u64) -> Self {
self.expires_at_epoch_seconds = Some(epoch_seconds);
self
}
#[must_use]
pub fn with_max_uses(mut self, max_uses: u32) -> Self {
self.max_uses = max_uses.max(1);
self
}
#[must_use]
pub fn with_delegation_hop(mut self, hop: ApprovalDelegationHop) -> Self {
self.delegation_chain.push(hop);
self
}
#[must_use]
pub fn delegation_chain(&self) -> &[ApprovalDelegationHop] {
&self.delegation_chain
}
}
/// Auditable result of verifying or consuming an approval token.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct ApprovalTokenAudit {
pub token: String,
pub scope: ApprovalScope,
pub approving_actor: String,
pub executing_actor: String,
pub status: ApprovalTokenStatus,
pub delegated_execution: bool,
pub delegation_chain: Vec<ApprovalDelegationHop>,
pub uses: u32,
pub max_uses: u32,
}
/// In-memory approval-token ledger with one-time-use and replay protection.
#[derive(Debug, Clone, PartialEq, Eq, Default)]
pub struct ApprovalTokenLedger {
grants: BTreeMap<String, ApprovalTokenGrant>,
}
impl ApprovalTokenLedger {
#[must_use]
pub fn new() -> Self {
Self::default()
}
pub fn insert(&mut self, grant: ApprovalTokenGrant) {
self.grants.insert(grant.token.clone(), grant);
}
#[must_use]
pub fn get(&self, token: &str) -> Option<&ApprovalTokenGrant> {
self.grants.get(token)
}
pub fn revoke(&mut self, token: &str) -> Result<ApprovalTokenAudit, ApprovalTokenError> {
let grant = self
.grants
.get_mut(token)
.ok_or(ApprovalTokenError::NoApproval)?;
grant.status = ApprovalTokenStatus::Revoked;
Ok(Self::audit_for(grant, &grant.approved_executor))
}
pub fn verify(
&self,
token: &str,
scope: &ApprovalScope,
executing_actor: &str,
now_epoch_seconds: u64,
) -> Result<ApprovalTokenAudit, ApprovalTokenError> {
let grant = self
.grants
.get(token)
.ok_or(ApprovalTokenError::NoApproval)?;
Self::validate_grant(grant, scope, executing_actor, now_epoch_seconds)?;
Ok(Self::audit_for(grant, executing_actor))
}
pub fn consume(
&mut self,
token: &str,
scope: &ApprovalScope,
executing_actor: &str,
now_epoch_seconds: u64,
) -> Result<ApprovalTokenAudit, ApprovalTokenError> {
let grant = self
.grants
.get_mut(token)
.ok_or(ApprovalTokenError::NoApproval)?;
Self::validate_grant(grant, scope, executing_actor, now_epoch_seconds)?;
grant.uses += 1;
if grant.uses >= grant.max_uses {
grant.status = ApprovalTokenStatus::Consumed;
}
Ok(Self::audit_for(grant, executing_actor))
}
fn validate_grant(
grant: &ApprovalTokenGrant,
scope: &ApprovalScope,
executing_actor: &str,
now_epoch_seconds: u64,
) -> Result<(), ApprovalTokenError> {
match grant.status {
ApprovalTokenStatus::Pending => return Err(ApprovalTokenError::ApprovalPending),
ApprovalTokenStatus::Consumed => {
return Err(ApprovalTokenError::ApprovalAlreadyConsumed)
}
ApprovalTokenStatus::Expired => return Err(ApprovalTokenError::ApprovalExpired),
ApprovalTokenStatus::Revoked => return Err(ApprovalTokenError::ApprovalRevoked),
ApprovalTokenStatus::Granted => {}
}
if grant
.expires_at_epoch_seconds
.is_some_and(|expires_at| now_epoch_seconds > expires_at)
{
return Err(ApprovalTokenError::ApprovalExpired);
}
if grant.uses >= grant.max_uses {
return Err(ApprovalTokenError::ApprovalAlreadyConsumed);
}
if grant.scope != *scope {
return Err(ApprovalTokenError::ScopeMismatch {
expected: Box::new(grant.scope.clone()),
actual: Box::new(scope.clone()),
});
}
if grant.approved_executor != executing_actor {
return Err(ApprovalTokenError::UnauthorizedDelegate {
expected: grant.approved_executor.clone(),
actual: executing_actor.to_string(),
});
}
Ok(())
}
fn audit_for(grant: &ApprovalTokenGrant, executing_actor: &str) -> ApprovalTokenAudit {
let mut delegation_chain = grant.delegation_chain.clone();
if delegation_chain.is_empty() {
delegation_chain.push(ApprovalDelegationHop::new(
grant.approving_actor.clone(),
"approval granted",
));
}
if grant.approving_actor != executing_actor
&& !delegation_chain
.iter()
.any(|hop| hop.actor == executing_actor)
{
delegation_chain.push(ApprovalDelegationHop::new(
executing_actor.to_string(),
"delegated execution",
));
}
ApprovalTokenAudit {
token: grant.token.clone(),
scope: grant.scope.clone(),
approving_actor: grant.approving_actor.clone(),
executing_actor: executing_actor.to_string(),
status: grant.status,
delegated_execution: grant.approving_actor != executing_actor,
delegation_chain,
uses: grant.uses,
max_uses: grant.max_uses,
}
}
}
#[cfg(test)]
mod tests {
use super::{
ApprovalDelegationHop, ApprovalScope, ApprovalTokenError, ApprovalTokenGrant,
ApprovalTokenLedger, ApprovalTokenStatus,
};
#[test]
fn approval_token_blocks_until_owner_grants_policy_exception() {
let mut ledger = ApprovalTokenLedger::new();
let scope = ApprovalScope::new("main_push_forbidden", "git push")
.with_repository("sisyphus/claw-code")
.with_branch("main");
ledger.insert(ApprovalTokenGrant::pending(
"tok-pending",
scope.clone(),
"repo-owner",
"release-bot",
));
assert!(matches!(
ledger.verify("tok-missing", &scope, "release-bot", 10),
Err(ApprovalTokenError::NoApproval)
));
assert!(matches!(
ledger.verify("tok-pending", &scope, "release-bot", 10),
Err(ApprovalTokenError::ApprovalPending)
));
ledger.insert(ApprovalTokenGrant::granted(
"tok-granted",
scope.clone(),
"repo-owner",
"release-bot",
));
let audit = ledger
.verify("tok-granted", &scope, "release-bot", 10)
.expect("owner approval should verify");
assert_eq!(audit.status, ApprovalTokenStatus::Granted);
assert_eq!(audit.approving_actor, "repo-owner");
assert_eq!(audit.executing_actor, "release-bot");
assert!(audit.delegated_execution);
}
#[test]
fn approval_token_is_one_time_use_and_rejects_replay() {
let mut ledger = ApprovalTokenLedger::new();
let scope = ApprovalScope::new("release_requires_owner", "release publish")
.with_repository("sisyphus/claw-code");
ledger.insert(ApprovalTokenGrant::granted(
"tok-once",
scope.clone(),
"owner",
"release-bot",
));
let first = ledger
.consume("tok-once", &scope, "release-bot", 10)
.expect("first use should consume token");
assert_eq!(first.status, ApprovalTokenStatus::Consumed);
assert_eq!(first.uses, 1);
assert!(matches!(
ledger.consume("tok-once", &scope, "release-bot", 11),
Err(ApprovalTokenError::ApprovalAlreadyConsumed)
));
assert_eq!(
ledger.get("tok-once").map(|grant| grant.status),
Some(ApprovalTokenStatus::Consumed)
);
}
#[test]
fn approval_token_rejects_scope_expansion_expiry_and_revocation() {
let mut ledger = ApprovalTokenLedger::new();
let scope = ApprovalScope::new("main_push_forbidden", "git push")
.with_repository("sisyphus/claw-code")
.with_branch("main");
let dev_scope = ApprovalScope::new("main_push_forbidden", "git push")
.with_repository("sisyphus/claw-code")
.with_branch("dev");
ledger.insert(
ApprovalTokenGrant::granted("tok-expiring", scope.clone(), "owner", "bot")
.expires_at(20),
);
assert!(matches!(
ledger.verify("tok-expiring", &dev_scope, "bot", 10),
Err(ApprovalTokenError::ScopeMismatch { .. })
));
assert!(matches!(
ledger.verify("tok-expiring", &scope, "bot", 21),
Err(ApprovalTokenError::ApprovalExpired)
));
ledger.insert(ApprovalTokenGrant::granted(
"tok-revoked",
scope.clone(),
"owner",
"bot",
));
let revoked = ledger
.revoke("tok-revoked")
.expect("revocation should be audited");
assert_eq!(revoked.status, ApprovalTokenStatus::Revoked);
assert!(matches!(
ledger.verify("tok-revoked", &scope, "bot", 10),
Err(ApprovalTokenError::ApprovalRevoked)
));
}
#[test]
fn approval_token_preserves_delegation_traceability() {
let mut ledger = ApprovalTokenLedger::new();
let scope = ApprovalScope::new("deploy_requires_owner", "deploy prod");
ledger.insert(
ApprovalTokenGrant::granted("tok-delegated", scope.clone(), "owner", "deploy-bot")
.with_delegation_hop(
ApprovalDelegationHop::new("owner", "owner approval")
.with_session_id("session-owner"),
)
.with_delegation_hop(
ApprovalDelegationHop::new("lead-agent", "handoff to deploy bot")
.with_session_id("session-lead"),
),
);
assert!(matches!(
ledger.verify("tok-delegated", &scope, "unexpected-bot", 10),
Err(ApprovalTokenError::UnauthorizedDelegate { expected, actual })
if expected == "deploy-bot" && actual == "unexpected-bot"
));
let audit = ledger
.consume("tok-delegated", &scope, "deploy-bot", 10)
.expect("approved delegate should consume token");
let actors = audit
.delegation_chain
.iter()
.map(|hop| hop.actor.as_str())
.collect::<Vec<_>>();
assert!(audit.delegated_execution);
assert_eq!(actors, vec!["owner", "lead-agent", "deploy-bot"]);
assert_eq!(
audit.delegation_chain[0].session_id.as_deref(),
Some("session-owner")
);
assert_eq!(
audit.delegation_chain[1].session_id.as_deref(),
Some("session-lead")
);
}
}
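The one-time-use and replay rules those tests exercise can be sketched standalone. The toy `Grant` type below is hypothetical and only mirrors the `uses`/`max_uses` bookkeeping visible in `ApprovalTokenLedger::consume`; it is not the crate's API.

```rust
// Minimal sketch (assumed semantics, not the crate's real types): a grant is
// usable while uses < max_uses, and consuming the final use flips it to
// Consumed so any replay attempt fails.
#[derive(Debug, PartialEq)]
enum Status {
    Granted,
    Consumed,
}

struct Grant {
    status: Status,
    uses: u32,
    max_uses: u32,
}

impl Grant {
    fn consume(&mut self) -> Result<(), &'static str> {
        if self.status == Status::Consumed || self.uses >= self.max_uses {
            return Err("approval already consumed");
        }
        self.uses += 1;
        if self.uses >= self.max_uses {
            self.status = Status::Consumed;
        }
        Ok(())
    }
}

fn main() {
    let mut grant = Grant { status: Status::Granted, uses: 0, max_uses: 1 };
    assert!(grant.consume().is_ok());
    assert_eq!(grant.status, Status::Consumed);
    assert!(grant.consume().is_err()); // replay is rejected
}
```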

View File

@@ -212,7 +212,7 @@ fn summarize_messages(messages: &[ConversationMessage]) -> String {
.filter_map(|block| match block {
ContentBlock::ToolUse { name, .. } => Some(name.as_str()),
ContentBlock::ToolResult { tool_name, .. } => Some(tool_name.as_str()),
-ContentBlock::Text { .. } => None,
+ContentBlock::Text { .. } | ContentBlock::Thinking { .. } => None,
})
.collect::<Vec<_>>();
tool_names.sort_unstable();
@@ -317,6 +317,9 @@ fn merge_compact_summaries(existing_summary: Option<&str>, new_summary: &str) ->
fn summarize_block(block: &ContentBlock) -> String {
let raw = match block {
ContentBlock::Text { text } => text.clone(),
+ContentBlock::Thinking { thinking, .. } => {
+format!("thinking ({} chars)", thinking.chars().count())
+}
ContentBlock::ToolUse { name, input, .. } => format!("tool_use {name}({input})"),
ContentBlock::ToolResult {
tool_name,
@@ -378,6 +381,7 @@ fn collect_key_files(messages: &[ConversationMessage]) -> Vec<String> {
ContentBlock::Text { text } => text.as_str(),
ContentBlock::ToolUse { input, .. } => input.as_str(),
ContentBlock::ToolResult { output, .. } => output.as_str(),
+ContentBlock::Thinking { thinking, .. } => thinking.as_str(),
})
.flat_map(extract_file_candidates)
.collect::<Vec<_>>();
@@ -400,6 +404,7 @@ fn first_text_block(message: &ConversationMessage) -> Option<&str> {
ContentBlock::Text { text } if !text.trim().is_empty() => Some(text.as_str()),
ContentBlock::ToolUse { .. }
| ContentBlock::ToolResult { .. }
+| ContentBlock::Thinking { .. }
| ContentBlock::Text { .. } => None,
})
}
@@ -450,6 +455,10 @@ fn estimate_message_tokens(message: &ConversationMessage) -> usize {
ContentBlock::ToolResult {
tool_name, output, ..
} => (tool_name.len() + output.len()) / 4 + 1,
+ContentBlock::Thinking {
+thinking,
+signature,
+} => thinking.len() / 4 + signature.as_ref().map_or(0, |value| value.len() / 4 + 1),
})
.sum()
}
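The Thinking arm added to `estimate_message_tokens` uses the same length/4 heuristic (roughly four bytes per token) as the other blocks, plus one extra token when a signature is present. A standalone sketch of that arithmetic (the function name is ours, not the crate's):

```rust
// Hypothetical standalone version of the length/4 token heuristic from the
// diff above: thinking text contributes len/4 tokens, and an optional
// signature contributes len/4 plus one marker token.
fn estimate_thinking_tokens(thinking: &str, signature: Option<&str>) -> usize {
    thinking.len() / 4 + signature.map_or(0, |value| value.len() / 4 + 1)
}

fn main() {
    assert_eq!(estimate_thinking_tokens("12345678", None), 2);
    // 8/4 = 2 for the text, 3/4 = 0 plus 1 for the signature marker.
    assert_eq!(estimate_thinking_tokens("12345678", Some("sig")), 3);
}
```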

View File

@@ -414,6 +414,17 @@ impl RuntimeConfig {
pub fn trusted_roots(&self) -> &[String] {
&self.feature_config.trusted_roots
}
+/// Merge config-level default trusted roots with per-call roots.
+///
+/// Config roots are defaults and are kept first; per-call roots extend the
+/// allowlist for a specific worker/session creation request. Duplicates are
+/// removed without reordering the first occurrence so evidence remains
+/// deterministic while avoiding repeated trust checks.
+#[must_use]
+pub fn trusted_roots_with_overrides(&self, per_call_roots: &[String]) -> Vec<String> {
+merge_trusted_roots(self.trusted_roots(), per_call_roots)
+}
}
impl RuntimeFeatureConfig {
@@ -483,6 +494,22 @@ impl RuntimeFeatureConfig {
pub fn trusted_roots(&self) -> &[String] {
&self.trusted_roots
}
+/// Merge this config's default trusted roots with per-call roots.
+#[must_use]
+pub fn trusted_roots_with_overrides(&self, per_call_roots: &[String]) -> Vec<String> {
+merge_trusted_roots(self.trusted_roots(), per_call_roots)
+}
+}
+fn merge_trusted_roots(config_roots: &[String], per_call_roots: &[String]) -> Vec<String> {
+let mut merged = Vec::with_capacity(config_roots.len() + per_call_roots.len());
+for root in config_roots.iter().chain(per_call_roots.iter()) {
+if !merged.contains(root) {
+merged.push(root.clone());
+}
+}
+merged
}
impl ProviderFallbackConfig {
@@ -1245,8 +1272,8 @@ fn push_unique(target: &mut Vec<String>, value: String) {
mod tests {
use super::{
deep_merge_objects, parse_permission_mode_label, ConfigLoader, ConfigSource,
-McpServerConfig, McpTransport, ResolvedPermissionMode, RuntimeHookConfig,
-RuntimePluginConfig, CLAW_SETTINGS_SCHEMA_NAME,
+McpServerConfig, McpTransport, ResolvedPermissionMode, RuntimeFeatureConfig,
+RuntimeHookConfig, RuntimePluginConfig, CLAW_SETTINGS_SCHEMA_NAME,
};
use crate::json::JsonValue;
use crate::sandbox::FilesystemIsolationMode;
@@ -1502,6 +1529,51 @@ mod tests {
fs::remove_dir_all(root).expect("cleanup temp dir");
}
+#[test]
+fn trusted_roots_with_overrides_preserves_config_defaults_and_adds_per_call_roots() {
+// given
+let root = temp_dir();
+let cwd = root.join("project");
+let home = root.join("home").join(".claw");
+fs::create_dir_all(&home).expect("home config dir");
+fs::create_dir_all(&cwd).expect("project dir");
+fs::write(
+home.join("settings.json"),
+r#"{"trustedRoots": ["/tmp/config-default", "/tmp/shared"]}"#,
+)
+.expect("write settings");
+// when
+let loaded = ConfigLoader::new(&cwd, &home)
+.load()
+.expect("config should load");
+let merged = loaded.trusted_roots_with_overrides(&[
+"/tmp/per-call".to_string(),
+"/tmp/shared".to_string(),
+]);
+// then
+assert_eq!(
+merged,
+["/tmp/config-default", "/tmp/shared", "/tmp/per-call"]
+);
+fs::remove_dir_all(root).expect("cleanup temp dir");
+}
+#[test]
+fn runtime_feature_trusted_roots_with_overrides_matches_runtime_config_merge() {
+let config = RuntimeFeatureConfig {
+trusted_roots: vec!["/tmp/config".to_string()],
+..RuntimeFeatureConfig::default()
+};
+assert_eq!(
+config.trusted_roots_with_overrides(&["/tmp/per-call".to_string()]),
+["/tmp/config", "/tmp/per-call"]
+);
+}
#[test]
fn trusted_roots_default_is_empty_when_unset() {
// given
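The merge helper introduced in this diff is small enough to run standalone. The copy below keeps the same first-occurrence-wins semantics: config defaults stay first, per-call roots extend the list, and duplicates are dropped without reordering.

```rust
// Standalone copy of merge_trusted_roots from the hunk above; only the
// surrounding crate context is removed.
fn merge_trusted_roots(config_roots: &[String], per_call_roots: &[String]) -> Vec<String> {
    let mut merged = Vec::with_capacity(config_roots.len() + per_call_roots.len());
    for root in config_roots.iter().chain(per_call_roots.iter()) {
        // First occurrence wins: a per-call duplicate of a config default
        // is skipped rather than moved.
        if !merged.contains(root) {
            merged.push(root.clone());
        }
    }
    merged
}

fn main() {
    let config = vec!["/tmp/config-default".to_string(), "/tmp/shared".to_string()];
    let per_call = vec!["/tmp/per-call".to_string(), "/tmp/shared".to_string()];
    assert_eq!(
        merge_trusted_roots(&config, &per_call),
        ["/tmp/config-default", "/tmp/shared", "/tmp/per-call"]
    );
}
```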

View File

@@ -28,6 +28,10 @@ pub struct ApiRequest {
/// Streamed events emitted while processing a single assistant turn.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum AssistantEvent {
+Thinking {
+thinking: String,
+signature: Option<String>,
+},
TextDelta(String),
ToolUse {
id: String,
@@ -721,6 +725,16 @@ fn build_assistant_message(
for event in events {
match event {
+AssistantEvent::Thinking {
+thinking,
+signature,
+} => {
+flush_text_block(&mut text, &mut blocks);
+blocks.push(ContentBlock::Thinking {
+thinking,
+signature,
+});
+}
AssistantEvent::TextDelta(delta) => text.push_str(&delta),
AssistantEvent::ToolUse { id, name, input } => {
flush_text_block(&mut text, &mut blocks);
@@ -1723,6 +1737,47 @@ mod tests {
.contains("assistant stream produced no content"));
}
+#[test]
+fn build_assistant_message_places_thinking_block_before_text_and_tool_use() {
+// given
+let events = vec![
+AssistantEvent::Thinking {
+thinking: "pondering".to_string(),
+signature: Some("sig".to_string()),
+},
+AssistantEvent::TextDelta("hello".to_string()),
+AssistantEvent::ToolUse {
+id: "tool-1".to_string(),
+name: "echo".to_string(),
+input: "payload".to_string(),
+},
+AssistantEvent::MessageStop,
+];
+// when
+let (message, _, _) = build_assistant_message(events)
+.expect("assistant message should preserve thinking, text, and tool blocks");
+// then
+assert_eq!(
+message.blocks,
+vec![
+ContentBlock::Thinking {
+thinking: "pondering".to_string(),
+signature: Some("sig".to_string()),
+},
+ContentBlock::Text {
+text: "hello".to_string(),
+},
+ContentBlock::ToolUse {
+id: "tool-1".to_string(),
+name: "echo".to_string(),
+input: "payload".to_string(),
+},
+]
+);
+}
#[test]
fn static_tool_executor_rejects_unknown_tools() {
// given
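The ordering guarantee the new test asserts comes from the flush-before-structured-block rule in `build_assistant_message`: buffered text deltas become one Text block before any Thinking or ToolUse block is pushed. The sketch below is hypothetical (its `Block` enum and `flush_text` helper are ours), illustrating only that rule.

```rust
// Toy model of the flush rule: accumulated text deltas are flushed into a
// single Text block whenever a structured block arrives, preserving the
// Thinking -> Text -> ToolUse order seen in the test above.
#[derive(Debug, PartialEq)]
enum Block {
    Thinking(String),
    Text(String),
    ToolUse(String),
}

fn flush_text(buffer: &mut String, blocks: &mut Vec<Block>) {
    if !buffer.is_empty() {
        blocks.push(Block::Text(std::mem::take(buffer)));
    }
}

fn main() {
    let mut text = String::new();
    let mut blocks = Vec::new();
    flush_text(&mut text, &mut blocks); // empty buffer: no Text block emitted
    blocks.push(Block::Thinking("pondering".into()));
    text.push_str("hel");
    text.push_str("lo");
    flush_text(&mut text, &mut blocks); // two deltas collapse into one block
    blocks.push(Block::ToolUse("echo".into()));
    assert_eq!(
        blocks,
        vec![
            Block::Thinking("pondering".into()),
            Block::Text("hello".into()),
            Block::ToolUse("echo".into()),
        ]
    );
}
```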

View File

@@ -307,11 +307,23 @@ pub fn edit_file(
/// Expands a glob pattern and returns matching filenames.
pub fn glob_search(pattern: &str, path: Option<&str>) -> io::Result<GlobSearchOutput> {
+glob_search_impl(pattern, path, None)
+}
+fn glob_search_impl(
+pattern: &str,
+path: Option<&str>,
+workspace_root: Option<&Path>,
+) -> io::Result<GlobSearchOutput> {
let started = Instant::now();
let base_dir = path
.map(normalize_path)
.transpose()?
.unwrap_or(std::env::current_dir()?);
+let canonical_root = workspace_root.map(canonicalize_workspace_root);
+if let Some(root) = canonical_root.as_deref() {
+validate_workspace_boundary(&base_dir, root)?;
+}
let search_pattern = if Path::new(pattern).is_absolute() {
pattern.to_owned()
} else {
@@ -329,6 +341,12 @@ pub fn glob_search(pattern: &str, path: Option<&str>) -> io::Result<GlobSearchOu
let compiled = Pattern::new(pat)
.map_err(|error| io::Error::new(io::ErrorKind::InvalidInput, error.to_string()))?;
let walk_root = derive_glob_walk_root(pat);
+if let Some(root) = canonical_root.as_deref() {
+let canonical_walk_root = walk_root
+.canonicalize()
+.unwrap_or_else(|_| walk_root.clone());
+validate_workspace_boundary(&canonical_walk_root, root)?;
+}
let entries = WalkDir::new(&walk_root)
.into_iter()
.filter_entry(|entry| !should_skip_glob_dir(entry));
@@ -338,6 +356,10 @@ pub fn glob_search(pattern: &str, path: Option<&str>) -> io::Result<GlobSearchOu
&& compiled.matches_path(candidate)
&& seen.insert(candidate.to_path_buf())
{
+if let Some(root) = canonical_root.as_deref() {
+let canonical_candidate = candidate.canonicalize()?;
+validate_workspace_boundary(&canonical_candidate, root)?;
+}
matches.push(candidate.to_path_buf());
}
}
@@ -367,12 +389,23 @@ pub fn glob_search(pattern: &str, path: Option<&str>) -> io::Result<GlobSearchOu
/// Runs a regex search over workspace files with optional context lines.
pub fn grep_search(input: &GrepSearchInput) -> io::Result<GrepSearchOutput> {
+grep_search_impl(input, None)
+}
+fn grep_search_impl(
+input: &GrepSearchInput,
+workspace_root: Option<&Path>,
+) -> io::Result<GrepSearchOutput> {
let base_path = input
.path
.as_deref()
.map(normalize_path)
.transpose()?
.unwrap_or(std::env::current_dir()?);
+let canonical_root = workspace_root.map(canonicalize_workspace_root);
+if let Some(root) = canonical_root.as_deref() {
+validate_workspace_boundary(&base_path, root)?;
+}
let regex = RegexBuilder::new(&input.pattern)
.case_insensitive(input.case_insensitive.unwrap_or(false))
@@ -398,6 +431,10 @@ pub fn grep_search(input: &GrepSearchInput) -> io::Result<GrepSearchOutput> {
let mut total_matches = 0usize;
for file_path in collect_search_files(&base_path)? {
+if let Some(root) = canonical_root.as_deref() {
+let canonical_file = file_path.canonicalize()?;
+validate_workspace_boundary(&canonical_file, root)?;
+}
if !matches_optional_filters(&file_path, glob_filter.as_ref(), file_type) {
continue;
}
@@ -447,27 +484,21 @@ pub fn grep_search(input: &GrepSearchInput) -> io::Result<GrepSearchOutput> {
let (filenames, applied_limit, applied_offset) =
apply_limit(filenames, input.head_limit, input.offset);
-let content_output = if output_mode == "content" {
-let (lines, limit, offset) = apply_limit(content_lines, input.head_limit, input.offset);
-return Ok(GrepSearchOutput {
-mode: Some(output_mode),
-num_files: filenames.len(),
-filenames,
-num_lines: Some(lines.len()),
-content: Some(lines.join("\n")),
-num_matches: None,
-applied_limit: limit,
-applied_offset: offset,
-});
-} else {
-None
-};
+if output_mode == "content" {
+return Ok(build_grep_content_output(
+output_mode,
+filenames,
+content_lines,
+input.head_limit,
+input.offset,
+));
+}
Ok(GrepSearchOutput {
mode: Some(output_mode.clone()),
num_files: filenames.len(),
filenames,
-content: content_output,
+content: None,
num_lines: None,
num_matches: (output_mode == "count").then_some(total_matches),
applied_limit,
@@ -475,6 +506,32 @@ pub fn grep_search(input: &GrepSearchInput) -> io::Result<GrepSearchOutput> {
})
}
+fn build_grep_content_output(
+output_mode: String,
+filenames: Vec<String>,
+content_lines: Vec<String>,
+head_limit: Option<usize>,
+offset: Option<usize>,
+) -> GrepSearchOutput {
+let (lines, limit, offset) = apply_limit(content_lines, head_limit, offset);
+GrepSearchOutput {
+mode: Some(output_mode),
+num_files: filenames.len(),
+filenames,
+num_lines: Some(lines.len()),
+content: Some(lines.join("\n")),
+num_matches: None,
+applied_limit: limit,
+applied_offset: offset,
+}
+}
+fn canonicalize_workspace_root(workspace_root: &Path) -> PathBuf {
+workspace_root
+.canonicalize()
+.unwrap_or_else(|_| workspace_root.to_path_buf())
+}
fn should_skip_glob_dir(entry: &DirEntry) -> bool {
entry.file_type().is_dir()
&& entry
@@ -625,9 +682,7 @@ pub fn read_file_in_workspace(
workspace_root: &Path,
) -> io::Result<ReadFileOutput> {
let absolute_path = normalize_path(path)?;
-let canonical_root = workspace_root
-.canonicalize()
-.unwrap_or_else(|_| workspace_root.to_path_buf());
+let canonical_root = canonicalize_workspace_root(workspace_root);
validate_workspace_boundary(&absolute_path, &canonical_root)?;
read_file(path, offset, limit)
}
@@ -640,9 +695,7 @@ pub fn write_file_in_workspace(
workspace_root: &Path,
) -> io::Result<WriteFileOutput> {
let absolute_path = normalize_path_allow_missing(path)?;
-let canonical_root = workspace_root
-.canonicalize()
-.unwrap_or_else(|_| workspace_root.to_path_buf());
+let canonical_root = canonicalize_workspace_root(workspace_root);
validate_workspace_boundary(&absolute_path, &canonical_root)?;
write_file(path, content)
}
@@ -657,13 +710,30 @@ pub fn edit_file_in_workspace(
workspace_root: &Path,
) -> io::Result<EditFileOutput> {
let absolute_path = normalize_path(path)?;
-let canonical_root = workspace_root
-.canonicalize()
-.unwrap_or_else(|_| workspace_root.to_path_buf());
+let canonical_root = canonicalize_workspace_root(workspace_root);
validate_workspace_boundary(&absolute_path, &canonical_root)?;
edit_file(path, old_string, new_string, replace_all)
}
+/// Expand a glob pattern with workspace boundary enforcement.
+#[allow(dead_code)]
+pub fn glob_search_in_workspace(
+pattern: &str,
+path: Option<&str>,
+workspace_root: &Path,
+) -> io::Result<GlobSearchOutput> {
+glob_search_impl(pattern, path, Some(workspace_root))
+}
+/// Search file contents with workspace boundary enforcement.
+#[allow(dead_code)]
+pub fn grep_search_in_workspace(
+input: &GrepSearchInput,
+workspace_root: &Path,
+) -> io::Result<GrepSearchOutput> {
+grep_search_impl(input, Some(workspace_root))
+}
/// Check whether a path is a symlink that resolves outside the workspace.
#[allow(dead_code)]
pub fn is_symlink_escape(path: &Path, workspace_root: &Path) -> io::Result<bool> {
@@ -708,7 +778,7 @@ mod tests {
use super::{
component_contains_glob, derive_glob_walk_root, edit_file, expand_braces, glob_search,
grep_search, is_symlink_escape, read_file, read_file_in_workspace, write_file,
-GrepSearchInput, MAX_WRITE_SIZE,
+write_file_in_workspace, GrepSearchInput, MAX_WRITE_SIZE,
};
fn temp_path(name: &str) -> std::path::PathBuf {
@@ -808,6 +878,68 @@ mod tests {
assert!(!is_symlink_escape(&normal, &workspace).expect("check should succeed"));
}
+#[test]
+#[cfg(unix)]
+fn workspace_read_rejects_symlink_escape_regression_3007_class() {
+let workspace = temp_path("workspace-read-symlink-escape");
+let outside = temp_path("workspace-read-symlink-target");
+std::fs::create_dir_all(&workspace).expect("workspace dir should be created");
+std::fs::create_dir_all(&outside).expect("outside dir should be created");
+let outside_file = outside.join("secret.txt");
+std::fs::write(&outside_file, "outside secret").expect("outside file should write");
+let link_path = workspace.join("linked-secret.txt");
+std::os::unix::fs::symlink(&outside_file, &link_path).expect("symlink should create");
+let result =
+read_file_in_workspace(link_path.to_string_lossy().as_ref(), None, None, &workspace);
+assert!(result.is_err(), "symlink escape must be rejected");
+let error = result.unwrap_err();
+assert_eq!(error.kind(), std::io::ErrorKind::PermissionDenied);
+assert!(
+error.to_string().contains("escapes workspace"),
+"error should explain workspace escape: {error}"
+);
+let _ = std::fs::remove_dir_all(&workspace);
+let _ = std::fs::remove_dir_all(&outside);
+}
+#[test]
+#[cfg(unix)]
+fn workspace_write_rejects_parent_symlink_escape_regression_3007_class() {
+let workspace = temp_path("workspace-write-symlink-escape");
+let outside = temp_path("workspace-write-symlink-target");
+std::fs::create_dir_all(&workspace).expect("workspace dir should be created");
+std::fs::create_dir_all(&outside).expect("outside dir should be created");
+let link_dir = workspace.join("linked-outside");
+std::os::unix::fs::symlink(&outside, &link_dir).expect("symlink dir should create");
+let escaped_child = link_dir.join("created.txt");
+let result = write_file_in_workspace(
+escaped_child.to_string_lossy().as_ref(),
+"must not escape",
+&workspace,
+);
+assert!(result.is_err(), "parent symlink escape must be rejected");
+let error = result.unwrap_err();
+assert_eq!(error.kind(), std::io::ErrorKind::PermissionDenied);
+assert!(
+error.to_string().contains("escapes workspace"),
+"error should explain workspace escape: {error}"
+);
+assert!(
+!outside.join("created.txt").exists(),
+"write should not create through an escaping symlink"
+);
+let _ = std::fs::remove_dir_all(&workspace);
+let _ = std::fs::remove_dir_all(&outside);
+}
#[test]
fn globs_and_greps_directory() {
let dir = temp_path("search-dir");
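The symlink-escape tests above all reduce to one rule: canonicalize both the workspace root and the candidate path (which resolves symlinks), then require the candidate to stay under the root. The sketch below is an assumed simplification; `validate_workspace_boundary` itself lives elsewhere in the crate and also produces the `PermissionDenied` error the tests check.

```rust
use std::io;
use std::path::Path;

// Hedged sketch of the boundary rule (not the crate's implementation):
// canonicalization resolves symlinks, so a link pointing outside the
// workspace fails the starts_with prefix check.
fn is_inside_workspace(candidate: &Path, workspace_root: &Path) -> io::Result<bool> {
    let canonical_root = workspace_root.canonicalize()?;
    let canonical_candidate = candidate.canonicalize()?;
    Ok(canonical_candidate.starts_with(&canonical_root))
}

fn main() -> io::Result<()> {
    let root = std::env::temp_dir();
    // A path inside the root stays inside after canonicalization.
    assert!(is_inside_workspace(&root.join("."), &root)?);
    Ok(())
}
```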

View File

@@ -0,0 +1,399 @@
//! Machine-checkable conformance helpers for G004 event/report contract bundles.
//!
//! The harness intentionally validates JSON-shaped artifacts instead of owning the
//! lane-event, report, or approval-token implementations. This keeps it usable by
//! independent implementation lanes and by golden fixtures produced outside the
//! runtime crate.
use serde_json::Value;
const BUNDLE_SCHEMA_VERSION: &str = "g004.contract.bundle.v1";
const REPORT_SCHEMA_VERSION: &str = "g004.report.v1";
/// A single conformance validation failure.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct G004ConformanceError {
/// JSON pointer-ish path to the invalid field.
pub path: String,
/// Human-readable reason the field failed validation.
pub message: String,
}
impl G004ConformanceError {
fn new(path: impl Into<String>, message: impl Into<String>) -> Self {
Self {
path: path.into(),
message: message.into(),
}
}
}
/// Validate a G004 golden contract bundle.
///
/// The bundle shape is deliberately small and cross-lane:
/// - `laneEvents[]` must expose stable event identity, ordering/provenance, and
/// terminal dedupe fingerprints.
/// - `reports[]` must expose schema identity, content hash, projection/redaction
/// provenance, capability negotiation, fact/hypothesis/negative-evidence
/// labels, confidence, and field-level delta attribution.
/// - `approvalTokens[]` must expose owner/scope, delegation chain, one-time-use,
/// and replay-prevention fields.
#[must_use]
pub fn validate_g004_contract_bundle(bundle: &Value) -> Vec<G004ConformanceError> {
let mut errors = Vec::new();
require_string_eq(bundle, "/schemaVersion", BUNDLE_SCHEMA_VERSION, &mut errors);
validate_lane_events(bundle.get("laneEvents"), "/laneEvents", &mut errors);
validate_reports(bundle.get("reports"), "/reports", &mut errors);
validate_approval_tokens(bundle.get("approvalTokens"), "/approvalTokens", &mut errors);
errors
}
#[must_use]
pub fn is_g004_contract_bundle_valid(bundle: &Value) -> bool {
validate_g004_contract_bundle(bundle).is_empty()
}
fn validate_lane_events(value: Option<&Value>, path: &str, errors: &mut Vec<G004ConformanceError>) {
let Some(events) = non_empty_array(value, path, errors) else {
return;
};
let mut previous_seq = None;
for (index, event) in events.iter().enumerate() {
let base = format!("{path}/{index}");
require_non_empty_string_at(event, "/event", &format!("{base}/event"), errors);
require_non_empty_string_at(event, "/status", &format!("{base}/status"), errors);
require_non_empty_string_at(event, "/emittedAt", &format!("{base}/emittedAt"), errors);
require_non_empty_string_at(
event,
"/metadata/provenance",
&format!("{base}/metadata/provenance"),
errors,
);
require_non_empty_string_at(
event,
"/metadata/emitterIdentity",
&format!("{base}/metadata/emitterIdentity"),
errors,
);
require_non_empty_string_at(
event,
"/metadata/environmentLabel",
&format!("{base}/metadata/environmentLabel"),
errors,
);
match get_path(event, "/metadata/seq").and_then(Value::as_u64) {
Some(seq) => {
if let Some(previous) = previous_seq {
if seq <= previous {
errors.push(G004ConformanceError::new(
format!("{base}/metadata/seq"),
"sequence must be strictly increasing",
));
}
}
previous_seq = Some(seq);
}
None => errors.push(G004ConformanceError::new(
format!("{base}/metadata/seq"),
"required u64 field missing",
)),
}
if is_terminal_event_value(event.get("event")) {
require_non_empty_string_at(
event,
"/metadata/eventFingerprint",
&format!("{base}/metadata/eventFingerprint"),
errors,
);
}
}
}
fn validate_reports(value: Option<&Value>, path: &str, errors: &mut Vec<G004ConformanceError>) {
let Some(reports) = non_empty_array(value, path, errors) else {
return;
};
for (index, report) in reports.iter().enumerate() {
let base = format!("{path}/{index}");
require_string_eq_at(
report,
"/schemaVersion",
&format!("{base}/schemaVersion"),
REPORT_SCHEMA_VERSION,
errors,
);
require_non_empty_string_at(report, "/reportId", &format!("{base}/reportId"), errors);
require_non_empty_string_at(
report,
"/identity/contentHash",
&format!("{base}/identity/contentHash"),
errors,
);
require_non_empty_string_at(
report,
"/projection/provenance",
&format!("{base}/projection/provenance"),
errors,
);
require_non_empty_string_at(
report,
"/redaction/provenance",
&format!("{base}/redaction/provenance"),
errors,
);
non_empty_array(
get_path(report, "/consumerCapabilities"),
&format!("{base}/consumerCapabilities"),
errors,
);
validate_findings(
get_path(report, "/findings"),
&format!("{base}/findings"),
errors,
);
validate_field_deltas(
get_path(report, "/fieldDeltas"),
&format!("{base}/fieldDeltas"),
errors,
);
}
}
fn validate_findings(value: Option<&Value>, path: &str, errors: &mut Vec<G004ConformanceError>) {
let Some(findings) = non_empty_array(value, path, errors) else {
return;
};
for (index, finding) in findings.iter().enumerate() {
let base = format!("{path}/{index}");
require_one_of_at(
finding,
"/kind",
&format!("{base}/kind"),
&["fact", "hypothesis", "negative_evidence"],
errors,
);
require_one_of_at(
finding,
"/confidence",
&format!("{base}/confidence"),
&["low", "medium", "high"],
errors,
);
require_non_empty_string_at(finding, "/statement", &format!("{base}/statement"), errors);
}
}
fn validate_field_deltas(
value: Option<&Value>,
path: &str,
errors: &mut Vec<G004ConformanceError>,
) {
let Some(deltas) = non_empty_array(value, path, errors) else {
return;
};
for (index, delta) in deltas.iter().enumerate() {
let base = format!("{path}/{index}");
require_non_empty_string_at(delta, "/field", &format!("{base}/field"), errors);
require_non_empty_string_at(
delta,
"/previousHash",
&format!("{base}/previousHash"),
errors,
);
require_non_empty_string_at(
delta,
"/currentHash",
&format!("{base}/currentHash"),
errors,
);
require_non_empty_string_at(
delta,
"/attribution",
&format!("{base}/attribution"),
errors,
);
}
}
fn validate_approval_tokens(
value: Option<&Value>,
path: &str,
errors: &mut Vec<G004ConformanceError>,
) {
let Some(tokens) = non_empty_array(value, path, errors) else {
return;
};
for (index, token) in tokens.iter().enumerate() {
let base = format!("{path}/{index}");
require_non_empty_string_at(token, "/tokenId", &format!("{base}/tokenId"), errors);
require_non_empty_string_at(token, "/owner", &format!("{base}/owner"), errors);
require_non_empty_string_at(token, "/scope", &format!("{base}/scope"), errors);
require_non_empty_string_at(token, "/issuedAt", &format!("{base}/issuedAt"), errors);
require_bool_true_at(token, "/oneTimeUse", &format!("{base}/oneTimeUse"), errors);
require_non_empty_string_at(
token,
"/replayPreventionNonce",
&format!("{base}/replayPreventionNonce"),
errors,
);
validate_delegation_chain(
get_path(token, "/delegationChain"),
&format!("{base}/delegationChain"),
errors,
);
}
}
fn validate_delegation_chain(
value: Option<&Value>,
path: &str,
errors: &mut Vec<G004ConformanceError>,
) {
let Some(chain) = non_empty_array(value, path, errors) else {
return;
};
for (index, hop) in chain.iter().enumerate() {
let base = format!("{path}/{index}");
require_non_empty_string_at(hop, "/from", &format!("{base}/from"), errors);
require_non_empty_string_at(hop, "/to", &format!("{base}/to"), errors);
require_non_empty_string_at(hop, "/action", &format!("{base}/action"), errors);
require_non_empty_string_at(hop, "/at", &format!("{base}/at"), errors);
}
}
fn non_empty_array<'a>(
value: Option<&'a Value>,
path: &str,
errors: &mut Vec<G004ConformanceError>,
) -> Option<&'a Vec<Value>> {
match value.and_then(Value::as_array) {
Some(array) if !array.is_empty() => Some(array),
Some(_) => {
errors.push(G004ConformanceError::new(path, "array must not be empty"));
None
}
None => {
errors.push(G004ConformanceError::new(
path,
"required array field missing",
));
None
}
}
}
fn require_string_eq(
root: &Value,
path: &str,
expected: &str,
errors: &mut Vec<G004ConformanceError>,
) {
require_string_eq_at(root, path, path, expected, errors);
}
fn require_string_eq_at(
root: &Value,
pointer: &str,
error_path: &str,
expected: &str,
errors: &mut Vec<G004ConformanceError>,
) {
match get_path(root, pointer).and_then(Value::as_str) {
Some(actual) if actual == expected => {}
Some(actual) => errors.push(G004ConformanceError::new(
error_path,
format!("expected '{expected}', got '{actual}'"),
)),
None => errors.push(G004ConformanceError::new(
error_path,
"required string field missing",
)),
}
}
fn require_non_empty_string_at(
root: &Value,
pointer: &str,
error_path: &str,
errors: &mut Vec<G004ConformanceError>,
) {
match get_path(root, pointer).and_then(Value::as_str) {
Some(value) if !value.trim().is_empty() => {}
Some(_) => errors.push(G004ConformanceError::new(
error_path,
"string must not be empty",
)),
None => errors.push(G004ConformanceError::new(
error_path,
"required string field missing",
)),
}
}
fn require_one_of_at(
root: &Value,
pointer: &str,
error_path: &str,
allowed: &[&str],
errors: &mut Vec<G004ConformanceError>,
) {
match get_path(root, pointer).and_then(Value::as_str) {
Some(value) if allowed.contains(&value) => {}
Some(value) => errors.push(G004ConformanceError::new(
error_path,
format!("'{value}' is not one of {}", allowed.join(", ")),
)),
None => errors.push(G004ConformanceError::new(
error_path,
"required string field missing",
)),
}
}
fn require_bool_true_at(
root: &Value,
pointer: &str,
error_path: &str,
errors: &mut Vec<G004ConformanceError>,
) {
match get_path(root, pointer).and_then(Value::as_bool) {
Some(true) => {}
Some(false) => errors.push(G004ConformanceError::new(error_path, "must be true")),
None => errors.push(G004ConformanceError::new(
error_path,
"required boolean field missing",
)),
}
}
fn is_terminal_event_value(value: Option<&Value>) -> bool {
matches!(
value.and_then(Value::as_str),
Some("lane.finished" | "lane.failed" | "lane.merged" | "lane.superseded" | "lane.closed")
)
}
fn get_path<'a>(root: &'a Value, path: &str) -> Option<&'a Value> {
if let Some(value) = root.pointer(path) {
return Some(value);
}
let segments = path.trim_start_matches('/').split('/').collect::<Vec<_>>();
for index in 1..segments.len() {
let relative = format!("/{}", segments[index..].join("/"));
if let Some(value) = root.pointer(&relative) {
return Some(value);
}
}
None
}
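The sequencing rule that `validate_lane_events` enforces on `/metadata/seq` can be sketched in isolation. This is a minimal stdlib-only illustration, not the runtime's API: `check_seq_order` is a hypothetical name, and it mirrors just the two behaviors above — a missing `seq` is reported without aborting the scan, and any non-increasing `seq` is flagged against its index.

```rust
/// Sketch of the strictly-increasing sequence check applied to
/// /metadata/seq: each present value must exceed the previous one,
/// and a missing value yields its own error while scanning continues.
fn check_seq_order(seqs: &[Option<u64>]) -> Vec<String> {
    let mut errors = Vec::new();
    let mut previous_seq = None;
    for (index, seq) in seqs.iter().enumerate() {
        match seq {
            Some(seq) => {
                if let Some(previous) = previous_seq {
                    if *seq <= previous {
                        errors.push(format!(
                            "/{index}/metadata/seq: sequence must be strictly increasing"
                        ));
                    }
                }
                previous_seq = Some(*seq);
            }
            None => errors.push(format!(
                "/{index}/metadata/seq: required u64 field missing"
            )),
        }
    }
    errors
}
```

Note that a gap in the numbering (e.g. 1 then 5) is accepted; only equality or regression is an error, which matches the "strictly increasing" wording of the emitted message.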

View File

@@ -449,18 +449,21 @@ pub fn compute_event_fingerprint(
     status: &LaneEventStatus,
     data: Option<&serde_json::Value>,
 ) -> String {
-    use std::collections::hash_map::DefaultHasher;
-    use std::hash::{Hash, Hasher};
-    let mut hasher = DefaultHasher::new();
-    format!("{event:?}").hash(&mut hasher);
-    format!("{status:?}").hash(&mut hasher);
-    if let Some(d) = data {
-        serde_json::to_string(d)
-            .unwrap_or_default()
-            .hash(&mut hasher);
-    }
-    format!("{:016x}", hasher.finish())
+    use sha2::{Digest, Sha256};
+    let payload = serde_json::json!({
+        "event": event,
+        "status": status,
+        "data": data,
+    });
+    let canonical = serde_json::to_vec(&payload).unwrap_or_default();
+    let digest = Sha256::digest(canonical);
+    let mut fingerprint = String::with_capacity(16);
+    for byte in &digest[..8] {
+        use std::fmt::Write as _;
+        write!(&mut fingerprint, "{byte:02x}").expect("writing to String should not fail");
+    }
+    fingerprint
 }
 
 /// Classification of event terminality for reconciliation.
@@ -1045,6 +1048,7 @@ impl LaneEvent {
             emitted_at,
         )
         .with_optional_detail(detail)
+        .with_terminal_fingerprint()
     }
 
     #[must_use]
@@ -1098,7 +1102,7 @@ impl LaneEvent {
             event =
                 event.with_data(serde_json::to_value(subphase).expect("subphase should serialize"));
         }
-        event
+        event.with_terminal_fingerprint()
     }
 
     /// Ship prepared — §4.44.5
@@ -1170,6 +1174,21 @@ impl LaneEvent {
     #[must_use]
     pub fn with_data(mut self, data: Value) -> Self {
         self.data = Some(data);
+        if is_terminal_event(self.event) {
+            self = self.with_terminal_fingerprint();
+        }
+        self
+    }
+
+    #[must_use]
+    fn with_terminal_fingerprint(mut self) -> Self {
+        if is_terminal_event(self.event) {
+            self.metadata.event_fingerprint = Some(compute_event_fingerprint(
+                &self.event,
+                &self.status,
+                self.data.as_ref(),
+            ));
+        }
         self
     }
 }
@@ -1375,6 +1394,39 @@ mod tests {
         assert_eq!(round_trip.event, LaneEventName::ShipPushedMain);
     }
 
+    #[test]
+    fn convenience_terminal_events_attach_and_refresh_fingerprints() {
+        let finished = LaneEvent::finished("2026-04-04T00:00:00Z", Some("done".to_string()));
+        let initial_fingerprint = finished
+            .metadata
+            .event_fingerprint
+            .clone()
+            .expect("finished events should carry terminal fingerprint");
+        let with_payload = finished.with_data(json!({"result": "ok", "attempt": 1}));
+        assert!(with_payload.metadata.event_fingerprint.is_some());
+        assert_ne!(
+            Some(initial_fingerprint),
+            with_payload.metadata.event_fingerprint,
+            "payload changes must refresh the actionable terminal fingerprint"
+        );
+    }
+
+    #[test]
+    fn tool_style_finished_events_dedupe_after_payload_is_added() {
+        let first = LaneEvent::finished("2026-04-04T00:00:00Z", Some("done".to_string()))
+            .with_data(json!({"result": "ok"}));
+        let duplicate = LaneEvent::finished("2026-04-04T00:00:01Z", Some("done again".to_string()))
+            .with_data(json!({"result": "ok"}));
+        assert_eq!(
+            first.metadata.event_fingerprint,
+            duplicate.metadata.event_fingerprint
+        );
+        let deduped = dedupe_terminal_events(&[first, duplicate]);
+        assert_eq!(deduped.len(), 1);
+    }
+
     #[test]
     fn commit_events_can_carry_worktree_and_supersession_metadata() {
         let event = LaneEvent::commit_created(
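The fingerprint change above replaces a `DefaultHasher` digest with a SHA-256 digest truncated to its first 8 bytes. The truncation-and-hex step can be shown on its own; this is an illustrative stdlib-only sketch (`fingerprint_prefix` is a hypothetical name, and the digest bytes here are hand-picked, not produced by SHA-256):

```rust
use std::fmt::Write as _;

/// Render the first 8 bytes of a digest as a stable 16-character
/// lowercase hex fingerprint, regardless of the digest's full length.
fn fingerprint_prefix(digest: &[u8]) -> String {
    let mut fingerprint = String::with_capacity(16);
    for byte in &digest[..8] {
        write!(&mut fingerprint, "{byte:02x}").expect("writing to String should not fail");
    }
    fingerprint
}
```

Truncating to 8 bytes keeps the fingerprint the same width as the old `{:016x}` rendering of a `u64`, so downstream consumers comparing fixed-width fingerprints are unaffected by the hash swap.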

View File

@@ -4,6 +4,7 @@
 //! MCP plumbing, tool-facing file operations, and the core conversation loop
 //! that drives interactive and one-shot turns.
 
+mod approval_tokens;
 mod bash;
 pub mod bash_validation;
 mod bootstrap;
@@ -13,6 +14,7 @@ mod config;
 pub mod config_validate;
 mod conversation;
 mod file_ops;
+pub mod g004_conformance;
 mod git_context;
 pub mod green_contract;
 mod hooks;
@@ -33,6 +35,7 @@ mod policy_engine;
 mod prompt;
 pub mod recovery_recipes;
 mod remote;
+mod report_schema;
 pub mod sandbox;
 mod session;
 pub mod session_control;
@@ -49,6 +52,10 @@ mod trust_resolver;
 mod usage;
 pub mod worker_boot;
 
+pub use approval_tokens::{
+    ApprovalDelegationHop, ApprovalScope, ApprovalTokenAudit, ApprovalTokenError,
+    ApprovalTokenGrant, ApprovalTokenLedger, ApprovalTokenStatus,
+};
 pub use bash::{execute_bash, BashCommandInput, BashCommandOutput};
 pub use bootstrap::{BootstrapPhase, BootstrapPlan};
 pub use branch_lock::{detect_branch_lock_collisions, BranchLockCollision, BranchLockIntent};
@@ -74,9 +81,10 @@ pub use conversation::{
     ToolExecutor, TurnSummary,
 };
 pub use file_ops::{
-    edit_file, glob_search, grep_search, read_file, write_file, EditFileOutput, GlobSearchOutput,
-    GrepSearchInput, GrepSearchOutput, ReadFileOutput, StructuredPatchHunk, TextFilePayload,
-    WriteFileOutput,
+    edit_file, edit_file_in_workspace, glob_search, glob_search_in_workspace, grep_search,
+    grep_search_in_workspace, read_file, read_file_in_workspace, write_file,
+    write_file_in_workspace, EditFileOutput, GlobSearchOutput, GrepSearchInput, GrepSearchOutput,
+    ReadFileOutput, StructuredPatchHunk, TextFilePayload, WriteFileOutput,
 };
 pub use git_context::{GitCommitEntry, GitContext};
 pub use hooks::{
@@ -131,8 +139,8 @@ pub use policy_engine::{
     PolicyEngine, PolicyRule, ReconcileReason, ReviewStatus,
 };
 pub use prompt::{
-    load_system_prompt, prepend_bullets, ContextFile, ProjectContext, PromptBuildError,
-    SystemPromptBuilder, FRONTIER_MODEL_NAME, SYSTEM_PROMPT_DYNAMIC_BOUNDARY,
+    load_system_prompt, prepend_bullets, ContextFile, ModelFamilyIdentity, ProjectContext,
+    PromptBuildError, SystemPromptBuilder, FRONTIER_MODEL_NAME, SYSTEM_PROMPT_DYNAMIC_BOUNDARY,
 };
 pub use recovery_recipes::{
     attempt_recovery, recipe_for, EscalationPolicy, FailureScenario, RecoveryContext,
@@ -143,6 +151,13 @@ pub use remote::{
     RemoteSessionContext, UpstreamProxyBootstrap, UpstreamProxyState, DEFAULT_REMOTE_BASE_URL,
     DEFAULT_SESSION_TOKEN_PATH, DEFAULT_SYSTEM_CA_BUNDLE, NO_PROXY_HOSTS, UPSTREAM_PROXY_ENV_KEYS,
 };
+pub use report_schema::{
+    canonicalize_report, project_report, report_content_hash, report_schema_v1_registry,
+    CanonicalReportV1, ClaimKind, ConsumerCapabilities, FieldDelta, FieldDeltaState,
+    NegativeEvidence, NegativeFindingStatus, ProjectionProvenance, RedactionProvenance,
+    ReportClaim, ReportConfidence, ReportIdentity, ReportProjectionV1, ReportSchemaField,
+    ReportSchemaRegistry, SensitivityClass, DEFAULT_PROJECTION_POLICY_V1, REPORT_SCHEMA_V1,
+};
 pub use sandbox::{
     build_linux_sandbox_command, detect_container_environment, detect_container_environment_from,
     resolve_sandbox_status, resolve_sandbox_status_for_request, ContainerEnvironment,

View File

@@ -2,7 +2,7 @@ use std::time::Duration;
 
 pub type GreenLevel = u8;
 
-const STALE_BRANCH_THRESHOLD: Duration = Duration::from_secs(60 * 60);
+const STALE_BRANCH_THRESHOLD: Duration = Duration::from_hours(1);
 
 #[derive(Debug, Clone, PartialEq, Eq)]
 pub struct PolicyRule {

View File

@@ -43,6 +43,24 @@ pub const FRONTIER_MODEL_NAME: &str = "Claude Opus 4.6";
 const MAX_INSTRUCTION_FILE_CHARS: usize = 4_000;
 const MAX_TOTAL_INSTRUCTION_CHARS: usize = 12_000;
 
+/// Neutral identity for the model family line in generated prompts.
+#[derive(Debug, Clone, Copy, Default, PartialEq, Eq)]
+pub enum ModelFamilyIdentity {
+    #[default]
+    Claude,
+    Generic,
+}
+
+impl ModelFamilyIdentity {
+    #[must_use]
+    pub const fn family_label(self) -> &'static str {
+        match self {
+            Self::Claude => FRONTIER_MODEL_NAME,
+            Self::Generic => "an AI assistant",
+        }
+    }
+}
+
 /// Contents of an instruction file included in prompt construction.
 #[derive(Debug, Clone, PartialEq, Eq)]
 pub struct ContextFile {
@@ -97,6 +115,7 @@ pub struct SystemPromptBuilder {
     output_style_prompt: Option<String>,
     os_name: Option<String>,
     os_version: Option<String>,
+    model_family: Option<ModelFamilyIdentity>,
     append_sections: Vec<String>,
     project_context: Option<ProjectContext>,
     config: Option<RuntimeConfig>,
@@ -122,6 +141,12 @@ impl SystemPromptBuilder {
         self
     }
 
+    #[must_use]
+    pub fn with_model_family(mut self, model_family: ModelFamilyIdentity) -> Self {
+        self.model_family = Some(model_family);
+        self
+    }
+
     #[must_use]
     pub fn with_project_context(mut self, project_context: ProjectContext) -> Self {
         self.project_context = Some(project_context);
@@ -179,9 +204,10 @@ impl SystemPromptBuilder {
             || "unknown".to_string(),
             |context| context.current_date.clone(),
         );
+        let identity = self.model_family.unwrap_or_default();
         let mut lines = vec!["# Environment context".to_string()];
         lines.extend(prepend_bullets(vec![
-            format!("Model family: {FRONTIER_MODEL_NAME}"),
+            format!("Model family: {}", identity.family_label()),
             format!("Working directory: {cwd}"),
             format!("Date: {date}"),
             format!(
@@ -434,12 +460,14 @@ pub fn load_system_prompt(
     current_date: impl Into<String>,
     os_name: impl Into<String>,
     os_version: impl Into<String>,
+    model_family: ModelFamilyIdentity,
 ) -> Result<Vec<String>, PromptBuildError> {
     let cwd = cwd.into();
     let project_context = ProjectContext::discover_with_git(&cwd, current_date.into())?;
     let config = ConfigLoader::default_for(&cwd).load()?;
 
     Ok(SystemPromptBuilder::new()
         .with_os(os_name, os_version)
+        .with_model_family(model_family)
         .with_project_context(project_context)
         .with_runtime_config(config)
         .build())
@@ -522,7 +550,8 @@ mod tests {
     use super::{
         collapse_blank_lines, display_context_path, normalize_instruction_content,
         render_instruction_content, render_instruction_files, truncate_instruction_content,
-        ContextFile, ProjectContext, SystemPromptBuilder, SYSTEM_PROMPT_DYNAMIC_BOUNDARY,
+        ContextFile, ModelFamilyIdentity, ProjectContext, SystemPromptBuilder,
+        SYSTEM_PROMPT_DYNAMIC_BOUNDARY,
     };
     use crate::config::ConfigLoader;
     use std::fs;
@@ -804,13 +833,19 @@ mod tests {
         std::env::set_var("HOME", &root);
         std::env::set_var("CLAW_CONFIG_HOME", root.join("missing-home"));
         std::env::set_current_dir(&root).expect("change cwd");
-        let prompt = super::load_system_prompt(&root, "2026-03-31", "linux", "6.8")
-            .expect("system prompt should load")
-            .join(
-                "
-",
-            );
+        let prompt = super::load_system_prompt(
+            &root,
+            "2026-03-31",
+            "linux",
+            "6.8",
+            ModelFamilyIdentity::Claude,
+        )
+        .expect("system prompt should load")
+        .join(
+            "
+",
+        );
         std::env::set_current_dir(previous).expect("restore cwd");
         if let Some(value) = original_home {
             std::env::set_var("HOME", value);
@@ -828,6 +863,50 @@ mod tests {
         fs::remove_dir_all(root).expect("cleanup temp dir");
     }
 
+    #[test]
+    fn renders_default_claude_model_family_identity() {
+        // given: a prompt builder without an explicit model family override
+        let project_context = ProjectContext {
+            cwd: PathBuf::from("/tmp/project"),
+            current_date: "2026-03-31".to_string(),
+            ..ProjectContext::default()
+        };
+
+        // when: rendering the system prompt environment section
+        let prompt = SystemPromptBuilder::new()
+            .with_os("linux", "6.8")
+            .with_project_context(project_context)
+            .render();
+
+        // then: the Claude model family label is preserved by default
+        assert!(prompt.contains("Model family: Claude Opus 4.6"));
+    }
+
+    #[test]
+    fn renders_generic_model_family_identity_without_claude_label() {
+        // given: a prompt builder with generic model family identity
+        let project_context = ProjectContext {
+            cwd: PathBuf::from("/tmp/project"),
+            current_date: "2026-03-31".to_string(),
+            ..ProjectContext::default()
+        };
+
+        // when: rendering the system prompt environment section
+        let prompt = SystemPromptBuilder::new()
+            .with_os("linux", "6.8")
+            .with_model_family(ModelFamilyIdentity::Generic)
+            .with_project_context(project_context)
+            .render();
+
+        let model_family_line = prompt
+            .lines()
+            .find(|line| line.contains("Model family:"))
+            .expect("model family line should render");
+
+        // then: the model family line is neutral and excludes Claude Opus 4.6
+        assert_eq!(model_family_line, " - Model family: an AI assistant");
+        assert!(!model_family_line.contains("Claude Opus 4.6"));
+    }
+
     #[test]
     fn renders_claude_code_style_sections_with_project_context() {
         let root = temp_dir();
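The `ModelFamilyIdentity` wiring above hinges on one small pattern: the builder stores `Option<ModelFamilyIdentity>` and resolves it with `unwrap_or_default()`, so omitting the override keeps the Claude label while `Generic` swaps in the neutral one. A stdlib-only restatement (the `Identity`/`resolve` names here are illustrative, not the crate's API):

```rust
/// A defaulted enum plus Option-based override, as used by the prompt
/// builder: None resolves to the #[default] variant.
#[derive(Debug, Clone, Copy, Default, PartialEq, Eq)]
enum Identity {
    #[default]
    Claude,
    Generic,
}

impl Identity {
    const fn family_label(self) -> &'static str {
        match self {
            Self::Claude => "Claude Opus 4.6",
            Self::Generic => "an AI assistant",
        }
    }
}

/// Resolve an optional override the way the builder's render path does.
fn resolve(configured: Option<Identity>) -> &'static str {
    configured.unwrap_or_default().family_label()
}
```

`#[derive(Default)]` with a `#[default]` variant has been stable since Rust 1.62, which is what lets `unwrap_or_default()` work directly on the enum option without a handwritten `Default` impl.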

View File

@@ -0,0 +1,552 @@
use std::collections::{BTreeMap, BTreeSet};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use sha2::{Digest, Sha256};
pub const REPORT_SCHEMA_V1: &str = "claw.report.v1";
pub const DEFAULT_PROJECTION_POLICY_V1: &str = "claw.report.projection.v1";
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum ClaimKind {
ObservedFact,
Inference,
Hypothesis,
Recommendation,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum ReportConfidence {
High,
Medium,
Low,
Unknown,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum SensitivityClass {
Public,
Internal,
OperatorOnly,
Secret,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum FieldDeltaState {
Changed,
Unchanged,
Cleared,
CarriedForward,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum NegativeFindingStatus {
NotObservedInCheckedScope,
UnknownNotChecked,
}
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct ReportClaim {
pub id: String,
pub kind: ClaimKind,
pub text: String,
pub confidence: ReportConfidence,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub evidence: Vec<String>,
pub sensitivity: SensitivityClass,
}
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct NegativeEvidence {
pub id: String,
pub status: NegativeFindingStatus,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub checked_surfaces: Vec<String>,
pub query: String,
pub window: String,
pub sensitivity: SensitivityClass,
}
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct FieldDelta {
pub field: String,
pub state: FieldDeltaState,
#[serde(skip_serializing_if = "Option::is_none")]
pub previous_hash: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub current_hash: Option<String>,
pub attribution: String,
}
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct ReportIdentity {
pub report_id: String,
pub content_hash: String,
}
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct CanonicalReportV1 {
pub schema_version: String,
pub identity: ReportIdentity,
pub generated_at: String,
pub producer: String,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub claims: Vec<ReportClaim>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub negative_evidence: Vec<NegativeEvidence>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub field_deltas: Vec<FieldDelta>,
}
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct ConsumerCapabilities {
pub consumer: String,
#[serde(default, skip_serializing_if = "BTreeSet::is_empty")]
pub schema_versions: BTreeSet<String>,
#[serde(default, skip_serializing_if = "BTreeSet::is_empty")]
pub field_families: BTreeSet<String>,
pub max_sensitivity: SensitivityClass,
}
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct RedactionProvenance {
pub field_path: String,
pub reason: String,
pub policy_id: String,
pub original_hash: String,
}
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct ProjectionProvenance {
pub policy_id: String,
pub source_schema_version: String,
pub source_report_id: String,
pub source_content_hash: String,
pub consumer: String,
pub downgraded: bool,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub omitted_field_families: Vec<String>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub redactions: Vec<RedactionProvenance>,
}
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct ReportProjectionV1 {
pub schema_version: String,
pub projection_id: String,
pub view: String,
pub provenance: ProjectionProvenance,
pub payload: Value,
}
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct ReportSchemaField {
pub id: String,
pub description: String,
pub required: bool,
pub field_family: String,
}
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct ReportSchemaRegistry {
pub schema_version: String,
pub compatibility: String,
pub fields: Vec<ReportSchemaField>,
}
#[must_use]
pub fn report_schema_v1_registry() -> ReportSchemaRegistry {
ReportSchemaRegistry {
schema_version: REPORT_SCHEMA_V1.to_string(),
compatibility: "additive fields are compatible; missing required fields are breaking"
.to_string(),
fields: vec![
field(
"identity.report_id",
"stable canonical report identity",
true,
"identity",
),
field(
"identity.content_hash",
"hash of canonical payload excluding identity",
true,
"identity",
),
field(
"claims[].kind",
"fact/inference/hypothesis/recommendation label",
true,
"claims",
),
field(
"claims[].confidence",
"confidence bucket for the claim",
true,
"claims",
),
field(
"claims[].evidence",
"evidence ids supporting a claim",
false,
"claims",
),
field(
"negative_evidence[]",
"searched-and-not-found findings with checked scope",
false,
"negative_evidence",
),
field(
"field_deltas[]",
"field-level changed/unchanged/cleared/carried-forward attribution",
false,
"field_deltas",
),
field(
"projection.provenance.redactions[]",
"redaction policy provenance for projected fields",
false,
"projection",
),
],
}
}
#[must_use]
pub fn canonicalize_report(mut report: CanonicalReportV1) -> CanonicalReportV1 {
report.schema_version = REPORT_SCHEMA_V1.to_string();
report.claims.sort_by(|a, b| a.id.cmp(&b.id));
report.negative_evidence.sort_by(|a, b| a.id.cmp(&b.id));
report.field_deltas.sort_by(|a, b| a.field.cmp(&b.field));
let content_hash = report_content_hash(&report);
if report.identity.report_id.is_empty() {
report.identity.report_id = format!("report-{content_hash}");
}
report.identity.content_hash = content_hash;
report
}
#[must_use]
pub fn report_content_hash(report: &CanonicalReportV1) -> String {
let mut hashable = report.clone();
hashable.identity.report_id.clear();
hashable.identity.content_hash.clear();
stable_json_hash(&serde_json::to_value(hashable).expect("report should serialize"))
}
#[must_use]
pub fn project_report(
report: &CanonicalReportV1,
capabilities: &ConsumerCapabilities,
view: impl Into<String>,
) -> ReportProjectionV1 {
let view = view.into();
let supports_schema = capabilities.schema_versions.contains(REPORT_SCHEMA_V1);
let mut omitted_field_families = Vec::new();
let mut redactions = Vec::new();
let mut payload = serde_json::Map::new();
payload.insert(
"identity".to_string(),
serde_json::to_value(&report.identity).expect("identity serializes"),
);
payload.insert(
"generated_at".to_string(),
Value::String(report.generated_at.clone()),
);
payload.insert(
"producer".to_string(),
Value::String(report.producer.clone()),
);
if supports_family(capabilities, "claims") {
let claims = report
.claims
.iter()
.enumerate()
.filter_map(|(index, claim)| redact_claim(index, claim, capabilities, &mut redactions))
.collect::<Vec<_>>();
payload.insert("claims".to_string(), Value::Array(claims));
} else {
omitted_field_families.push("claims".to_string());
}
if supports_family(capabilities, "negative_evidence") {
payload.insert(
"negative_evidence".to_string(),
serde_json::to_value(&report.negative_evidence).expect("negative evidence serializes"),
);
} else {
omitted_field_families.push("negative_evidence".to_string());
}
if supports_family(capabilities, "field_deltas") {
payload.insert(
"field_deltas".to_string(),
serde_json::to_value(&report.field_deltas).expect("field deltas serialize"),
);
} else {
omitted_field_families.push("field_deltas".to_string());
}
let downgraded =
!supports_schema || !omitted_field_families.is_empty() || !redactions.is_empty();
let provenance = ProjectionProvenance {
policy_id: DEFAULT_PROJECTION_POLICY_V1.to_string(),
source_schema_version: report.schema_version.clone(),
source_report_id: report.identity.report_id.clone(),
source_content_hash: report.identity.content_hash.clone(),
consumer: capabilities.consumer.clone(),
downgraded,
omitted_field_families,
redactions,
};
let mut projection = ReportProjectionV1 {
schema_version: REPORT_SCHEMA_V1.to_string(),
projection_id: String::new(),
view,
provenance,
payload: Value::Object(payload),
};
projection.projection_id = stable_json_hash(&serde_json::json!({
"view": projection.view,
"provenance": projection.provenance,
"payload": projection.payload,
}));
projection
}
fn field(id: &str, description: &str, required: bool, field_family: &str) -> ReportSchemaField {
ReportSchemaField {
id: id.to_string(),
description: description.to_string(),
required,
field_family: field_family.to_string(),
}
}
fn supports_family(capabilities: &ConsumerCapabilities, family: &str) -> bool {
capabilities.field_families.is_empty() || capabilities.field_families.contains(family)
}
fn redact_claim(
index: usize,
claim: &ReportClaim,
capabilities: &ConsumerCapabilities,
redactions: &mut Vec<RedactionProvenance>,
) -> Option<Value> {
if claim.sensitivity <= capabilities.max_sensitivity {
return Some(serde_json::to_value(claim).expect("claim serializes"));
}
if claim.sensitivity == SensitivityClass::Secret {
redactions.push(RedactionProvenance {
field_path: format!("claims[{index}]"),
reason: "omitted: sensitivity exceeds consumer policy".to_string(),
policy_id: DEFAULT_PROJECTION_POLICY_V1.to_string(),
original_hash: stable_json_hash(
&serde_json::to_value(claim).expect("claim serializes"),
),
});
return None;
}
let mut redacted = claim.clone();
let original_hash = stable_json_hash(&serde_json::to_value(claim).expect("claim serializes"));
redacted.text = "<redacted>".to_string();
redacted.evidence.clear();
redactions.push(RedactionProvenance {
field_path: format!("claims[{index}].text"),
reason: "transformed: sensitivity exceeds consumer policy".to_string(),
policy_id: DEFAULT_PROJECTION_POLICY_V1.to_string(),
original_hash,
});
Some(serde_json::to_value(redacted).expect("redacted claim serializes"))
}
fn stable_json_hash(value: &Value) -> String {
let normalized = normalize_json(value);
let bytes = serde_json::to_vec(&normalized).expect("normalized json should serialize");
let digest = Sha256::digest(bytes);
let mut hash = String::with_capacity(16);
for byte in &digest[..8] {
use std::fmt::Write as _;
write!(&mut hash, "{byte:02x}").expect("writing to String should not fail");
}
hash
}
fn normalize_json(value: &Value) -> Value {
match value {
Value::Array(values) => Value::Array(values.iter().map(normalize_json).collect()),
Value::Object(map) => {
let sorted = map
.iter()
.map(|(key, value)| (key.clone(), normalize_json(value)))
.collect::<BTreeMap<_, _>>();
serde_json::to_value(sorted).expect("sorted map should serialize")
}
other => other.clone(),
}
}
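`stable_json_hash` above sorts object keys through `normalize_json` before hashing, so two semantically equal JSON values always hash identically regardless of key order. A minimal std-only sketch of the same idea, with a `BTreeMap` providing the ordering and `DefaultHasher` standing in for SHA-256 (both simplifications; `stable_map_hash` is a hypothetical name, not the runtime's API):

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::BTreeMap;
use std::hash::{Hash, Hasher};

// Simplified stand-in for stable_json_hash: sort keys into a BTreeMap,
// then hash the canonical ordering. DefaultHasher replaces SHA-256 here.
fn stable_map_hash(pairs: &[(&str, &str)]) -> u64 {
    let sorted: BTreeMap<&str, &str> = pairs.iter().copied().collect();
    let mut hasher = DefaultHasher::new();
    for (key, value) in &sorted {
        key.hash(&mut hasher);
        value.hash(&mut hasher);
    }
    hasher.finish()
}

fn main() {
    // Insertion order differs; the canonical hash does not.
    let a = stable_map_hash(&[("view", "delta_brief"), ("payload", "{}")]);
    let b = stable_map_hash(&[("payload", "{}"), ("view", "delta_brief")]);
    assert_eq!(a, b);
}
```

This order-insensitivity is what lets `projection_id` and `content_hash` stay deterministic across serializations.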
#[cfg(test)]
mod tests {
use super::{
canonicalize_report, project_report, report_schema_v1_registry, CanonicalReportV1,
ClaimKind, ConsumerCapabilities, FieldDelta, FieldDeltaState, NegativeEvidence,
NegativeFindingStatus, ReportClaim, ReportConfidence, ReportIdentity, SensitivityClass,
REPORT_SCHEMA_V1,
};
fn fixture_report() -> CanonicalReportV1 {
canonicalize_report(CanonicalReportV1 {
schema_version: String::new(),
identity: ReportIdentity {
report_id: String::new(),
content_hash: String::new(),
},
generated_at: "2026-05-14T00:00:00Z".to_string(),
producer: "worker-1".to_string(),
claims: vec![
ReportClaim {
id: "claim-secret".to_string(),
kind: ClaimKind::ObservedFact,
text: "secret token appeared in logs".to_string(),
confidence: ReportConfidence::High,
evidence: vec!["log:secret".to_string()],
sensitivity: SensitivityClass::Secret,
},
ReportClaim {
id: "claim-hypothesis".to_string(),
kind: ClaimKind::Hypothesis,
text: "transport restart likely caused the retry".to_string(),
confidence: ReportConfidence::Medium,
evidence: vec!["event:transport".to_string()],
sensitivity: SensitivityClass::Internal,
},
ReportClaim {
id: "claim-fact".to_string(),
kind: ClaimKind::ObservedFact,
text: "lane finished once".to_string(),
confidence: ReportConfidence::High,
evidence: vec!["event:lane.finished".to_string()],
sensitivity: SensitivityClass::Public,
},
],
negative_evidence: vec![NegativeEvidence {
id: "neg-blocker".to_string(),
status: NegativeFindingStatus::NotObservedInCheckedScope,
checked_surfaces: vec!["lane_events".to_string(), "worker_status".to_string()],
query: "current blocker".to_string(),
window: "2026-05-14T00:00:00Z/2026-05-14T00:05:00Z".to_string(),
sensitivity: SensitivityClass::Public,
}],
field_deltas: vec![FieldDelta {
field: "blocker".to_string(),
state: FieldDeltaState::Cleared,
previous_hash: Some("prev123".to_string()),
current_hash: None,
attribution: "lane.failed reconciled to lane.finished".to_string(),
}],
})
}
fn capabilities(families: &[&str], max_sensitivity: SensitivityClass) -> ConsumerCapabilities {
ConsumerCapabilities {
consumer: "clawhip".to_string(),
schema_versions: [REPORT_SCHEMA_V1.to_string()].into_iter().collect(),
field_families: families
.iter()
.map(|family| (*family).to_string())
.collect(),
max_sensitivity,
}
}
#[test]
fn report_schema_registry_is_self_describing() {
let registry = report_schema_v1_registry();
assert_eq!(registry.schema_version, REPORT_SCHEMA_V1);
assert!(registry
.fields
.iter()
.any(|field| field.id == "claims[].kind"));
assert!(registry
.fields
.iter()
.any(|field| field.id == "negative_evidence[]"));
assert!(registry
.fields
.iter()
.any(|field| field.id == "projection.provenance.redactions[]"));
}
#[test]
fn canonical_report_labels_claims_negative_evidence_and_deltas() {
let report = fixture_report();
assert_eq!(report.schema_version, REPORT_SCHEMA_V1);
assert!(report.identity.report_id.starts_with("report-"));
assert_eq!(report.identity.content_hash.len(), 16);
assert_eq!(report.claims[0].id, "claim-fact");
assert_eq!(report.claims[1].kind, ClaimKind::Hypothesis);
assert_eq!(report.claims[1].confidence, ReportConfidence::Medium);
assert_eq!(
report.negative_evidence[0].status,
NegativeFindingStatus::NotObservedInCheckedScope
);
assert_eq!(report.field_deltas[0].state, FieldDeltaState::Cleared);
}
#[test]
fn projections_are_deterministic_and_record_redaction_provenance() {
let report = fixture_report();
let capabilities = capabilities(
&["claims", "negative_evidence", "field_deltas"],
SensitivityClass::Public,
);
let first = project_report(&report, &capabilities, "delta_brief");
let second = project_report(&report, &capabilities, "delta_brief");
assert_eq!(first, second);
assert_eq!(first.provenance.source_report_id, report.identity.report_id);
assert_eq!(
first.provenance.source_content_hash,
report.identity.content_hash
);
assert!(first.provenance.downgraded);
assert_eq!(first.provenance.redactions.len(), 2);
assert!(first
.provenance
.redactions
.iter()
.any(|redaction| redaction.field_path == "claims[1].text"));
assert!(first
.provenance
.redactions
.iter()
.any(|redaction| redaction.field_path == "claims[2]"));
}
#[test]
fn capability_negotiation_omits_unsupported_field_families() {
let report = fixture_report();
let capabilities = capabilities(&["claims"], SensitivityClass::Internal);
let projection = project_report(&report, &capabilities, "legacy_clawhip");
assert!(projection.provenance.downgraded);
assert_eq!(
projection.provenance.omitted_field_families,
vec!["negative_evidence".to_string(), "field_deltas".to_string()]
);
assert!(projection.payload.get("claims").is_some());
assert!(projection.payload.get("negative_evidence").is_none());
assert!(projection.payload.get("field_deltas").is_none());
}
}

View File

@@ -298,8 +298,7 @@ fn unshare_user_namespace_works() -> bool {
         .stdout(std::process::Stdio::null())
         .stderr(std::process::Stdio::null())
         .status()
-        .map(|s| s.success())
-        .unwrap_or(false)
+        .is_ok_and(|status| status.success())
     })
 }

View File

@@ -30,6 +30,10 @@ pub enum ContentBlock {
     Text {
         text: String,
     },
+    Thinking {
+        thinking: String,
+        signature: Option<String>,
+    },
     ToolUse {
         id: String,
         name: String,
@@ -737,6 +741,22 @@ impl ContentBlock {
                 object.insert("type".to_string(), JsonValue::String("text".to_string()));
                 object.insert("text".to_string(), JsonValue::String(text.clone()));
             }
+            Self::Thinking {
+                thinking,
+                signature,
+            } => {
+                object.insert(
+                    "type".to_string(),
+                    JsonValue::String("thinking".to_string()),
+                );
+                object.insert("thinking".to_string(), JsonValue::String(thinking.clone()));
+                if let Some(signature) = signature {
+                    object.insert(
+                        "signature".to_string(),
+                        JsonValue::String(signature.clone()),
+                    );
+                }
+            }
             Self::ToolUse { id, name, input } => {
                 object.insert(
                     "type".to_string(),
@@ -783,6 +803,13 @@
             "text" => Ok(Self::Text {
                 text: required_string(object, "text")?,
             }),
+            "thinking" => Ok(Self::Thinking {
+                thinking: required_string(object, "thinking")?,
+                signature: object
+                    .get("signature")
+                    .and_then(JsonValue::as_str)
+                    .map(String::from),
+            }),
             "tool_use" => Ok(Self::ToolUse {
                 id: required_string(object, "id")?,
                 name: required_string(object, "name")?,
@@ -1208,6 +1235,36 @@ mod tests {
         assert_eq!(restored.session_id, session.session_id);
     }
+    #[test]
+    fn persists_assistant_thinking_block_round_trip_through_jsonl() {
+        // given
+        let mut session = Session::new();
+        session
+            .push_message(ConversationMessage::assistant(vec![
+                ContentBlock::Thinking {
+                    thinking: "trace the path through session persistence".to_string(),
+                    signature: Some("sig-123".to_string()),
+                },
+            ]))
+            .expect("thinking block should append");
+        let path = temp_session_path("thinking-jsonl");
+        // when
+        session.save_to_path(&path).expect("session should save");
+        let restored = Session::load_from_path(&path).expect("session should load");
+        fs::remove_file(&path).expect("temp file should be removable");
+        // then
+        assert_eq!(restored, session);
+        assert_eq!(
+            restored.messages[0].blocks[0],
+            ContentBlock::Thinking {
+                thinking: "trace the path through session persistence".to_string(),
+                signature: Some("sig-123".to_string()),
+            }
+        );
+    }
     #[test]
     fn loads_legacy_session_json_object() {
         let path = temp_session_path("legacy");

View File

@@ -122,13 +122,37 @@ pub enum StartupFailureClassification {
     Unknown,
 }
+#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
+pub struct StartupHealthSummary {
+    /// Whether this subsystem appeared healthy at timeout.
+    pub healthy: bool,
+    /// Stable placeholder/source string until deeper transport and MCP probes are wired in.
+    pub summary: String,
+}
+impl StartupHealthSummary {
+    fn observed(name: &str, healthy: bool) -> Self {
+        let status = if healthy { "healthy" } else { "unhealthy" };
+        Self {
+            healthy,
+            summary: format!("{name}_{status}_placeholder"),
+        }
+    }
+}
 /// Evidence bundle collected when worker startup times out without clear evidence.
 #[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
 pub struct StartupEvidenceBundle {
     /// Last known worker lifecycle state before timeout
     pub last_lifecycle_state: WorkerStatus,
+    /// Timestamp of the last lifecycle state transition, unix epoch seconds
+    pub last_lifecycle_at: u64,
     /// The pane/command that was being executed
     pub pane_command: String,
+    /// Timestamp when the pane/command snapshot was observed, unix epoch seconds
+    pub pane_observed_at: u64,
+    /// Timestamp when the worker command was started, unix epoch seconds
+    pub command_started_at: u64,
     /// Timestamp when prompt was sent (if any), unix epoch seconds
     #[serde(skip_serializing_if = "Option::is_none")]
     pub prompt_sent_at: Option<u64>,
@@ -146,8 +170,12 @@ pub struct StartupEvidenceBundle {
     pub tool_permission_allow_scope: Option<ToolPermissionAllowScope>,
     /// Transport health summary (true = healthy/responsive)
     pub transport_healthy: bool,
+    /// Typed transport health placeholder for future concrete probes
+    pub transport_health: StartupHealthSummary,
     /// MCP health summary (true = all servers healthy)
     pub mcp_healthy: bool,
+    /// Typed MCP health placeholder for future concrete probes
+    pub mcp_health: StartupHealthSummary,
     /// Seconds since worker creation
     pub elapsed_seconds: u64,
 }
@@ -225,6 +253,7 @@ pub struct Worker {
     pub auto_recover_prompt_misdelivery: bool,
     pub prompt_delivery_attempts: u32,
     pub prompt_in_flight: bool,
+    pub prompt_sent_at: Option<u64>,
     pub last_prompt: Option<String>,
     pub expected_receipt: Option<WorkerTaskReceipt>,
     pub replay_prompt: Option<String>,
@@ -274,6 +303,7 @@ impl WorkerRegistry {
             auto_recover_prompt_misdelivery,
             prompt_delivery_attempts: 0,
             prompt_in_flight: false,
+            prompt_sent_at: None,
             last_prompt: None,
             expected_receipt: None,
             replay_prompt: None,
@@ -528,6 +558,7 @@ impl WorkerRegistry {
         worker.prompt_delivery_attempts += 1;
         worker.prompt_in_flight = true;
+        worker.prompt_sent_at = Some(now_secs());
         worker.last_prompt = Some(next_prompt.clone());
         worker.expected_receipt = task_receipt;
         worker.replay_prompt = None;
@@ -579,6 +610,7 @@ impl WorkerRegistry {
         worker.last_error = None;
         worker.prompt_delivery_attempts = 0;
         worker.prompt_in_flight = false;
+        worker.prompt_sent_at = None;
         push_event(
             worker,
             WorkerEventKind::Restarted,
@@ -696,12 +728,11 @@ impl WorkerRegistry {
         // Build evidence bundle
         let evidence = StartupEvidenceBundle {
             last_lifecycle_state: worker.status,
+            last_lifecycle_at: worker.updated_at,
             pane_command: pane_command.to_string(),
-            prompt_sent_at: if worker.prompt_delivery_attempts > 0 {
-                Some(worker.updated_at)
-            } else {
-                None
-            },
+            pane_observed_at: now,
+            command_started_at: worker.created_at,
+            prompt_sent_at: worker.prompt_sent_at,
             prompt_acceptance_state: worker.status == WorkerStatus::Running
                 && !worker.prompt_in_flight,
             trust_prompt_detected: worker
@@ -716,7 +747,9 @@ impl WorkerRegistry {
                 .map(|event| now.saturating_sub(event.timestamp)),
             tool_permission_allow_scope,
             transport_healthy,
+            transport_health: StartupHealthSummary::observed("transport", transport_healthy),
             mcp_healthy,
+            mcp_health: StartupHealthSummary::observed("mcp", mcp_healthy),
             elapsed_seconds: elapsed,
         };
@@ -1840,8 +1873,16 @@ mod tests {
                     "last state should be spawning"
                 );
                 assert_eq!(evidence.pane_command, "cargo test");
+                assert!(evidence.command_started_at <= evidence.pane_observed_at);
+                assert!(evidence.last_lifecycle_at <= evidence.pane_observed_at);
                 assert!(!evidence.transport_healthy);
+                assert!(!evidence.transport_health.healthy);
+                assert!(evidence
+                    .transport_health
+                    .summary
+                    .contains("transport_unhealthy"));
                 assert!(evidence.mcp_healthy);
+                assert!(evidence.mcp_health.healthy);
                 assert_eq!(*classification, StartupFailureClassification::TransportDead);
             }
             _ => panic!(
@@ -1932,11 +1973,53 @@
         }
     }
+    #[test]
+    fn startup_timeout_preserves_original_prompt_sent_timestamp() {
+        let registry = WorkerRegistry::new();
+        let worker = registry.create("/tmp/repo-prompt-timestamp", &[], true);
+        registry
+            .observe(&worker.worker_id, "Ready for input\n>")
+            .expect("ready observe should succeed");
+        let prompted = registry
+            .send_prompt(
+                &worker.worker_id,
+                Some("Run timestamp-sensitive work"),
+                None,
+            )
+            .expect("prompt send should succeed");
+        let sent_at = prompted
+            .prompt_sent_at
+            .expect("prompt send should record a prompt timestamp");
+        let timed_out = registry
+            .observe_startup_timeout(&worker.worker_id, "claw worker", true, true)
+            .expect("startup timeout observe should succeed");
+        let event = timed_out
+            .events
+            .iter()
+            .find(|e| e.kind == WorkerEventKind::StartupNoEvidence)
+            .expect("startup no evidence event should exist");
+        match event.payload.as_ref() {
+            Some(WorkerEventPayload::StartupNoEvidence { evidence, .. }) => {
+                assert_eq!(evidence.prompt_sent_at, Some(sent_at));
+                assert!(evidence.last_lifecycle_at <= evidence.pane_observed_at);
+                assert!(evidence.command_started_at <= sent_at);
+            }
+            _ => panic!("expected StartupNoEvidence payload"),
+        }
+    }
     #[test]
     fn startup_evidence_bundle_serializes_correctly() {
         let bundle = StartupEvidenceBundle {
             last_lifecycle_state: WorkerStatus::Running,
+            last_lifecycle_at: 1_234_567_889,
             pane_command: "test command".to_string(),
+            pane_observed_at: 1_234_567_891,
+            command_started_at: 1_234_567_800,
             prompt_sent_at: Some(1_234_567_890),
             prompt_acceptance_state: false,
             trust_prompt_detected: true,
@@ -1944,7 +2027,9 @@
             tool_permission_prompt_age_seconds: None,
             tool_permission_allow_scope: None,
             transport_healthy: true,
+            transport_health: StartupHealthSummary::observed("transport", true),
             mcp_healthy: false,
+            mcp_health: StartupHealthSummary::observed("mcp", false),
             elapsed_seconds: 60,
         };
@@ -1953,8 +2038,13 @@
         assert!(json.contains("\"pane_command\""));
         assert!(json.contains("\"prompt_sent_at\":1234567890"));
         assert!(json.contains("\"trust_prompt_detected\":true"));
+        assert!(json.contains("\"last_lifecycle_at\":1234567889"));
+        assert!(json.contains("\"pane_observed_at\":1234567891"));
+        assert!(json.contains("\"command_started_at\":1234567800"));
         assert!(json.contains("\"transport_healthy\":true"));
+        assert!(json.contains("\"transport_health\""));
         assert!(json.contains("\"mcp_healthy\":false"));
+        assert!(json.contains("\"mcp_health\""));
         let deserialized: StartupEvidenceBundle =
             serde_json::from_str(&json).expect("should deserialize");
@@ -1966,7 +2056,10 @@
     fn classify_startup_failure_detects_transport_dead() {
         let evidence = StartupEvidenceBundle {
             last_lifecycle_state: WorkerStatus::Spawning,
+            last_lifecycle_at: 10,
             pane_command: "test".to_string(),
+            pane_observed_at: 40,
+            command_started_at: 1,
             prompt_sent_at: None,
             prompt_acceptance_state: false,
             trust_prompt_detected: false,
@@ -1974,7 +2067,9 @@
             tool_permission_prompt_age_seconds: None,
             tool_permission_allow_scope: None,
             transport_healthy: false,
+            transport_health: StartupHealthSummary::observed("transport", false),
             mcp_healthy: true,
+            mcp_health: StartupHealthSummary::observed("mcp", true),
             elapsed_seconds: 30,
         };
@@ -1986,7 +2081,10 @@
     fn classify_startup_failure_defaults_to_unknown() {
         let evidence = StartupEvidenceBundle {
             last_lifecycle_state: WorkerStatus::Spawning,
+            last_lifecycle_at: 10,
             pane_command: "test".to_string(),
+            pane_observed_at: 40,
+            command_started_at: 1,
             prompt_sent_at: None,
             prompt_acceptance_state: false,
             trust_prompt_detected: false,
@@ -1994,7 +2092,9 @@
             tool_permission_prompt_age_seconds: None,
             tool_permission_allow_scope: None,
             transport_healthy: true,
+            transport_health: StartupHealthSummary::observed("transport", true),
             mcp_healthy: true,
+            mcp_health: StartupHealthSummary::observed("mcp", true),
             elapsed_seconds: 10,
         };
@@ -2002,13 +2102,44 @@
         assert_eq!(classification, StartupFailureClassification::Unknown);
     }
+    #[test]
+    fn classify_startup_failure_detects_prompt_misdelivery_after_timeout() {
+        let evidence = StartupEvidenceBundle {
+            last_lifecycle_state: WorkerStatus::ReadyForPrompt,
+            last_lifecycle_at: 10,
+            pane_command: "test".to_string(),
+            pane_observed_at: 45,
+            command_started_at: 1,
+            prompt_sent_at: Some(10),
+            prompt_acceptance_state: false,
+            trust_prompt_detected: false,
+            tool_permission_prompt_detected: false,
+            tool_permission_prompt_age_seconds: None,
+            tool_permission_allow_scope: None,
+            transport_healthy: true,
+            transport_health: StartupHealthSummary::observed("transport", true),
+            mcp_healthy: true,
+            mcp_health: StartupHealthSummary::observed("mcp", true),
+            elapsed_seconds: 31,
+        };
+        let classification = classify_startup_failure(&evidence);
+        assert_eq!(
+            classification,
+            StartupFailureClassification::PromptMisdelivery
+        );
+    }
     #[test]
     fn classify_startup_failure_detects_worker_crashed() {
         // Worker crashed scenario: transport healthy but MCP unhealthy
         // Don't have prompt in flight (no prompt_sent_at) to avoid matching PromptAcceptanceTimeout
         let evidence = StartupEvidenceBundle {
             last_lifecycle_state: WorkerStatus::Spawning,
+            last_lifecycle_at: 10,
             pane_command: "test".to_string(),
+            pane_observed_at: 40,
+            command_started_at: 1,
             prompt_sent_at: None, // No prompt sent yet
             prompt_acceptance_state: false,
             trust_prompt_detected: false,
@@ -2016,7 +2147,9 @@
             tool_permission_prompt_age_seconds: None,
             tool_permission_allow_scope: None,
             transport_healthy: true,
-            mcp_healthy: false, // MCP unhealthy but transport healthy suggests crash
+            transport_health: StartupHealthSummary::observed("transport", true),
+            mcp_healthy: false,
+            mcp_health: StartupHealthSummary::observed("mcp", false), // MCP unhealthy but transport healthy suggests crash
             elapsed_seconds: 45,
         };
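The changes above add a `PromptMisdelivery` classification keyed off the preserved `prompt_sent_at` timestamp. A hypothetical std-only sketch of that kind of precedence ordering, checking the strongest signal first (the field names mirror the diff, but this ordering is an assumption, not the crate's exact `classify_startup_failure` logic):

```rust
#[derive(Debug, PartialEq)]
enum Classification {
    TransportDead,
    PromptMisdelivery,
    WorkerCrashed,
    Unknown,
}

// Hypothetical simplification: evaluate signals from strongest to weakest,
// falling through to Unknown when no signal is decisive.
fn classify(
    transport_healthy: bool,
    mcp_healthy: bool,
    prompt_sent_at: Option<u64>,
    prompt_accepted: bool,
) -> Classification {
    if !transport_healthy {
        Classification::TransportDead
    } else if prompt_sent_at.is_some() && !prompt_accepted {
        Classification::PromptMisdelivery
    } else if !mcp_healthy {
        Classification::WorkerCrashed
    } else {
        Classification::Unknown
    }
}

fn main() {
    assert_eq!(classify(false, true, None, false), Classification::TransportDead);
    assert_eq!(classify(true, true, Some(10), false), Classification::PromptMisdelivery);
    assert_eq!(classify(true, false, None, false), Classification::WorkerCrashed);
    assert_eq!(classify(true, true, None, true), Classification::Unknown);
}
```

Ordering matters here: a dead transport must win over a pending prompt, which is why the tests above avoid setting `prompt_sent_at` when exercising the crash branch.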

View File

@@ -0,0 +1,81 @@
{
"schemaVersion": "g004.contract.bundle.v1",
"laneEvents": [
{
"event": "lane.started",
"status": "running",
"emittedAt": "2026-05-14T00:00:00Z",
"metadata": {
"seq": 1,
"provenance": "live_lane",
"emitterIdentity": "worker-1",
"environmentLabel": "team-g004"
}
},
{
"event": "lane.finished",
"status": "completed",
"emittedAt": "2026-05-14T00:00:10Z",
"metadata": {
"seq": 2,
"provenance": "live_lane",
"emitterIdentity": "worker-1",
"environmentLabel": "team-g004",
"eventFingerprint": "terminal-fp-001"
}
}
],
"reports": [
{
"schemaVersion": "g004.report.v1",
"reportId": "report-g004-fixture",
"identity": { "contentHash": "sha256:report-content" },
"projection": { "provenance": "runtime.event_projection.v1" },
"redaction": { "provenance": "runtime.redaction_policy.v1" },
"consumerCapabilities": ["facts", "field_deltas", "redaction_provenance"],
"findings": [
{
"kind": "fact",
"confidence": "high",
"statement": "lane event reached terminal state"
},
{
"kind": "hypothesis",
"confidence": "medium",
"statement": "consumer can reconcile the terminal fingerprint"
},
{
"kind": "negative_evidence",
"confidence": "high",
"statement": "no duplicate terminal event appears in this fixture"
}
],
"fieldDeltas": [
{
"field": "/laneEvents/1/status",
"previousHash": "sha256:running",
"currentHash": "sha256:completed",
"attribution": "worker-1 terminal reconciliation"
}
]
}
],
"approvalTokens": [
{
"tokenId": "approval-token-fixture",
"owner": "leader-fixed",
"scope": "g004.contract.bundle.fixture",
"issuedAt": "2026-05-14T00:00:01Z",
"oneTimeUse": true,
"replayPreventionNonce": "nonce-fixture-001",
"delegationChain": [
{
"from": "leader-fixed",
"to": "worker-3",
"action": "validate-g004-contract-fixture",
"at": "2026-05-14T00:00:02Z"
}
]
}
]
}
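The fixture's `oneTimeUse` and `replayPreventionNonce` fields imply consume-once semantics for approval tokens. A std-only sketch of nonce-based replay prevention (the `TokenLedger` type and `redeem` method are illustrative names, not the runtime's API):

```rust
use std::collections::HashSet;

// Illustrative one-time-use token ledger: each nonce can be redeemed once.
struct TokenLedger {
    spent: HashSet<String>,
}

impl TokenLedger {
    fn new() -> Self {
        Self { spent: HashSet::new() }
    }

    // HashSet::insert returns true only on first insertion, so a second
    // redemption of the same nonce is rejected as a replay.
    fn redeem(&mut self, nonce: &str) -> bool {
        self.spent.insert(nonce.to_string())
    }
}

fn main() {
    let mut ledger = TokenLedger::new();
    assert!(ledger.redeem("nonce-fixture-001"));
    assert!(!ledger.redeem("nonce-fixture-001")); // replay rejected
}
```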

View File

@@ -0,0 +1,11 @@
# Report schema v1 fixture set
Validated by `cargo test -p runtime report_schema -- --nocapture`.
The in-code fixture in `runtime::report_schema::tests::fixture_report` covers:
- fact / hypothesis / confidence labels
- negative evidence with checked surfaces and query window
- field-level delta attribution
- canonical report id plus content hash
- deterministic projection/redaction provenance
- consumer capability negotiation and downgraded projections

View File

@@ -0,0 +1,80 @@
use runtime::g004_conformance::{is_g004_contract_bundle_valid, validate_g004_contract_bundle};
use serde_json::{json, Value};
fn valid_bundle() -> Value {
serde_json::from_str(include_str!("fixtures/g004_contract_bundle.valid.json"))
.expect("valid fixture JSON should parse")
}
#[test]
fn valid_g004_contract_bundle_fixture_passes_conformance() {
let fixture = valid_bundle();
let errors = validate_g004_contract_bundle(&fixture);
assert!(
errors.is_empty(),
"unexpected conformance errors: {errors:?}"
);
assert!(is_g004_contract_bundle_valid(&fixture));
}
#[test]
fn g004_conformance_reports_machine_readable_paths_for_contract_gaps() {
let invalid = json!({
"schemaVersion": "g004.contract.bundle.v1",
"laneEvents": [
{
"event": "lane.finished",
"status": "completed",
"emittedAt": "2026-05-14T00:00:10Z",
"metadata": {
"seq": 1,
"provenance": "live_lane",
"emitterIdentity": "worker-1",
"environmentLabel": "team-g004"
}
}
],
"reports": [
{
"schemaVersion": "g004.report.v1",
"reportId": "report-with-gaps",
"identity": { "contentHash": "sha256:report-content" },
"projection": { "provenance": "runtime.event_projection.v1" },
"redaction": { "provenance": "runtime.redaction_policy.v1" },
"consumerCapabilities": [],
"findings": [
{
"kind": "guess",
"confidence": "certain",
"statement": "bad labels should be rejected"
}
],
"fieldDeltas": []
}
],
"approvalTokens": [
{
"tokenId": "approval-token-fixture",
"owner": "leader-fixed",
"scope": "g004.contract.bundle.fixture",
"issuedAt": "2026-05-14T00:00:01Z",
"oneTimeUse": false,
"replayPreventionNonce": "nonce-fixture-001",
"delegationChain": []
}
]
});
let errors = validate_g004_contract_bundle(&invalid);
let paths: Vec<&str> = errors.iter().map(|error| error.path.as_str()).collect();
assert!(paths.contains(&"/laneEvents/0/metadata/eventFingerprint"));
assert!(paths.contains(&"/reports/0/consumerCapabilities"));
assert!(paths.contains(&"/reports/0/findings/0/kind"));
assert!(paths.contains(&"/reports/0/findings/0/confidence"));
assert!(paths.contains(&"/reports/0/fieldDeltas"));
assert!(paths.contains(&"/approvalTokens/0/oneTimeUse"));
assert!(paths.contains(&"/approvalTokens/0/delegationChain"));
}
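The conformance test above asserts JSON-Pointer-style paths on each validation error rather than matching message text. A hypothetical sketch of carrying machine-readable paths in validation errors (`ContractError` and `require_nonempty` are illustrative names, not the `runtime::g004_conformance` API):

```rust
// Hypothetical shape for machine-readable conformance errors: each error
// carries a JSON-Pointer path so consumers can locate the offending field.
#[derive(Debug)]
struct ContractError {
    path: String,
    message: String,
}

fn require_nonempty(path: &str, values: &[&str], errors: &mut Vec<ContractError>) {
    if values.is_empty() {
        errors.push(ContractError {
            path: path.to_string(),
            message: "must not be empty".to_string(),
        });
    }
}

fn main() {
    let mut errors = Vec::new();
    require_nonempty("/reports/0/consumerCapabilities", &[], &mut errors);
    require_nonempty("/reports/0/findings", &["fact"], &mut errors);
    // Only the empty collection produces an error, addressed by path.
    assert_eq!(errors.len(), 1);
    assert_eq!(errors[0].path, "/reports/0/consumerCapabilities");
}
```

Asserting on paths instead of prose keeps the test stable when error wording changes.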

View File

@@ -22,7 +22,7 @@ fn stale_branch_detection_flows_into_policy_engine() {
     let stale_context = LaneContext::new(
         "stale-lane",
         0,
-        Duration::from_secs(2 * 60 * 60), // 2 hours stale
+        Duration::from_hours(2), // 2 hours stale
         LaneBlocker::None,
         ReviewStatus::Pending,
         DiffScope::Full,
@@ -49,7 +49,7 @@ fn fresh_branch_does_not_trigger_stale_policy() {
     let fresh_context = LaneContext::new(
         "fresh-lane",
         0,
-        Duration::from_secs(30 * 60), // 30 min stale — under 1 hour threshold
+        Duration::from_mins(30), // 30 min stale — under 1 hour threshold
         LaneBlocker::None,
         ReviewStatus::Pending,
         DiffScope::Full,
@@ -212,8 +212,8 @@ fn end_to_end_stale_lane_gets_merge_forward_action() {
     // when: build context and evaluate policy
     let context = LaneContext::new(
         "lane-9411",
         3, // Workspace green
-        Duration::from_secs(5 * 60 * 60), // 5 hours stale, definitely over threshold
+        Duration::from_hours(5), // 5 hours stale, definitely over threshold
         LaneBlocker::None,
         ReviewStatus::Approved,
         DiffScope::Scoped,
@@ -261,8 +261,8 @@ fn end_to_end_stale_lane_gets_merge_forward_action() {
 fn fresh_approved_lane_gets_merge_action() {
     let context = LaneContext::new(
         "fresh-approved-lane",
         3, // Workspace green
-        Duration::from_secs(30 * 60), // 30 min — under 1 hour threshold = fresh
+        Duration::from_mins(30), // 30 min — under 1 hour threshold = fresh
         LaneBlocker::None,
         ReviewStatus::Approved,
         DiffScope::Scoped,
@@ -347,7 +347,7 @@ fn worker_provider_failure_flows_through_recovery_to_policy() {
     // (Simulating the policy check that would happen after successful recovery)
     let recovery_success = matches!(result, RecoveryResult::Recovered { .. });
     let green_level = 3; // Workspace green
-    let not_stale = Duration::from_secs(30 * 60); // 30 min — fresh
+    let not_stale = Duration::from_mins(30); // 30 min — fresh
     let post_recovery_context = LaneContext::new(
         "recovered-lane",

View File

@@ -24,10 +24,11 @@ use std::thread::{self, JoinHandle};
 use std::time::{Duration, Instant, UNIX_EPOCH};
 use api::{
-    detect_provider_kind, resolve_startup_auth_source, AnthropicClient, AuthSource,
-    ContentBlockDelta, InputContentBlock, InputMessage, MessageRequest, MessageResponse,
-    OutputContentBlock, PromptCache, ProviderClient as ApiProviderClient, ProviderKind,
-    StreamEvent as ApiStreamEvent, ToolChoice, ToolDefinition, ToolResultContentBlock,
+    detect_provider_kind, model_family_identity_for, resolve_startup_auth_source, AnthropicClient,
+    AuthSource, ContentBlockDelta, InputContentBlock, InputMessage, MessageRequest,
+    MessageResponse, OutputContentBlock, PromptCache, ProviderClient as ApiProviderClient,
+    ProviderKind, StreamEvent as ApiStreamEvent, ToolChoice, ToolDefinition,
+    ToolResultContentBlock,
 };
 use commands::{
@@ -357,8 +358,9 @@ fn run() -> Result<(), Box<dyn std::error::Error>> {
         CliAction::PrintSystemPrompt {
             cwd,
             date,
+            model,
             output_format,
-        } => print_system_prompt(cwd, date, output_format)?,
+        } => print_system_prompt(cwd, date, &model, output_format)?,
         CliAction::Version { output_format } => print_version(output_format)?,
         CliAction::ResumeSession {
             session_path,
@@ -498,6 +500,7 @@ enum CliAction {
     PrintSystemPrompt {
         cwd: PathBuf,
         date: String,
+        model: String,
         output_format: CliOutputFormat,
     },
     Version {
@@ -960,7 +963,7 @@ fn parse_args(args: &[String]) -> Result<CliAction, String> {
             }),
         }
     }
-    "system-prompt" => parse_system_prompt_args(&rest[1..], output_format),
+    "system-prompt" => parse_system_prompt_args(&rest[1..], model, output_format),
     "acp" => parse_acp_args(&rest[1..], output_format),
     "login" | "logout" => Err(removed_auth_surface_error(rest[0].as_str())),
     "init" => Ok(CliAction::Init { output_format }),
@@ -1638,6 +1641,7 @@ fn filter_tool_specs(
 fn parse_system_prompt_args(
     args: &[String],
+    model: String,
     output_format: CliOutputFormat,
 ) -> Result<CliAction, String> {
     let mut cwd = env::current_dir().map_err(|error| error.to_string())?;
@@ -1674,6 +1678,7 @@ fn parse_system_prompt_args(
     Ok(CliAction::PrintSystemPrompt {
         cwd,
         date,
+        model,
         output_format,
     })
 }
@@ -1967,8 +1972,16 @@ fn render_doctor_report() -> Result<DoctorReport, Box<dyn std::error::Error>> {
     let (project_root, git_branch) =
         parse_git_status_metadata(project_context.git_status.as_deref());
     let git_summary = parse_git_workspace_summary(project_context.git_status.as_deref());
+    let branch_freshness = BranchFreshness::from_git_status(project_context.git_status.as_deref());
     let empty_config = runtime::RuntimeConfig::empty();
     let sandbox_config = config.as_ref().ok().unwrap_or(&empty_config);
+    let boot_preflight = build_boot_preflight_snapshot(
+        &cwd,
+        project_root.as_deref(),
+        project_context.git_status.as_deref(),
+        config.as_ref().ok(),
+        config.as_ref().err().map(ToString::to_string).as_deref(),
+    );
     let context = StatusContext {
         cwd: cwd.clone(),
         session_path: None,
@@ -1981,7 +1994,9 @@ fn render_doctor_report() -> Result<DoctorReport, Box<dyn std::error::Error>> {
project_root, project_root,
git_branch, git_branch,
git_summary, git_summary,
branch_freshness,
session_lifecycle: classify_session_lifecycle_for(&cwd), session_lifecycle: classify_session_lifecycle_for(&cwd),
boot_preflight,
sandbox_status: resolve_sandbox_status(sandbox_config.sandbox(), &cwd), sandbox_status: resolve_sandbox_status(sandbox_config.sandbox(), &cwd),
// Doctor path has its own config check; StatusContext here is only // Doctor path has its own config check; StatusContext here is only
// fed into health renderers that don't read config_load_error. // fed into health renderers that don't read config_load_error.
@@ -1993,6 +2008,7 @@ fn render_doctor_report() -> Result<DoctorReport, Box<dyn std::error::Error>> {
check_config_health(&config_loader, config.as_ref()), check_config_health(&config_loader, config.as_ref()),
check_install_source_health(), check_install_source_health(),
check_workspace_health(&context), check_workspace_health(&context),
check_boot_preflight_health(&context),
check_sandbox_health(&context.sandbox_status), check_sandbox_health(&context.sandbox_status),
check_system_health(&cwd, config.as_ref().ok()), check_system_health(&cwd, config.as_ref().ok()),
], ],
@@ -2388,6 +2404,73 @@ fn check_workspace_health(context: &StatusContext) -> DiagnosticCheck {
])) ]))
} }
fn check_boot_preflight_health(context: &StatusContext) -> DiagnosticCheck {
let preflight = &context.boot_preflight;
let missing_binaries = preflight
.required_binaries
.iter()
.filter(|binary| !binary.available)
.map(|binary| binary.name)
.collect::<Vec<_>>();
let socket_details = preflight
.control_sockets
.iter()
.map(|socket| {
format!(
"Control socket {} configured={} exists={} path={}",
socket.name,
socket.configured,
socket.exists,
socket.path.as_deref().unwrap_or("<none>")
)
})
.collect::<Vec<_>>();
let mut details = vec![
format!("Repo exists {}", preflight.repo_exists),
format!("Worktree exists {}", preflight.worktree_exists),
format!("Git dir exists {}", preflight.git_dir_exists),
format!("Branch behind {}", preflight.branch_freshness.behind),
format!("Trust allowlist {:?}", preflight.trust_gate_allowed),
format!("Trusted roots {}", preflight.trusted_roots_count),
format!(
"MCP eligible {} · servers {}",
preflight.mcp_startup_eligible, preflight.mcp_servers_configured
),
format!(
"Plugin eligible {} · configured {}",
preflight.plugin_startup_eligible, preflight.plugins_configured
),
format!(
"Last failed boot {}",
preflight
.last_failed_boot_reason
.as_deref()
.unwrap_or("<none>")
),
];
details.extend(preflight.required_binaries.iter().map(|binary| {
format!(
"Required binary {} available={}",
binary.name, binary.available
)
}));
details.extend(socket_details);
DiagnosticCheck::new(
"Boot preflight",
if preflight.repo_exists && preflight.worktree_exists && missing_binaries.is_empty() {
DiagnosticLevel::Ok
} else {
DiagnosticLevel::Warn
},
preflight.summary(),
)
.with_details(details)
.with_data(Map::from_iter([(
"boot_preflight".to_string(),
preflight.json_value(),
)]))
}
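The check above decides between `Ok` and `Warn` by filtering the preflight's binary list down to the unavailable entries. That filter can be sketched standalone (struct reduced to the two fields the filter touches; names mirror the diff):

```rust
// Minimal mirror of the BinaryPreflight fields used by check_boot_preflight_health.
struct BinaryPreflight {
    name: &'static str,
    available: bool,
}

// Collect the names of unavailable binaries, as the health check does before
// choosing DiagnosticLevel::Ok vs DiagnosticLevel::Warn.
fn missing(binaries: &[BinaryPreflight]) -> Vec<&'static str> {
    binaries
        .iter()
        .filter(|binary| !binary.available)
        .map(|binary| binary.name)
        .collect()
}

fn main() {
    let bins = [
        BinaryPreflight { name: "git", available: true },
        BinaryPreflight { name: "tmux", available: false },
    ];
    assert_eq!(missing(&bins), vec!["tmux"]);
}
```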
fn check_sandbox_health(status: &runtime::SandboxStatus) -> DiagnosticCheck { fn check_sandbox_health(status: &runtime::SandboxStatus) -> DiagnosticCheck {
let degraded = status.enabled && !status.active; let degraded = status.enabled && !status.active;
let mut details = vec![ let mut details = vec![
@@ -2614,9 +2697,16 @@ fn print_bootstrap_plan(output_format: CliOutputFormat) -> Result<(), Box<dyn st
fn print_system_prompt( fn print_system_prompt(
cwd: PathBuf, cwd: PathBuf,
date: String, date: String,
model: &str,
output_format: CliOutputFormat, output_format: CliOutputFormat,
) -> Result<(), Box<dyn std::error::Error>> { ) -> Result<(), Box<dyn std::error::Error>> {
let sections = load_system_prompt(cwd, date, env::consts::OS, "unknown")?; let sections = load_system_prompt(
cwd,
date,
env::consts::OS,
"unknown",
model_family_identity_for(model),
)?;
let message = sections.join( let message = sections.join(
" "
@@ -2829,7 +2919,9 @@ struct StatusContext {
project_root: Option<PathBuf>, project_root: Option<PathBuf>,
git_branch: Option<String>, git_branch: Option<String>,
git_summary: GitWorkspaceSummary, git_summary: GitWorkspaceSummary,
branch_freshness: BranchFreshness,
session_lifecycle: SessionLifecycleSummary, session_lifecycle: SessionLifecycleSummary,
boot_preflight: BootPreflightSnapshot,
sandbox_status: runtime::SandboxStatus, sandbox_status: runtime::SandboxStatus,
/// #143: when `.claw.json` (or another loaded config file) fails to parse, /// #143: when `.claw.json` (or another loaded config file) fails to parse,
/// we capture the parse error here and still populate every field that /// we capture the parse error here and still populate every field that
@@ -2840,6 +2932,162 @@ struct StatusContext {
config_load_error: Option<String>, config_load_error: Option<String>,
} }
#[derive(Debug, Clone, PartialEq, Eq)]
struct BranchFreshness {
upstream: Option<String>,
ahead: u32,
behind: u32,
fresh: Option<bool>,
}
impl BranchFreshness {
fn from_git_status(status: Option<&str>) -> Self {
let first_line = status
.and_then(|status| status.lines().next())
.unwrap_or_default();
let upstream = first_line
.split_once("...")
.and_then(|(_, rest)| rest.split([' ', '[']).next())
.filter(|value| !value.is_empty())
.map(ToOwned::to_owned);
let mut ahead = 0;
let mut behind = 0;
if let Some((_, bracketed)) = first_line.split_once('[') {
let bracketed = bracketed.trim_end_matches(']');
for part in bracketed.split(',').map(str::trim) {
if let Some(value) = part.strip_prefix("ahead ") {
ahead = value.parse().unwrap_or(0);
} else if let Some(value) = part.strip_prefix("behind ") {
behind = value.parse().unwrap_or(0);
}
}
}
let fresh = upstream.as_ref().map(|_| behind == 0);
Self {
upstream,
ahead,
behind,
fresh,
}
}
fn json_value(&self) -> serde_json::Value {
json!({
"upstream": self.upstream,
"ahead": self.ahead,
"behind": self.behind,
"fresh": self.fresh,
})
}
}
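`BranchFreshness::from_git_status` relies on the `git status --short --branch` header format, e.g. `## main...origin/main [ahead 2, behind 3]`. The bracket parsing can be exercised in isolation; a minimal sketch mirroring the same logic as a free function:

```rust
// Parse "[ahead N, behind M]" from a porcelain v1 branch header, as in
// BranchFreshness::from_git_status. Missing counters default to 0.
fn parse_ahead_behind(header: &str) -> (u32, u32) {
    let mut ahead = 0;
    let mut behind = 0;
    if let Some((_, bracketed)) = header.split_once('[') {
        for part in bracketed.trim_end_matches(']').split(',').map(str::trim) {
            if let Some(value) = part.strip_prefix("ahead ") {
                ahead = value.parse().unwrap_or(0);
            } else if let Some(value) = part.strip_prefix("behind ") {
                behind = value.parse().unwrap_or(0);
            }
        }
    }
    (ahead, behind)
}

fn main() {
    assert_eq!(
        parse_ahead_behind("## main...origin/main [ahead 2, behind 3]"),
        (2, 3)
    );
    // No bracket means the branch is in sync (or has no upstream).
    assert_eq!(parse_ahead_behind("## main...origin/main"), (0, 0));
}
```

Note that `fresh` stays `None` when no upstream is configured, so callers can distinguish "up to date" from "nothing to compare against".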
#[derive(Debug, Clone, PartialEq, Eq)]
struct BinaryPreflight {
name: &'static str,
available: bool,
}
impl BinaryPreflight {
fn json_value(&self) -> serde_json::Value {
json!({
"name": self.name,
"available": self.available,
})
}
}
#[derive(Debug, Clone, PartialEq, Eq)]
struct ControlSocketPreflight {
name: &'static str,
configured: bool,
exists: bool,
path: Option<String>,
}
impl ControlSocketPreflight {
fn json_value(&self) -> serde_json::Value {
json!({
"name": self.name,
"configured": self.configured,
"exists": self.exists,
"path": self.path,
})
}
}
#[derive(Debug, Clone, PartialEq, Eq)]
struct BootPreflightSnapshot {
repo_exists: bool,
worktree_exists: bool,
git_dir_exists: bool,
branch_freshness: BranchFreshness,
trust_gate_allowed: Option<bool>,
trusted_roots_count: usize,
required_binaries: Vec<BinaryPreflight>,
control_sockets: Vec<ControlSocketPreflight>,
mcp_startup_eligible: bool,
mcp_servers_configured: usize,
plugin_startup_eligible: bool,
plugins_configured: usize,
last_failed_boot_reason: Option<String>,
}
impl BootPreflightSnapshot {
fn json_value(&self) -> serde_json::Value {
json!({
"repo": {
"exists": self.repo_exists,
"worktree_exists": self.worktree_exists,
"git_dir_exists": self.git_dir_exists,
},
"branch_freshness": self.branch_freshness.json_value(),
"trust_gate": {
"allowlisted": self.trust_gate_allowed,
"trusted_roots_count": self.trusted_roots_count,
},
"required_binaries": self.required_binaries.iter().map(BinaryPreflight::json_value).collect::<Vec<_>>(),
"control_sockets": self.control_sockets.iter().map(ControlSocketPreflight::json_value).collect::<Vec<_>>(),
"mcp_startup": {
"eligible": self.mcp_startup_eligible,
"servers_configured": self.mcp_servers_configured,
},
"plugin_startup": {
"eligible": self.plugin_startup_eligible,
"plugins_configured": self.plugins_configured,
},
"last_failed_boot_reason": self.last_failed_boot_reason,
})
}
fn summary(&self) -> String {
let trust = self
.trust_gate_allowed
.map(|value| {
if value {
"allowlisted"
} else {
"not allowlisted"
}
})
.unwrap_or("unknown");
let freshness = self
.branch_freshness
.fresh
.map(|fresh| if fresh { "fresh" } else { "behind" })
.unwrap_or("no upstream");
format!(
"repo={} worktree={} branch={} trust={} mcp={} plugins={} last_failed={}",
self.repo_exists,
self.worktree_exists,
freshness,
trust,
self.mcp_startup_eligible,
self.plugin_startup_eligible,
self.last_failed_boot_reason.as_deref().unwrap_or("none")
)
}
}
#[derive(Debug, Clone, Copy)] #[derive(Debug, Clone, Copy)]
struct StatusUsage { struct StatusUsage {
message_count: usize, message_count: usize,
@@ -3282,6 +3530,123 @@ fn parse_git_workspace_summary(status: Option<&str>) -> GitWorkspaceSummary {
summary summary
} }
fn build_boot_preflight_snapshot(
cwd: &Path,
project_root: Option<&Path>,
git_status: Option<&str>,
runtime_config: Option<&runtime::RuntimeConfig>,
config_load_error: Option<&str>,
) -> BootPreflightSnapshot {
let branch_freshness = BranchFreshness::from_git_status(git_status);
let worktree_exists = run_git_bool(cwd, &["rev-parse", "--is-inside-work-tree"]);
let git_dir_exists = run_git_capture_in(cwd, &["rev-parse", "--git-dir"])
.map(|path| {
let path = PathBuf::from(path.trim());
if path.is_absolute() {
path
} else {
cwd.join(path)
}
})
.is_some_and(|path| path.exists());
let trusted_roots = runtime_config
.map(runtime::RuntimeConfig::trusted_roots)
.unwrap_or(&[]);
let trust_gate_allowed = runtime_config.map(|_| {
trusted_roots
.iter()
.any(|root| path_matches_trusted_root_local(cwd, root))
});
let plugin_configured = runtime_config
.map(|config| config.plugins().enabled_plugins().len())
.unwrap_or_default();
let mcp_configured = runtime_config
.map(|config| config.mcp().servers().len())
.unwrap_or_default();
let config_ok = config_load_error.is_none();
BootPreflightSnapshot {
repo_exists: project_root.is_some_and(Path::exists),
worktree_exists,
git_dir_exists,
branch_freshness,
trust_gate_allowed,
trusted_roots_count: trusted_roots.len(),
required_binaries: vec![
BinaryPreflight {
name: "claw",
available: env::current_exe().is_ok_and(|path| path.exists()),
},
BinaryPreflight {
name: "git",
available: command_available("git"),
},
BinaryPreflight {
name: "tmux",
available: command_available("tmux"),
},
],
control_sockets: vec![tmux_control_socket_preflight()],
mcp_startup_eligible: config_ok,
mcp_servers_configured: mcp_configured,
plugin_startup_eligible: config_ok,
plugins_configured: plugin_configured,
last_failed_boot_reason: last_failed_boot_reason(cwd),
}
}
fn run_git_bool(cwd: &Path, args: &[&str]) -> bool {
Command::new("git")
.args(args)
.current_dir(cwd)
.output()
.is_ok_and(|output| output.status.success())
}
fn command_available(command: &str) -> bool {
Command::new(command)
.arg("--version")
.output()
.is_ok_and(|output| output.status.success())
}
fn tmux_control_socket_preflight() -> ControlSocketPreflight {
let path = env::var("TMUX")
.ok()
.and_then(|value| value.split(',').next().map(str::to_string))
.filter(|value| !value.is_empty());
let exists = path.as_ref().is_some_and(|path| Path::new(path).exists());
ControlSocketPreflight {
name: "tmux",
configured: path.is_some(),
exists,
path,
}
}
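The `$TMUX` variable has the form `<socket-path>,<server-pid>,<session-id>`; the preflight only needs the first comma-separated field. A standalone version of that extraction (hypothetical helper name, same filtering as above):

```rust
// Extract the control-socket path from a $TMUX value, treating an empty
// first field as "not configured" — mirrors tmux_control_socket_preflight.
fn tmux_socket_path(tmux_var: &str) -> Option<String> {
    tmux_var
        .split(',')
        .next()
        .map(str::to_string)
        .filter(|value| !value.is_empty())
}

fn main() {
    assert_eq!(
        tmux_socket_path("/tmp/tmux-1000/default,12345,0").as_deref(),
        Some("/tmp/tmux-1000/default")
    );
    // An empty $TMUX yields None rather than an empty path.
    assert_eq!(tmux_socket_path(""), None);
}
```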
fn last_failed_boot_reason(cwd: &Path) -> Option<String> {
env::var("CLAW_LAST_FAILED_BOOT_REASON")
.ok()
.filter(|value| !value.trim().is_empty())
.or_else(|| {
fs::read_to_string(cwd.join(".claw").join("last-failed-boot.txt"))
.ok()
.map(|value| value.trim().to_string())
.filter(|value| !value.is_empty())
})
}
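`last_failed_boot_reason` prefers the environment override, then falls back to the marker file, and treats blank values as absent at both stages. The fallback chain reduces to this pure sketch (inputs abstracted to `Option<&str>` so it needs no filesystem):

```rust
// Env-then-file fallback with whitespace-only values counting as absent,
// mirroring last_failed_boot_reason's Option chaining.
fn reason_from(env_value: Option<&str>, file_value: Option<&str>) -> Option<String> {
    env_value
        .map(str::trim)
        .filter(|value| !value.is_empty())
        .map(str::to_string)
        .or_else(|| {
            file_value
                .map(str::trim)
                .filter(|value| !value.is_empty())
                .map(str::to_string)
        })
}

fn main() {
    // Env var wins when both are set.
    assert_eq!(reason_from(Some("oom"), Some("disk")).as_deref(), Some("oom"));
    // Whitespace-only env falls through to the file.
    assert_eq!(reason_from(Some("  "), Some("disk")).as_deref(), Some("disk"));
    assert_eq!(reason_from(None, None), None);
}
```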
fn path_matches_trusted_root_local(cwd: &Path, trusted_root: &str) -> bool {
let cwd = fs::canonicalize(cwd).unwrap_or_else(|_| cwd.to_path_buf());
let trusted_root = Path::new(trusted_root);
let trusted_root = if trusted_root.is_absolute() {
trusted_root.to_path_buf()
} else {
cwd.join(trusted_root)
};
let trusted_root = fs::canonicalize(&trusted_root).unwrap_or(trusted_root);
cwd == trusted_root || cwd.starts_with(trusted_root)
}
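The final comparison in `path_matches_trusted_root_local` is safe against sibling-directory confusion because `Path::starts_with` matches whole path components, not string prefixes. A quick demonstration of the post-canonicalization check:

```rust
use std::path::Path;

// Component-wise prefix check, as in path_matches_trusted_root_local after
// both paths have been canonicalized.
fn inside_trusted_root(cwd: &Path, root: &Path) -> bool {
    cwd == root || cwd.starts_with(root)
}

fn main() {
    assert!(inside_trusted_root(Path::new("/srv/app/sub"), Path::new("/srv/app")));
    // "/srv/app2" shares a string prefix with "/srv/app" but is NOT inside it.
    assert!(!inside_trusted_root(Path::new("/srv/app2"), Path::new("/srv/app")));
    assert!(inside_trusted_root(Path::new("/srv/app"), Path::new("/srv/app")));
}
```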
fn resolve_git_branch_for(cwd: &Path) -> Option<String> { fn resolve_git_branch_for(cwd: &Path) -> Option<String> {
let branch = run_git_capture_in(cwd, &["branch", "--show-current"])?; let branch = run_git_capture_in(cwd, &["branch", "--show-current"])?;
let branch = branch.trim(); let branch = branch.trim();
@@ -4394,7 +4759,7 @@ impl LiveCli {
allowed_tools: Option<AllowedToolSet>, allowed_tools: Option<AllowedToolSet>,
permission_mode: PermissionMode, permission_mode: PermissionMode,
) -> Result<Self, Box<dyn std::error::Error>> { ) -> Result<Self, Box<dyn std::error::Error>> {
let system_prompt = build_system_prompt()?; let system_prompt = build_system_prompt(&model)?;
let session_state = new_cli_session()?; let session_state = new_cli_session()?;
let session = create_managed_session_handle(&session_state.session_id)?; let session = create_managed_session_handle(&session_state.session_id)?;
let runtime = build_runtime( let runtime = build_runtime(
@@ -4530,6 +4895,10 @@ impl LiveCli {
TerminalRenderer::new().color_theme(), TerminalRenderer::new().color_theme(),
&mut stdout, &mut stdout,
)?; )?;
let final_text = final_assistant_text(&summary);
if !final_text.is_empty() {
println!("{final_text}");
}
println!(); println!();
if let Some(event) = summary.auto_compaction { if let Some(event) = summary.auto_compaction {
println!( println!(
@@ -5794,6 +6163,8 @@ fn status_json_value(
path.file_stem().map(|n| n.to_string_lossy().into_owned()) path.file_stem().map(|n| n.to_string_lossy().into_owned())
}), }),
"session_lifecycle": context.session_lifecycle.json_value(), "session_lifecycle": context.session_lifecycle.json_value(),
"branch_freshness": context.branch_freshness.json_value(),
"boot_preflight": context.boot_preflight.json_value(),
"loaded_config_files": context.loaded_config_files, "loaded_config_files": context.loaded_config_files,
"discovered_config_files": context.discovered_config_files, "discovered_config_files": context.discovered_config_files,
"memory_file_count": context.memory_file_count, "memory_file_count": context.memory_file_count,
@@ -5827,7 +6198,8 @@ fn status_context(
// so that one malformed `mcpServers.*` entry doesn't take down the whole // so that one malformed `mcpServers.*` entry doesn't take down the whole
// health surface (workspace, git, model, permission, sandbox can still be // health surface (workspace, git, model, permission, sandbox can still be
// reported independently). // reported independently).
let (loaded_config_files, sandbox_status, config_load_error) = match loader.load() { let runtime_config = loader.load();
let (loaded_config_files, sandbox_status, config_load_error) = match runtime_config.as_ref() {
Ok(runtime_config) => ( Ok(runtime_config) => (
runtime_config.loaded_entries().len(), runtime_config.loaded_entries().len(),
resolve_sandbox_status(runtime_config.sandbox(), &cwd), resolve_sandbox_status(runtime_config.sandbox(), &cwd),
@@ -5848,6 +6220,14 @@ fn status_context(
let (project_root, git_branch) = let (project_root, git_branch) =
parse_git_status_metadata(project_context.git_status.as_deref()); parse_git_status_metadata(project_context.git_status.as_deref());
let git_summary = parse_git_workspace_summary(project_context.git_status.as_deref()); let git_summary = parse_git_workspace_summary(project_context.git_status.as_deref());
let branch_freshness = BranchFreshness::from_git_status(project_context.git_status.as_deref());
let boot_preflight = build_boot_preflight_snapshot(
&cwd,
project_root.as_deref(),
project_context.git_status.as_deref(),
runtime_config.as_ref().ok(),
config_load_error.as_deref(),
);
Ok(StatusContext { Ok(StatusContext {
cwd: cwd.clone(), cwd: cwd.clone(),
session_path: session_path.map(Path::to_path_buf), session_path: session_path.map(Path::to_path_buf),
@@ -5857,7 +6237,9 @@ fn status_context(
project_root, project_root,
git_branch, git_branch,
git_summary, git_summary,
branch_freshness,
session_lifecycle: classify_session_lifecycle_for(&cwd), session_lifecycle: classify_session_lifecycle_for(&cwd),
boot_preflight,
sandbox_status, sandbox_status,
config_load_error, config_load_error,
}) })
@@ -5932,6 +6314,8 @@ fn format_status_report(
Untracked {} Untracked {}
Session {} Session {}
Lifecycle {} Lifecycle {}
Branch fresh {}
Boot preflight {}
Config files loaded {}/{} Config files loaded {}/{}
Memory files {} Memory files {}
Suggested flow /status → /diff → /commit", Suggested flow /status → /diff → /commit",
@@ -5951,6 +6335,12 @@ fn format_status_report(
|path| path.display().to_string() |path| path.display().to_string()
), ),
context.session_lifecycle.signal(), context.session_lifecycle.signal(),
context
.branch_freshness
.fresh
.map(|fresh| if fresh { "yes" } else { "behind" })
.unwrap_or("no upstream"),
context.boot_preflight.summary(),
context.loaded_config_files, context.loaded_config_files,
context.discovered_config_files, context.discovered_config_files,
context.memory_file_count, context.memory_file_count,
@@ -7005,6 +7395,7 @@ fn render_export_text(session: &Session) -> String {
for block in &message.blocks { for block in &message.blocks {
match block { match block {
ContentBlock::Text { text } => lines.push(text.clone()), ContentBlock::Text { text } => lines.push(text.clone()),
ContentBlock::Thinking { .. } => {}
ContentBlock::ToolUse { id, name, input } => { ContentBlock::ToolUse { id, name, input } => {
lines.push(format!("[tool_use id={id} name={name}] {input}")); lines.push(format!("[tool_use id={id} name={name}] {input}"));
} }
@@ -7191,6 +7582,7 @@ fn render_session_markdown(session: &Session, session_id: &str, session_path: &P
lines.push(String::new()); lines.push(String::new());
} }
} }
ContentBlock::Thinking { .. } => {}
ContentBlock::ToolUse { id, name, input } => { ContentBlock::ToolUse { id, name, input } => {
lines.push(format!( lines.push(format!(
"**Tool call** `{name}` _(id `{}`)_", "**Tool call** `{name}` _(id `{}`)_",
@@ -7244,12 +7636,13 @@ fn short_tool_id(id: &str) -> String {
format!("{prefix}") format!("{prefix}")
} }
fn build_system_prompt() -> Result<Vec<String>, Box<dyn std::error::Error>> { fn build_system_prompt(model: &str) -> Result<Vec<String>, Box<dyn std::error::Error>> {
Ok(load_system_prompt( Ok(load_system_prompt(
env::current_dir()?, env::current_dir()?,
DEFAULT_DATE, DEFAULT_DATE,
env::consts::OS, env::consts::OS,
"unknown", "unknown",
model_family_identity_for(model),
)?) )?)
} }
@@ -9211,26 +9604,29 @@ fn convert_messages(messages: &[ConversationMessage]) -> Vec<InputMessage> {
let content = message let content = message
.blocks .blocks
.iter() .iter()
.map(|block| match block { .filter_map(|block| match block {
ContentBlock::Text { text } => InputContentBlock::Text { text: text.clone() }, ContentBlock::Text { text } => {
ContentBlock::ToolUse { id, name, input } => InputContentBlock::ToolUse { Some(InputContentBlock::Text { text: text.clone() })
}
ContentBlock::Thinking { .. } => None,
ContentBlock::ToolUse { id, name, input } => Some(InputContentBlock::ToolUse {
id: id.clone(), id: id.clone(),
name: name.clone(), name: name.clone(),
input: serde_json::from_str(input) input: serde_json::from_str(input)
.unwrap_or_else(|_| serde_json::json!({ "raw": input })), .unwrap_or_else(|_| serde_json::json!({ "raw": input })),
}, }),
ContentBlock::ToolResult { ContentBlock::ToolResult {
tool_use_id, tool_use_id,
output, output,
is_error, is_error,
.. ..
} => InputContentBlock::ToolResult { } => Some(InputContentBlock::ToolResult {
tool_use_id: tool_use_id.clone(), tool_use_id: tool_use_id.clone(),
content: vec![ToolResultContentBlock::Text { content: vec![ToolResultContentBlock::Text {
text: output.clone(), text: output.clone(),
}], }],
is_error: *is_error, is_error: *is_error,
}, }),
}) })
.collect::<Vec<_>>(); .collect::<Vec<_>>();
(!content.is_empty()).then(|| InputMessage { (!content.is_empty()).then(|| InputMessage {
@@ -9628,7 +10024,9 @@ mod tests {
"{rendered}" "{rendered}"
); );
assert!( assert!(
rendered.contains("Detail Input tokens exceed the configured limit of 922000 tokens."), rendered.contains(
"Detail Input tokens exceed the configured limit of 922000 tokens."
),
"{rendered}" "{rendered}"
); );
assert!(rendered.contains("Compact /compact"), "{rendered}"); assert!(rendered.contains("Compact /compact"), "{rendered}");
@@ -10264,6 +10662,7 @@ mod tests {
#[test] #[test]
fn parses_system_prompt_options() { fn parses_system_prompt_options() {
// given: system-prompt options for cwd and date
let args = vec![ let args = vec![
"system-prompt".to_string(), "system-prompt".to_string(),
"--cwd".to_string(), "--cwd".to_string(),
@@ -10271,16 +10670,43 @@ mod tests {
"--date".to_string(), "--date".to_string(),
"2026-04-01".to_string(), "2026-04-01".to_string(),
]; ];
// when: parsing the direct system-prompt command
let action = parse_args(&args).expect("args should parse");
// then: the action carries prompt options and default model
assert_eq!( assert_eq!(
parse_args(&args).expect("args should parse"), action,
CliAction::PrintSystemPrompt { CliAction::PrintSystemPrompt {
cwd: PathBuf::from("/tmp/project"), cwd: PathBuf::from("/tmp/project"),
date: "2026-04-01".to_string(), date: "2026-04-01".to_string(),
model: DEFAULT_MODEL.to_string(),
output_format: CliOutputFormat::Text, output_format: CliOutputFormat::Text,
} }
); );
} }
#[test]
fn parses_global_model_for_system_prompt() {
// given: a global OpenAI-compatible model before system-prompt
let args = vec![
"--model".to_string(),
"openai/gpt-4.1-mini".to_string(),
"system-prompt".to_string(),
];
// when: parsing the CLI arguments
let action = parse_args(&args).expect("args should parse");
// then: the system-prompt action carries the selected model
match action {
CliAction::PrintSystemPrompt { model, .. } => {
assert_eq!(model, "openai/gpt-4.1-mini");
}
other => panic!("expected PrintSystemPrompt, got {other:?}"),
}
}
#[test] #[test]
fn removed_login_and_logout_subcommands_error_helpfully() { fn removed_login_and_logout_subcommands_error_helpfully() {
let login = parse_args(&["login".to_string()]).expect_err("login should be removed"); let login = parse_args(&["login".to_string()]).expect_err("login should be removed");
@@ -12067,6 +12493,33 @@ mod tests {
assert!(report.contains("Switch models with /model <name>")); assert!(report.contains("Switch models with /model <name>"));
} }
fn test_branch_freshness() -> super::BranchFreshness {
super::BranchFreshness {
upstream: Some("origin/main".to_string()),
ahead: 0,
behind: 0,
fresh: Some(true),
}
}
fn test_boot_preflight() -> super::BootPreflightSnapshot {
super::BootPreflightSnapshot {
repo_exists: true,
worktree_exists: true,
git_dir_exists: true,
branch_freshness: test_branch_freshness(),
trust_gate_allowed: Some(false),
trusted_roots_count: 0,
required_binaries: Vec::new(),
control_sockets: Vec::new(),
mcp_startup_eligible: true,
mcp_servers_configured: 0,
plugin_startup_eligible: true,
plugins_configured: 0,
last_failed_boot_reason: None,
}
}
#[test] #[test]
fn model_switch_report_preserves_context_summary() { fn model_switch_report_preserves_context_summary() {
let report = format_model_switch_report("claude-sonnet", "claude-opus", 9); let report = format_model_switch_report("claude-sonnet", "claude-opus", 9);
@@ -12113,6 +12566,7 @@ mod tests {
untracked_files: 1, untracked_files: 1,
conflicted_files: 0, conflicted_files: 0,
}, },
branch_freshness: test_branch_freshness(),
session_lifecycle: SessionLifecycleSummary { session_lifecycle: SessionLifecycleSummary {
kind: SessionLifecycleKind::IdleShell, kind: SessionLifecycleKind::IdleShell,
pane_id: Some("%7".to_string()), pane_id: Some("%7".to_string()),
@@ -12121,6 +12575,7 @@ mod tests {
workspace_dirty: true, workspace_dirty: true,
abandoned: true, abandoned: true,
}, },
boot_preflight: test_boot_preflight(),
sandbox_status: runtime::SandboxStatus::default(), sandbox_status: runtime::SandboxStatus::default(),
config_load_error: None, config_load_error: None,
}, },
@@ -12248,6 +12703,7 @@ mod tests {
project_root: Some(PathBuf::from("/tmp/project")), project_root: Some(PathBuf::from("/tmp/project")),
git_branch: Some("feature/session-lifecycle".to_string()), git_branch: Some("feature/session-lifecycle".to_string()),
git_summary: GitWorkspaceSummary::default(), git_summary: GitWorkspaceSummary::default(),
branch_freshness: test_branch_freshness(),
session_lifecycle: SessionLifecycleSummary { session_lifecycle: SessionLifecycleSummary {
kind: SessionLifecycleKind::RunningProcess, kind: SessionLifecycleKind::RunningProcess,
pane_id: Some("%9".to_string()), pane_id: Some("%9".to_string()),
@@ -12256,6 +12712,7 @@ mod tests {
workspace_dirty: false, workspace_dirty: false,
abandoned: false, abandoned: false,
}, },
boot_preflight: test_boot_preflight(),
sandbox_status: runtime::SandboxStatus::default(), sandbox_status: runtime::SandboxStatus::default(),
config_load_error: None, config_load_error: None,
}; };
@@ -12284,6 +12741,67 @@ mod tests {
"claw" "claw"
); );
assert_eq!(value["workspace"]["session_lifecycle"]["abandoned"], false); assert_eq!(value["workspace"]["session_lifecycle"]["abandoned"], false);
assert_eq!(value["workspace"]["branch_freshness"]["fresh"], true);
assert_eq!(
value["workspace"]["boot_preflight"]["repo"]["worktree_exists"],
true
);
assert_eq!(
value["workspace"]["boot_preflight"]["mcp_startup"]["eligible"],
true
);
assert_eq!(
value["workspace"]["boot_preflight"]["last_failed_boot_reason"],
serde_json::Value::Null
);
}
#[test]
fn branch_freshness_parses_ahead_behind_status_header() {
let freshness = super::BranchFreshness::from_git_status(Some(
"## feature/boot...origin/feature/boot [ahead 2, behind 3]\n M src/main.rs",
));
assert_eq!(freshness.upstream.as_deref(), Some("origin/feature/boot"));
assert_eq!(freshness.ahead, 2);
assert_eq!(freshness.behind, 3);
assert_eq!(freshness.fresh, Some(false));
}
#[test]
fn boot_preflight_snapshot_reports_machine_readable_contract_fields() {
let _guard = env_lock();
let workspace = temp_workspace("boot-preflight-json");
fs::create_dir_all(&workspace).expect("workspace should create");
git(&["init", "--quiet"], &workspace);
git(&["config", "user.email", "tests@example.com"], &workspace);
git(&["config", "user.name", "Rusty Claude Tests"], &workspace);
fs::write(workspace.join("tracked.txt"), "hello\n").expect("write tracked");
fs::write(workspace.join(".claw.json"), r#"{"trustedRoots": ["."]}"#)
.expect("write config");
git(&["add", "tracked.txt"], &workspace);
git(&["commit", "-m", "init", "--quiet"], &workspace);
let loader = ConfigLoader::default_for(&workspace);
let config = loader.load().expect("config should load");
let status = super::run_git_capture_in(&workspace, &["status", "--short", "--branch"]);
let snapshot = super::build_boot_preflight_snapshot(
&workspace,
Some(&workspace),
status.as_deref(),
Some(&config),
None,
);
let json = snapshot.json_value();
assert_eq!(json["repo"]["exists"], true);
assert_eq!(json["repo"]["worktree_exists"], true);
assert_eq!(json["trust_gate"]["allowlisted"], true);
assert_eq!(json["mcp_startup"]["eligible"], true);
assert!(json["required_binaries"]
.as_array()
.is_some_and(|items| { items.iter().any(|item| item["name"] == "git") }));
fs::remove_dir_all(workspace).expect("cleanup temp dir");
} }
#[test] #[test]


@@ -126,6 +126,66 @@ fn compact_flag_streaming_text_only_emits_final_message_text() {
fs::remove_dir_all(&workspace).expect("workspace cleanup should succeed"); fs::remove_dir_all(&workspace).expect("workspace cleanup should succeed");
} }
#[test]
fn text_prompt_mode_prints_final_assistant_text_after_spinner() {
// given a workspace pointed at the mock Anthropic service running the
// streaming_text scenario which only emits a single assistant text block
let runtime = tokio::runtime::Runtime::new().expect("tokio runtime should build");
let server = runtime
.block_on(MockAnthropicService::spawn())
.expect("mock service should start");
let base_url = server.base_url();
let workspace = unique_temp_dir("text-prompt-mode");
let config_home = workspace.join("config-home");
let home = workspace.join("home");
fs::create_dir_all(&workspace).expect("workspace should exist");
fs::create_dir_all(&config_home).expect("config home should exist");
fs::create_dir_all(&home).expect("home should exist");
// when we invoke claw in normal text prompt mode for the streaming text scenario
let prompt = format!("{SCENARIO_PREFIX}streaming_text");
let output = run_claw(
&workspace,
&config_home,
&home,
&base_url,
&[
"--model",
"sonnet",
"--permission-mode",
"read-only",
&prompt,
],
);
// then stdout should contain the final assistant text, not just spinner output
assert!(
output.status.success(),
"text prompt run should succeed\nstdout:\n{}\n\nstderr:\n{}",
String::from_utf8_lossy(&output.stdout),
String::from_utf8_lossy(&output.stderr),
);
let stdout = String::from_utf8(output.stdout).expect("stdout should be utf8");
let plain_stdout = strip_ansi_codes(&stdout);
assert!(
plain_stdout.contains("Mock streaming says hello from the parity harness."),
"text prompt stdout should include the assistant text ({stdout:?})"
);
assert!(
plain_stdout.contains("✔ ✨ Done"),
"text prompt stdout should still include spinner completion ({stdout:?})"
);
assert!(
plain_stdout
.lines()
.any(|line| line == "Mock streaming says hello from the parity harness."),
"text prompt stdout should print the assistant text as its own line ({stdout:?})"
);
fs::remove_dir_all(&workspace).expect("workspace cleanup should succeed");
}
#[test] #[test]
fn compact_flag_with_json_output_emits_structured_json() { fn compact_flag_with_json_output_emits_structured_json() {
let runtime = tokio::runtime::Runtime::new().expect("tokio runtime should build"); let runtime = tokio::runtime::Runtime::new().expect("tokio runtime should build");
@@ -215,3 +275,21 @@ fn unique_temp_dir(label: &str) -> PathBuf {
std::process::id() std::process::id()
)) ))
} }
fn strip_ansi_codes(input: &str) -> String {
let mut output = String::with_capacity(input.len());
let mut chars = input.chars().peekable();
while let Some(ch) = chars.next() {
if ch == '\u{1b}' && matches!(chars.peek(), Some('[')) {
chars.next();
while let Some(next) = chars.next() {
if ('@'..='~').contains(&next) {
break;
}
}
continue;
}
output.push(ch);
}
output
}
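The helper above strips only CSI escapes: an ESC immediately followed by `[`, then everything up to the first final byte in `@`..`~`. A standalone sketch of the same scanning loop (the `main` harness is illustrative, not part of the diff):

```rust
// Standalone sketch of the CSI-stripping logic above: drop `ESC [ ... <final>`
// sequences (final byte in '@'..='~') and keep everything else verbatim.
fn strip_ansi_codes(input: &str) -> String {
    let mut output = String::with_capacity(input.len());
    let mut chars = input.chars().peekable();
    while let Some(ch) = chars.next() {
        if ch == '\u{1b}' && matches!(chars.peek(), Some('[')) {
            chars.next(); // consume '['
            for next in chars.by_ref() {
                if ('@'..='~').contains(&next) {
                    break; // final byte ends the sequence
                }
            }
            continue;
        }
        output.push(ch);
    }
    output
}

fn main() {
    // SGR color codes disappear; the wrapped text survives.
    assert_eq!(strip_ansi_codes("\u{1b}[32mhi\u{1b}[0m"), "hi");
    // Plain text passes through untouched.
    assert_eq!(strip_ansi_codes("plain"), "plain");
    println!("ok");
}
```

Note that a lone ESC without a following `[` is kept as-is, which matches the narrow "CSI only" scope of the helper.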


@@ -0,0 +1,138 @@
use std::fs;
use std::io::Write;
use std::path::PathBuf;
use std::process::{Command, Output, Stdio};
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::{SystemTime, UNIX_EPOCH};
static TEMP_COUNTER: AtomicU64 = AtomicU64::new(0);
#[test]
fn compact_slash_command_in_repl_does_not_start_nested_tokio_runtime() {
// given
let workspace = unique_temp_dir("compact-repl-panic");
let config_home = workspace.join("config-home");
let home = workspace.join("home");
fs::create_dir_all(&workspace).expect("workspace should exist");
fs::create_dir_all(&config_home).expect("config home should exist");
fs::create_dir_all(&home).expect("home should exist");
// when
let output = run_claw_repl(&workspace, &config_home, &home, "/compact\n/exit\n");
// then
assert!(
output.status.success(),
"compact repl run should succeed\nstdout:\n{}\n\nstderr:\n{}",
String::from_utf8_lossy(&output.stdout),
String::from_utf8_lossy(&output.stderr),
);
let stderr = String::from_utf8(output.stderr).expect("stderr should be utf8");
assert!(
!stderr.contains("Cannot start a runtime"),
"stderr must not contain nested runtime panic: {stderr:?}"
);
assert!(
!stderr.contains("panicked at"),
"stderr must not contain panic output: {stderr:?}"
);
let stdout = String::from_utf8(output.stdout).expect("stdout should be utf8");
let plain_stdout = strip_ansi_codes(&stdout);
assert!(
plain_stdout.contains("Compaction skipped")
|| plain_stdout.contains("Result skipped")
|| plain_stdout.contains("Result compacted"),
"stdout should contain compact report output ({stdout:?})"
);
fs::remove_dir_all(&workspace).expect("workspace cleanup should succeed");
}
fn run_claw_repl(
cwd: &std::path::Path,
config_home: &std::path::Path,
home: &std::path::Path,
stdin: &str,
) -> Output {
let mut command = python_pty_command(env!("CARGO_BIN_EXE_claw"));
let mut child = command
.current_dir(cwd)
.env_clear()
.env("ANTHROPIC_API_KEY", "test-compact-repl-key")
.env("CLAW_CONFIG_HOME", config_home)
.env("HOME", home)
.env("NO_COLOR", "1")
.env("PATH", "/usr/bin:/bin")
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.stderr(Stdio::piped())
.spawn()
.expect("claw should launch");
child
.stdin
.as_mut()
.expect("stdin should be piped")
.write_all(stdin.as_bytes())
.expect("stdin should write");
child.wait_with_output().expect("claw should finish")
}
fn python_pty_command(claw: &str) -> Command {
let mut command = Command::new("python3");
command.args([
"-c",
r#"
import os
import pty
import subprocess
import sys
claw = sys.argv[1]
payload = sys.stdin.buffer.read()
master, slave = pty.openpty()
child = subprocess.Popen([claw], stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
os.close(slave)
os.write(master, payload)
stdout, stderr = child.communicate(timeout=30)
os.close(master)
sys.stdout.buffer.write(stdout)
sys.stderr.buffer.write(stderr)
raise SystemExit(child.returncode)
"#,
claw,
]);
command
}
fn unique_temp_dir(label: &str) -> PathBuf {
let millis = SystemTime::now()
.duration_since(UNIX_EPOCH)
.expect("clock should be after epoch")
.as_millis();
let counter = TEMP_COUNTER.fetch_add(1, Ordering::Relaxed);
std::env::temp_dir().join(format!(
"claw-{label}-{}-{millis}-{counter}",
std::process::id()
))
}
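`unique_temp_dir` derives uniqueness from three sources: the process id, wall-clock milliseconds, and a relaxed atomic counter; the counter is what separates two calls landing in the same millisecond. A minimal sketch of the same naming scheme (`unique_name` is a hypothetical stand-in for the path-building helper):

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::{SystemTime, UNIX_EPOCH};

static TEMP_COUNTER: AtomicU64 = AtomicU64::new(0);

// Same scheme as above: pid + millis + a relaxed atomic counter.
// The counter disambiguates calls within one millisecond.
fn unique_name(label: &str) -> String {
    let millis = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("clock should be after epoch")
        .as_millis();
    let counter = TEMP_COUNTER.fetch_add(1, Ordering::Relaxed);
    format!("claw-{label}-{}-{millis}-{counter}", std::process::id())
}

fn main() {
    let first = unique_name("demo");
    let second = unique_name("demo");
    // Even back-to-back in the same millisecond, the counter differs.
    assert_ne!(first, second);
    assert!(first.starts_with("claw-demo-"));
    println!("ok");
}
```

`Ordering::Relaxed` is sufficient here because the counter only needs uniqueness, not ordering relative to other memory operations.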
fn strip_ansi_codes(input: &str) -> String {
let mut output = String::with_capacity(input.len());
let mut chars = input.chars().peekable();
while let Some(ch) = chars.next() {
if ch == '\u{1b}' && matches!(chars.peek(), Some('[')) {
chars.next();
for next in chars.by_ref() {
if ('@'..='~').contains(&next) {
break;
}
}
continue;
}
output.push(ch);
}
output
}


@@ -91,6 +91,32 @@ fn status_and_sandbox_emit_json_when_requested() {
assert!(sandbox["filesystem_mode"].as_str().is_some());
}
#[test]
fn status_json_surfaces_permission_mode_override_for_security_audit() {
let root = unique_temp_dir("status-json-permission-mode");
fs::create_dir_all(&root).expect("temp dir should exist");
let parsed = assert_json_command(
&root,
&[
"--permission-mode",
"read-only",
"--output-format",
"json",
"status",
],
);
assert_eq!(parsed["kind"], "status");
assert_eq!(parsed["permission_mode"], "read-only");
assert!(
parsed["workspace"]["cwd"].as_str().is_some(),
"status JSON should retain workspace context with permission mode"
);
fs::remove_dir_all(root).expect("cleanup temp dir");
}
#[test]
fn acp_guidance_emits_json_when_requested() {
let root = unique_temp_dir("acp-json");
@@ -284,7 +310,7 @@ fn doctor_and_resume_status_emit_json_when_requested() {
assert!(summary["failures"].as_u64().is_some());
let checks = doctor["checks"].as_array().expect("doctor checks");
assert_eq!(checks.len(), 7);
let check_names = checks
.iter()
.map(|check| {
@@ -301,6 +327,7 @@ fn doctor_and_resume_status_emit_json_when_requested() {
"config",
"install source",
"workspace",
"boot preflight",
"sandbox",
"system"
]
@@ -326,6 +353,14 @@ fn doctor_and_resume_status_emit_json_when_requested() {
assert!(workspace["cwd"].as_str().is_some());
assert!(workspace["in_git_repo"].is_boolean());
let boot_preflight = checks
.iter()
.find(|check| check["name"] == "boot preflight")
.expect("boot preflight check");
assert!(boot_preflight["boot_preflight"]["repo"]["exists"].is_boolean());
assert!(boot_preflight["boot_preflight"]["mcp_startup"]["eligible"].is_boolean());
assert!(boot_preflight["boot_preflight"]["required_binaries"].is_array());
let sandbox = checks
.iter()
.find(|check| check["name"] == "sandbox")


@@ -4,29 +4,30 @@ use std::process::Command;
use std::time::{Duration, Instant};
use api::{
max_tokens_for_model, model_family_identity_for, resolve_model_alias, ApiError,
ContentBlockDelta, InputContentBlock, InputMessage, MessageRequest, MessageResponse,
OutputContentBlock, ProviderClient, StreamEvent as ApiStreamEvent, ToolChoice, ToolDefinition,
ToolResultContentBlock,
};
use plugins::PluginTool;
use reqwest::blocking::Client;
use runtime::{
check_freshness, dedupe_superseded_commit_events, edit_file_in_workspace, execute_bash,
glob_search_in_workspace, grep_search_in_workspace, load_system_prompt,
lsp_client::LspRegistry,
mcp_tool_bridge::McpToolRegistry,
permission_enforcer::{EnforcementResult, PermissionEnforcer},
read_file_in_workspace,
summary_compression::compress_summary_text,
task_registry::TaskRegistry,
team_cron_registry::{CronRegistry, TeamRegistry},
worker_boot::{WorkerReadySnapshot, WorkerRegistry, WorkerTaskReceipt},
write_file_in_workspace, ApiClient, ApiRequest, AssistantEvent, BashCommandInput,
BashCommandOutput, BranchFreshness, ConfigLoader, ContentBlock, ConversationMessage,
ConversationRuntime, GrepSearchInput, LaneCommitProvenance, LaneEvent, LaneEventBlocker,
LaneEventName, LaneEventStatus, LaneFailureClass, McpDegradedReport, MessageRole,
PermissionMode, PermissionPolicy, PromptCacheEvent, ProviderFallbackConfig, RuntimeError,
Session, TaskPacket, ToolError, ToolExecutor,
};
use serde::{Deserialize, Serialize};
use serde_json::{json, Value};
@@ -1197,6 +1198,7 @@ pub fn execute_tool(name: &str, input: &Value) -> Result<String, String> {
execute_tool_with_enforcer(None, name, input)
}
#[allow(clippy::too_many_lines)]
fn execute_tool_with_enforcer(
enforcer: Option<&PermissionEnforcer>,
name: &str,
@@ -1211,24 +1213,34 @@ fn execute_tool_with_enforcer(
run_bash(bash_input)
}
"read_file" => {
let file_input: ReadFileInput = from_value(input)?;
let required_mode = classify_file_path_permission(&file_input.path, false);
maybe_enforce_permission_check_with_mode(enforcer, name, input, required_mode)?;
run_read_file(file_input)
}
"write_file" => {
let file_input: WriteFileInput = from_value(input)?;
let required_mode = classify_file_path_permission(&file_input.path, true);
maybe_enforce_permission_check_with_mode(enforcer, name, input, required_mode)?;
run_write_file(file_input)
}
"edit_file" => {
let file_input: EditFileInput = from_value(input)?;
let required_mode = classify_file_path_permission(&file_input.path, false);
maybe_enforce_permission_check_with_mode(enforcer, name, input, required_mode)?;
run_edit_file(file_input)
}
"glob_search" => {
let glob_input: GlobSearchInputValue = from_value(input)?;
let required_mode = classify_glob_permission(&glob_input);
maybe_enforce_permission_check_with_mode(enforcer, name, input, required_mode)?;
run_glob_search(glob_input)
}
"grep_search" => {
let grep_input: GrepSearchInput = from_value(input)?;
let required_mode = classify_grep_permission(&grep_input);
maybe_enforce_permission_check_with_mode(enforcer, name, input, required_mode)?;
run_grep_search(grep_input)
}
"WebFetch" => from_value::<WebFetchInput>(input).and_then(run_web_fetch),
"WebSearch" => from_value::<WebSearchInput>(input).and_then(run_web_search),
@@ -1297,17 +1309,6 @@ fn execute_tool_with_enforcer(
}
}
fn maybe_enforce_permission_check(
enforcer: Option<&PermissionEnforcer>,
tool_name: &str,
input: &Value,
) -> Result<(), String> {
if let Some(enforcer) = enforcer {
enforce_permission_check(enforcer, tool_name, input)?;
}
Ok(())
}
/// Enforce permission check with a dynamically classified permission mode.
/// Used for tools like bash and `PowerShell` where the required permission
/// depends on the actual command being executed.
@@ -1499,15 +1500,11 @@ fn run_task_output(input: TaskIdInput) -> Result<String, String> {
fn run_worker_create(input: WorkerCreateInput) -> Result<String, String> {
// Merge config-level trusted_roots with per-call overrides.
// Config provides the default allowlist; per-call roots add on top.
let merged_roots: Vec<String> = ConfigLoader::default_for(&input.cwd)
.load()
.ok()
.map(|config| config.trusted_roots_with_overrides(&input.trusted_roots))
.unwrap_or_else(|| input.trusted_roots.clone());
let worker = global_worker_registry().create(
&input.cwd,
&merged_roots,
@@ -1884,20 +1881,38 @@ fn classify_bash_permission(command: &str) -> PermissionMode {
fn has_dangerous_paths(command: &str) -> bool {
// Look for absolute paths
let tokens: Vec<&str> = command.split_whitespace().collect();
let cwd = std::env::current_dir()
.ok()
.map(|cwd| cwd.canonicalize().unwrap_or(cwd));
for token in tokens {
let token = token.trim_matches(|ch: char| {
matches!(
ch,
'"' | '\'' | '`' | ',' | ';' | ')' | '(' | '[' | ']' | '{' | '}'
)
});
// Skip flags/options
if token.starts_with('-') {
continue;
}
if token.contains('$') {
return true;
}
if looks_like_windows_absolute_path(token) {
return true;
}
// Check for absolute paths
if token.starts_with('/') || token.starts_with("~/") {
// Check if it's within CWD
let path =
PathBuf::from(token.replace('~', &std::env::var("HOME").unwrap_or_default()));
if let Some(cwd) = cwd.as_ref() {
let resolved = path.canonicalize().unwrap_or(path);
if !resolved.starts_with(cwd) {
return true; // Path outside workspace
}
}
@@ -1907,11 +1922,35 @@ fn has_dangerous_paths(command: &str) -> bool {
if token.contains("../..") || token.starts_with("../") && !token.starts_with("./") {
return true;
}
if let Some(cwd) = cwd.as_ref() {
if token.starts_with('.') || token.contains('/') || Path::new(token).exists() {
let candidate = if Path::new(token).is_absolute() {
PathBuf::from(token)
} else {
cwd.join(token)
};
if let Ok(canonical) = candidate.canonicalize() {
if !canonical.starts_with(cwd) {
return true;
}
}
}
}
}
false
}
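Beyond the workspace-containment checks, `has_dangerous_paths` relies on cheap token heuristics: any `$` (possible shell expansion) or a `../..` run is treated as dangerous before touching the filesystem. Those two heuristics in isolation (`token_is_dangerous` is a hypothetical name for this sketch):

```rust
// Isolated sketch of two token heuristics from has_dangerous_paths above:
// any `$` (possible shell expansion) or a `../..` run (deep upward
// traversal) marks the token as dangerous without any filesystem access.
fn token_is_dangerous(token: &str) -> bool {
    token.contains('$') || token.contains("../..")
}

fn main() {
    assert!(token_is_dangerous("$HOME/.ssh/id_rsa"));
    assert!(token_is_dangerous("../../etc/passwd"));
    assert!(!token_is_dangerous("src/main.rs"));
    println!("ok");
}
```

Treating every `$` as dangerous is deliberately conservative: it rejects some safe commands rather than guessing what the shell would expand.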
fn looks_like_windows_absolute_path(token: &str) -> bool {
let bytes = token.as_bytes();
(bytes.len() >= 3
&& bytes[0].is_ascii_alphabetic()
&& bytes[1] == b':'
&& matches!(bytes[2], b'/' | b'\\'))
|| token.starts_with(r"\\")
}
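`looks_like_windows_absolute_path` accepts drive-letter roots (`C:\` or `C:/`) and UNC prefixes (`\\server\share`); a bare `C:` without a separator is not treated as absolute. A self-contained copy with spot checks:

```rust
// Sketch of the Windows-absolute-path probe above: a drive letter followed
// by ':' and a separator, or a UNC `\\` prefix, counts as absolute.
fn looks_like_windows_absolute_path(token: &str) -> bool {
    let bytes = token.as_bytes();
    (bytes.len() >= 3
        && bytes[0].is_ascii_alphabetic()
        && bytes[1] == b':'
        && matches!(bytes[2], b'/' | b'\\'))
        || token.starts_with(r"\\")
}

fn main() {
    assert!(looks_like_windows_absolute_path(r"C:\Windows\System32"));
    assert!(looks_like_windows_absolute_path("d:/tmp"));
    assert!(looks_like_windows_absolute_path(r"\\server\share"));
    assert!(!looks_like_windows_absolute_path("./relative"));
    assert!(!looks_like_windows_absolute_path("C:")); // too short, no separator
    println!("ok");
}
```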
fn run_bash(input: BashCommandInput) -> Result<String, String> {
if let Some(output) = workspace_test_branch_preflight(&input.command) {
return serde_json::to_string_pretty(&output).map_err(|error| error.to_string());
@@ -1995,8 +2034,7 @@ fn git_ref_exists(reference: &str) -> bool {
Command::new("git")
.args(["rev-parse", "--verify", "--quiet", reference])
.output()
.is_ok_and(|output| output.status.success())
}
fn git_stdout(args: &[&str]) -> Option<String> {
@@ -2068,22 +2106,31 @@ fn branch_divergence_output(
#[allow(clippy::needless_pass_by_value)]
fn run_read_file(input: ReadFileInput) -> Result<String, String> {
let workspace = std::env::current_dir().map_err(|error| error.to_string())?;
to_pretty_json(
read_file_in_workspace(&input.path, input.offset, input.limit, &workspace)
.map_err(io_to_string)?,
)
}
#[allow(clippy::needless_pass_by_value)]
fn run_write_file(input: WriteFileInput) -> Result<String, String> {
let workspace = std::env::current_dir().map_err(|error| error.to_string())?;
to_pretty_json(
write_file_in_workspace(&input.path, &input.content, &workspace).map_err(io_to_string)?,
)
}
#[allow(clippy::needless_pass_by_value)]
fn run_edit_file(input: EditFileInput) -> Result<String, String> {
let workspace = std::env::current_dir().map_err(|error| error.to_string())?;
to_pretty_json(
edit_file_in_workspace(
&input.path,
&input.old_string,
&input.new_string,
input.replace_all.unwrap_or(false),
&workspace,
)
.map_err(io_to_string)?,
)
@@ -2091,12 +2138,17 @@ fn run_edit_file(input: EditFileInput) -> Result<String, String> {
#[allow(clippy::needless_pass_by_value)]
fn run_glob_search(input: GlobSearchInputValue) -> Result<String, String> {
let workspace = std::env::current_dir().map_err(|error| error.to_string())?;
to_pretty_json(
glob_search_in_workspace(&input.pattern, input.path.as_deref(), &workspace)
.map_err(io_to_string)?,
)
}
#[allow(clippy::needless_pass_by_value)]
fn run_grep_search(input: GrepSearchInput) -> Result<String, String> {
let workspace = std::env::current_dir().map_err(|error| error.to_string())?;
to_pretty_json(grep_search_in_workspace(&input, &workspace).map_err(io_to_string)?)
}
#[allow(clippy::needless_pass_by_value)]
@@ -2157,6 +2209,77 @@ fn run_repl(input: ReplInput) -> Result<String, String> {
to_pretty_json(execute_repl(input)?)
}
fn classify_file_path_permission(path: &str, allow_missing: bool) -> PermissionMode {
if path_within_current_workspace(path, allow_missing) {
PermissionMode::WorkspaceWrite
} else {
PermissionMode::DangerFullAccess
}
}
fn classify_glob_permission(input: &GlobSearchInputValue) -> PermissionMode {
let base_allowed = input
.path
.as_deref()
.is_none_or(|path| path_within_current_workspace(path, false));
let pattern_allowed = path_within_current_workspace(&input.pattern, true);
if base_allowed && pattern_allowed {
PermissionMode::WorkspaceWrite
} else {
PermissionMode::DangerFullAccess
}
}
fn classify_grep_permission(input: &GrepSearchInput) -> PermissionMode {
if input
.path
.as_deref()
.is_none_or(|path| path_within_current_workspace(path, false))
{
PermissionMode::WorkspaceWrite
} else {
PermissionMode::DangerFullAccess
}
}
fn path_within_current_workspace(path: &str, allow_missing: bool) -> bool {
let trimmed = path.trim_matches(|ch: char| {
matches!(
ch,
'"' | '\'' | '`' | ',' | ';' | ')' | '(' | '[' | ']' | '{' | '}'
)
});
if looks_like_windows_absolute_path(trimmed) {
return false;
}
let Ok(cwd) = std::env::current_dir() else {
return false;
};
let cwd = cwd.canonicalize().unwrap_or(cwd);
let candidate = PathBuf::from(trimmed);
let absolute = if candidate.is_absolute() {
candidate
} else {
cwd.join(candidate)
};
let resolved = if allow_missing {
absolute
.parent()
.and_then(|parent| parent.canonicalize().ok())
.map(|parent| parent.join(absolute.file_name().unwrap_or_default()))
.unwrap_or(absolute)
} else {
match absolute.canonicalize() {
Ok(path) => path,
Err(_) => absolute,
}
};
resolved.starts_with(cwd)
}
/// Classify `PowerShell` command permission based on command type and path.
/// ROADMAP #50: Read-only commands targeting CWD paths get `WorkspaceWrite`,
/// all others remain `DangerFullAccess`.
@@ -2216,12 +2339,24 @@ fn extract_powershell_path(command: &str) -> Option<String> {
/// Check if a path is within the current workspace.
fn is_within_workspace(path: &str) -> bool {
let trimmed = path.trim_matches(|ch: char| {
matches!(
ch,
'"' | '\'' | '`' | ',' | ';' | ')' | '(' | '[' | ']' | '{' | '}'
)
});
if looks_like_windows_absolute_path(trimmed) {
return false;
}
let path = PathBuf::from(trimmed);
// If path is absolute, check if it starts with CWD
if path.is_absolute() {
if let Ok(cwd) = std::env::current_dir() {
let cwd = cwd.canonicalize().unwrap_or(cwd);
let resolved = path.canonicalize().unwrap_or(path);
return resolved.starts_with(&cwd);
}
}
@@ -3075,27 +3210,33 @@ fn extract_quoted_value(input: &str) -> Option<(String, &str)> {
}
fn decode_duckduckgo_redirect(url: &str) -> Option<String> {
let decoded = html_entity_decode_url(url);
let parsed = if decoded.starts_with("http://") || decoded.starts_with("https://") {
reqwest::Url::parse(&decoded).ok()
} else if decoded.starts_with("//") {
reqwest::Url::parse(&format!("https:{decoded}")).ok()
} else if decoded.starts_with('/') {
reqwest::Url::parse(&format!("https://duckduckgo.com{decoded}")).ok()
} else {
return None;
}?;
let host = parsed.host_str().unwrap_or_default().to_ascii_lowercase();
if (host == "duckduckgo.com" || host.ends_with(".duckduckgo.com"))
&& (parsed.path() == "/l/" || parsed.path() == "/l")
{
for (key, value) in parsed.query_pairs() {
if key == "uddg" {
return Some(html_entity_decode_url(value.as_ref()));
}
}
}
if decoded.starts_with("http://") || decoded.starts_with("https://") {
Some(decoded)
} else {
Some(parsed.to_string())
}
}
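`decode_duckduckgo_redirect` unwraps DuckDuckGo `/l/` redirect links by pulling the `uddg` query parameter out of the URL. A dependency-free sketch of just that extraction step, assuming naive `?`/`&`/`=` splitting and no percent-decoding (`uddg_param` is a hypothetical name; the real code uses `reqwest::Url` with host checks):

```rust
// Hedged sketch of the redirect-unwrap idea above using only std:
// pull the `uddg` query parameter out of a duckduckgo.com/l/ URL.
// Naive splitting for illustration; no percent-decoding is performed.
fn uddg_param(url: &str) -> Option<&str> {
    let (path, query) = url.split_once('?')?;
    if !path.contains("duckduckgo.com/l") {
        return None;
    }
    query
        .split('&')
        .filter_map(|pair| pair.split_once('='))
        .find(|(key, _)| *key == "uddg")
        .map(|(_, value)| value)
}

fn main() {
    let url = "https://duckduckgo.com/l/?uddg=https%3A%2F%2Fexample.com&rut=abc";
    assert_eq!(uddg_param(url), Some("https%3A%2F%2Fexample.com"));
    // Non-DuckDuckGo hosts are left alone.
    assert_eq!(uddg_param("https://example.com/?uddg=x"), None);
    println!("ok");
}
```

The host restriction mirrors the hardened version above: without it, an arbitrary page could smuggle a `uddg` parameter and have it treated as a trusted redirect target.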
fn html_entity_decode_url(url: &str) -> String {
@@ -3510,7 +3651,7 @@ where
.filter(|name| !name.is_empty())
.unwrap_or_else(|| slugify_agent_name(&input.description));
let created_at = iso8601_now();
let system_prompt = build_agent_system_prompt(&normalized_subagent_type, &model)?;
let allowed_tools = allowed_tools_for_subagent(&normalized_subagent_type);
let output_contents = format!(
@@ -3623,13 +3764,14 @@ fn build_agent_runtime(
))
}
fn build_agent_system_prompt(subagent_type: &str, model: &str) -> Result<Vec<String>, String> {
let cwd = std::env::current_dir().map_err(|error| error.to_string())?;
let mut prompt = load_system_prompt(
cwd,
DEFAULT_AGENT_SYSTEM_DATE.to_string(),
std::env::consts::OS,
"unknown",
model_family_identity_for(model),
)
.map_err(|error| error.to_string())?;
prompt.push(format!(
@@ -4630,13 +4772,21 @@ async fn stream_with_provider(
let mut stream = client.stream_message(message_request).await?;
let mut events = Vec::new();
let mut pending_tools: BTreeMap<u32, (String, String, String)> = BTreeMap::new();
let mut pending_thinking: BTreeMap<u32, (String, Option<String>)> = BTreeMap::new();
let mut saw_stop = false;
while let Some(event) = stream.next_event().await? {
match event {
ApiStreamEvent::MessageStart(start) => {
for block in start.message.content {
push_output_block(
block,
0,
&mut events,
&mut pending_tools,
&mut pending_thinking,
true,
);
}
}
ApiStreamEvent::ContentBlockStart(start) => {
@@ -4645,6 +4795,7 @@ async fn stream_with_provider(
start.index,
&mut events,
&mut pending_tools,
&mut pending_thinking,
true,
);
}
@@ -4659,10 +4810,26 @@ async fn stream_with_provider(
input.push_str(&partial_json);
}
}
ContentBlockDelta::ThinkingDelta { thinking } => {
if let Some((pending, _)) = pending_thinking.get_mut(&delta.index) {
pending.push_str(&thinking);
}
}
ContentBlockDelta::SignatureDelta { signature } => {
if let Some((_, pending_signature)) = pending_thinking.get_mut(&delta.index) {
pending_signature
.get_or_insert_with(String::new)
.push_str(&signature);
}
}
},
ApiStreamEvent::ContentBlockStop(stop) => {
if let Some((thinking, signature)) = pending_thinking.remove(&stop.index) {
events.push(AssistantEvent::Thinking {
thinking,
signature,
});
}
if let Some((id, name, input)) = pending_tools.remove(&stop.index) {
events.push(AssistantEvent::ToolUse { id, name, input });
}
@@ -4759,6 +4926,13 @@ fn convert_messages(messages: &[ConversationMessage]) -> Vec<InputMessage> {
.iter()
.map(|block| match block {
ContentBlock::Text { text } => InputContentBlock::Text { text: text.clone() },
ContentBlock::Thinking {
thinking,
signature,
} => InputContentBlock::Thinking {
thinking: thinking.clone(),
signature: signature.clone(),
},
ContentBlock::ToolUse { id, name, input } => InputContentBlock::ToolUse {
id: id.clone(),
name: name.clone(),
@@ -4778,6 +4952,9 @@ fn convert_messages(messages: &[ConversationMessage]) -> Vec<InputMessage> {
is_error: *is_error,
},
})
.filter(
|block| !matches!(block, InputContentBlock::Text { text } if text.is_empty()),
)
.collect::<Vec<_>>();
(!content.is_empty()).then(|| InputMessage {
role: role.to_string(),
@@ -4792,6 +4969,7 @@ fn push_output_block(
block_index: u32,
events: &mut Vec<AssistantEvent>,
pending_tools: &mut BTreeMap<u32, (String, String, String)>,
pending_thinking: &mut BTreeMap<u32, (String, Option<String>)>,
streaming_tool_input: bool,
) {
match block {
@@ -4811,17 +4989,38 @@ fn push_output_block(
};
pending_tools.insert(block_index, (id, name, initial_input));
}
OutputContentBlock::Thinking {
thinking,
signature,
} => {
if streaming_tool_input {
pending_thinking.insert(block_index, (thinking, signature));
} else {
events.push(AssistantEvent::Thinking {
thinking,
signature,
});
}
}
OutputContentBlock::RedactedThinking { .. } => {}
}
}
fn response_to_events(response: MessageResponse) -> Vec<AssistantEvent> {
let mut events = Vec::new();
let mut pending_tools = BTreeMap::new();
let mut pending_thinking = BTreeMap::new();
for (index, block) in response.content.into_iter().enumerate() {
let index = u32::try_from(index).expect("response block index overflow");
push_output_block(
block,
index,
&mut events,
&mut pending_tools,
&mut pending_thinking,
false,
);
if let Some((id, name, input)) = pending_tools.remove(&index) {
events.push(AssistantEvent::ToolUse { id, name, input });
}
@@ -5924,8 +6123,7 @@ fn command_exists(command: &str) -> bool {
.arg("-lc")
.arg(format!("command -v {command} >/dev/null 2>&1"))
.status()
.is_ok_and(|status| status.success())
}
#[allow(clippy::too_many_lines)]
@@ -6134,12 +6332,13 @@ mod tests {
use std::time::Duration;
use super::{
agent_permission_policy, allowed_tools_for_subagent, build_agent_system_prompt,
classify_lane_failure, derive_agent_state, execute_agent_with_spawn, execute_tool,
extract_recovery_outcome, final_assistant_text, global_cron_registry,
maybe_commit_provenance, mvp_tool_specs, permission_mode_from_plugin,
persist_agent_terminal_state, push_output_block, run_task_packet, AgentInput, AgentJob,
GlobalToolRegistry, LaneEventName, LaneFailureClass, ProviderRuntimeClient,
SubagentToolExecutor,
};
use api::OutputContentBlock;
use runtime::ProviderFallbackConfig;
@@ -6369,6 +6568,45 @@ mod tests {
fs::remove_dir_all(&worktree).ok();
}
#[test]
fn worker_create_merges_config_trusted_roots_with_per_call_roots() {
use std::fs;
let worktree = temp_path("config-and-call-trust-worktree");
let claw_dir = worktree.join(".claw");
fs::create_dir_all(&claw_dir).expect("create .claw dir");
fs::write(
claw_dir.join("settings.json"),
r#"{"trustedRoots": ["/definitely/not/this/worktree"]}"#,
)
.expect("write settings");
let cwd = worktree.to_str().expect("valid utf-8").to_string();
let parent = worktree
.parent()
.expect("temp path has parent")
.to_str()
.expect("valid parent utf-8")
.to_string();
let created = execute_tool(
"WorkerCreate",
&json!({
"cwd": cwd,
"trusted_roots": [parent]
}),
)
.expect("WorkerCreate should succeed");
let output: serde_json::Value = serde_json::from_str(&created).expect("json");
assert_eq!(
output["trust_auto_resolve"], true,
"per-call trusted_roots must extend config defaults for this create request"
);
fs::remove_dir_all(&worktree).ok();
}
#[test]
fn worker_terminate_sets_finished_status() {
// Create a worker in running state
@@ -7148,10 +7386,103 @@ mod tests {
assert!(error.contains("relative URL without a base") || error.contains("empty host"));
}
#[test]
fn web_search_decodes_absolute_duckduckgo_redirect_urls() {
// given
let _guard = env_lock()
.lock()
.unwrap_or_else(std::sync::PoisonError::into_inner);
let server = TestServer::spawn(Arc::new(|request_line: &str| {
assert!(request_line.contains("GET /search?q=duckduckgo+redirects "));
HttpResponse::html(
200,
"OK",
r#"
<html><body>
<a rel="nofollow" class="result__a" href="https://duckduckgo.com/l/?uddg=https%3A%2F%2Fdocs.rs%2Freqwest&amp;rut=abc">Reqwest docs</a>
</body></html>
"#,
)
}));
// when
std::env::set_var(
"CLAWD_WEB_SEARCH_BASE_URL",
format!("http://{}/search", server.addr()),
);
let result = execute_tool(
"WebSearch",
&json!({
"query": "duckduckgo redirects"
}),
)
.expect("WebSearch should succeed");
std::env::remove_var("CLAWD_WEB_SEARCH_BASE_URL");
// then
let output: serde_json::Value = serde_json::from_str(&result).expect("valid json");
let results = output["results"].as_array().expect("results array");
let search_result = results
.iter()
.find(|item| item.get("content").is_some())
.expect("search result block present");
let content = search_result["content"].as_array().expect("content array");
assert_eq!(content.len(), 1);
assert_eq!(content[0]["title"], "Reqwest docs");
assert_eq!(content[0]["url"], "https://docs.rs/reqwest");
}
#[test]
fn web_search_decodes_protocol_relative_duckduckgo_redirect_urls() {
// given
let _guard = env_lock()
.lock()
.unwrap_or_else(std::sync::PoisonError::into_inner);
let server = TestServer::spawn(Arc::new(|request_line: &str| {
assert!(request_line.contains("GET /search?q=duckduckgo+protocol+relative "));
HttpResponse::html(
200,
"OK",
r#"
<html><body>
<a rel="nofollow" class="result__a" href="//duckduckgo.com/l/?uddg=https%3A%2F%2Fdocs.rs%2Ftokio&amp;rut=xyz">Tokio Docs</a>
</body></html>
"#,
)
}));
// when
std::env::set_var(
"CLAWD_WEB_SEARCH_BASE_URL",
format!("http://{}/search", server.addr()),
);
let result = execute_tool(
"WebSearch",
&json!({
"query": "duckduckgo protocol relative"
}),
)
.expect("WebSearch should succeed");
std::env::remove_var("CLAWD_WEB_SEARCH_BASE_URL");
// then
let output: serde_json::Value = serde_json::from_str(&result).expect("valid json");
let results = output["results"].as_array().expect("results array");
let search_result = results
.iter()
.find(|item| item.get("content").is_some())
.expect("search result block present");
let content = search_result["content"].as_array().expect("content array");
assert_eq!(content.len(), 1);
assert_eq!(content[0]["title"], "Tokio Docs");
assert_eq!(content[0]["url"], "https://docs.rs/tokio");
}
#[test]
fn pending_tools_preserve_multiple_streaming_tool_calls_by_index() {
let mut events = Vec::new();
let mut pending_tools = BTreeMap::new();
let mut pending_thinking = BTreeMap::new();
push_output_block(
OutputContentBlock::ToolUse { OutputContentBlock::ToolUse {
@@ -7162,6 +7493,7 @@ mod tests {
1,
&mut events,
&mut pending_tools,
&mut pending_thinking,
true,
);
push_output_block(
@@ -7173,6 +7505,7 @@ mod tests {
2,
&mut events,
&mut pending_tools,
&mut pending_thinking,
true,
);
@@ -8409,6 +8742,28 @@ mod tests {
assert!(!verification.contains("write_file"));
}
#[test]
fn subagent_system_prompt_uses_resolved_model_identity() {
// given: a temporary workspace and an OpenAI-compatible subagent model
let _guard = env_guard();
let root = temp_path("subagent-prompt-identity");
fs::create_dir_all(&root).expect("create temp workspace");
let previous = std::env::current_dir().expect("current dir");
std::env::set_current_dir(&root).expect("enter temp workspace");
// when: building the subagent system prompt
let prompt = build_agent_system_prompt("Explore", "openai/gpt-4.1-mini")
.expect("subagent system prompt should build")
.join("\n");
std::env::set_current_dir(previous).expect("restore current dir");
// then: the prompt renders a generic model family identity
assert!(prompt.contains("Model family: an AI assistant"));
assert!(!prompt.contains("Model family: Claude Opus 4.6"));
fs::remove_dir_all(root).expect("cleanup temp workspace");
}
#[derive(Debug)]
struct MockSubagentApiClient {
calls: usize,
@@ -8447,8 +8802,12 @@ mod tests {
let _guard = env_lock()
.lock()
.unwrap_or_else(std::sync::PoisonError::into_inner);
let root = temp_path("subagent-runtime");
std::fs::create_dir_all(&root).expect("create root");
let path = root.join("subagent-input.txt");
std::fs::write(&path, "hello from child").expect("write input file");
let original_dir = std::env::current_dir().expect("cwd");
std::env::set_current_dir(&root).expect("set cwd");
let mut runtime = ConversationRuntime::new(
Session::new(),
@@ -8480,7 +8839,8 @@ mod tests {
if output.contains("hello from child")
)));
std::env::set_current_dir(&original_dir).expect("restore cwd");
let _ = std::fs::remove_dir_all(root);
}
#[test] #[test]
@@ -8934,6 +9294,78 @@ mod tests {
let _ = fs::remove_dir_all(root);
}
#[test]
fn file_tools_reject_paths_outside_current_workspace() {
let _guard = env_lock()
.lock()
.unwrap_or_else(std::sync::PoisonError::into_inner);
let root = temp_path("workspace-scope");
let outside = temp_path("workspace-scope-outside");
fs::create_dir_all(&root).expect("create root");
fs::create_dir_all(&outside).expect("create outside");
fs::write(outside.join("secret.txt"), "secret\n").expect("outside fixture");
let original_dir = std::env::current_dir().expect("cwd");
std::env::set_current_dir(&root).expect("set cwd");
let read_error = execute_tool(
"read_file",
&json!({ "path": outside.join("secret.txt").display().to_string() }),
)
.expect_err("read outside workspace should fail");
assert!(read_error.contains("escapes workspace"));
let write_error = execute_tool(
"write_file",
&json!({ "path": outside.join("created.txt").display().to_string(), "content": "nope" }),
)
.expect_err("write outside workspace should fail");
assert!(write_error.contains("escapes workspace"));
assert!(!outside.join("created.txt").exists());
let glob_error = execute_tool(
"glob_search",
&json!({ "pattern": outside.join("*.txt").display().to_string() }),
)
.expect_err("absolute glob outside workspace should fail");
assert!(glob_error.contains("escapes workspace"));
let grep_error = execute_tool(
"grep_search",
&json!({ "pattern": "secret", "path": outside.display().to_string() }),
)
.expect_err("grep outside workspace should fail");
assert!(grep_error.contains("escapes workspace"));
std::env::set_current_dir(&original_dir).expect("restore cwd");
let _ = fs::remove_dir_all(root);
let _ = fs::remove_dir_all(outside);
}
#[test]
#[cfg(unix)]
fn file_tools_reject_symlink_escape_from_current_workspace() {
let _guard = env_lock()
.lock()
.unwrap_or_else(std::sync::PoisonError::into_inner);
let root = temp_path("workspace-symlink-scope");
let outside = temp_path("workspace-symlink-outside");
fs::create_dir_all(&root).expect("create root");
fs::create_dir_all(&outside).expect("create outside");
fs::write(outside.join("secret.txt"), "secret\n").expect("outside fixture");
std::os::unix::fs::symlink(outside.join("secret.txt"), root.join("link.txt"))
.expect("create symlink");
let original_dir = std::env::current_dir().expect("cwd");
std::env::set_current_dir(&root).expect("set cwd");
let error = execute_tool("read_file", &json!({ "path": "link.txt" }))
.expect_err("symlink outside workspace should fail");
assert!(error.contains("escapes workspace"));
std::env::set_current_dir(&original_dir).expect("restore cwd");
let _ = fs::remove_dir_all(root);
let _ = fs::remove_dir_all(outside);
}
#[test]
fn sleep_waits_and_reports_duration() {
let started = std::time::Instant::now();
@@ -9347,6 +9779,19 @@ printf 'pwsh:%s' "$1"
registry
}
fn workspace_write_registry() -> super::GlobalToolRegistry {
use runtime::permission_enforcer::PermissionEnforcer;
use runtime::PermissionPolicy;
let policy = mvp_tool_specs().into_iter().fold(
PermissionPolicy::new(runtime::PermissionMode::WorkspaceWrite),
|policy, spec| policy.with_tool_requirement(spec.name, spec.required_permission),
);
let mut registry = super::GlobalToolRegistry::builtin();
registry.set_enforcer(PermissionEnforcer::new(policy));
registry
}
#[test]
fn given_read_only_enforcer_when_bash_then_denied() {
let registry = read_only_registry();
@@ -9360,6 +9805,63 @@ printf 'pwsh:%s' "$1"
);
}
#[test]
fn given_workspace_write_enforcer_when_bash_uses_shell_expansion_then_denied() {
let registry = workspace_write_registry();
let err = registry
.execute("bash", &json!({ "command": "cat $HOME/.ssh/config" }))
.expect_err("shell-expanded path should require elevated permission");
assert!(
err.contains("requires 'danger-full-access'"),
"should require elevated mode: {err}"
);
}
#[test]
fn given_workspace_write_enforcer_when_bash_uses_windows_absolute_path_then_denied() {
let registry = workspace_write_registry();
let err = registry
.execute(
"bash",
&json!({ "command": r"cat C:\\Users\\alice\\.ssh\\config" }),
)
.expect_err("Windows absolute path should require elevated permission");
assert!(
err.contains("requires 'danger-full-access'"),
"should require elevated mode: {err}"
);
}
#[test]
#[cfg(unix)]
fn given_workspace_write_enforcer_when_bash_reads_symlink_escape_then_denied() {
let _guard = env_lock()
.lock()
.unwrap_or_else(std::sync::PoisonError::into_inner);
let root = temp_path("bash-symlink-scope");
let outside = temp_path("bash-symlink-outside");
fs::create_dir_all(&root).expect("create root");
fs::create_dir_all(&outside).expect("create outside");
fs::write(outside.join("secret.txt"), "secret\n").expect("outside fixture");
std::os::unix::fs::symlink(outside.join("secret.txt"), root.join("link.txt"))
.expect("create symlink");
let original_dir = std::env::current_dir().expect("cwd");
std::env::set_current_dir(&root).expect("set cwd");
let registry = workspace_write_registry();
let err = registry
.execute("bash", &json!({ "command": "cat link.txt" }))
.expect_err("symlink escape should require elevated permission");
assert!(
err.contains("requires 'danger-full-access'"),
"should require elevated mode: {err}"
);
std::env::set_current_dir(&original_dir).expect("restore cwd");
let _ = fs::remove_dir_all(root);
let _ = fs::remove_dir_all(outside);
}
#[test]
fn given_read_only_enforcer_when_write_file_then_denied() {
let registry = read_only_registry();
@@ -9399,11 +9901,14 @@ printf 'pwsh:%s' "$1"
fs::create_dir_all(&root).expect("create root");
let file = root.join("readable.txt");
fs::write(&file, "content\n").expect("write test file");
let original_dir = std::env::current_dir().expect("cwd");
std::env::set_current_dir(&root).expect("set cwd");
let registry = read_only_registry();
let result = registry.execute("read_file", &json!({ "path": file.display().to_string() }));
assert!(result.is_ok(), "read_file should be allowed: {result:?}");
std::env::set_current_dir(&original_dir).expect("restore cwd");
let _ = fs::remove_dir_all(root);
}


@@ -0,0 +1,205 @@
use runtime::{permission_enforcer::PermissionEnforcer, PermissionMode, PermissionPolicy};
use serde_json::json;
use std::fs;
use std::path::{Path, PathBuf};
use std::sync::{Mutex, OnceLock};
use tools::{mvp_tool_specs, GlobalToolRegistry};
fn env_lock() -> &'static Mutex<()> {
static LOCK: OnceLock<Mutex<()>> = OnceLock::new();
LOCK.get_or_init(|| Mutex::new(()))
}
fn temp_path(name: &str) -> PathBuf {
let unique = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.expect("time")
.as_nanos();
std::env::temp_dir().join(format!("claw-path-scope-{unique}-{name}"))
}
fn workspace_write_registry() -> GlobalToolRegistry {
let policy = mvp_tool_specs().into_iter().fold(
PermissionPolicy::new(PermissionMode::WorkspaceWrite),
|policy, spec| policy.with_tool_requirement(spec.name, spec.required_permission),
);
GlobalToolRegistry::builtin().with_enforcer(PermissionEnforcer::new(policy))
}
fn run_bash(command: &str) -> Result<String, String> {
workspace_write_registry().execute("bash", &json!({ "command": command }))
}
fn run_powershell(command: &str) -> Result<String, String> {
workspace_write_registry().execute("PowerShell", &json!({ "command": command }))
}
fn run_read_file(path: &Path) -> Result<String, String> {
workspace_write_registry().execute("read_file", &json!({ "path": path.display().to_string() }))
}
fn assert_permission_denied(result: Result<String, String>, case_name: &str) {
let err = result
.unwrap_err_or_else(|ok| panic!("{case_name} should be denied before execution, got {ok}"));
assert!(
(err.contains("requires danger-full-access permission")
|| err.contains("requires \'danger-full-access\' permission"))
|| err.contains("current mode is workspace-write")
|| err.contains("escapes workspace"),
"{case_name} should fail in permission enforcement, got: {err}"
);
}
trait UnwrapErrOrElse<T, E> {
fn unwrap_err_or_else<F: FnOnce(T) -> E>(self, op: F) -> E;
}
impl<T, E> UnwrapErrOrElse<T, E> for Result<T, E> {
fn unwrap_err_or_else<F: FnOnce(T) -> E>(self, op: F) -> E {
match self {
Ok(value) => op(value),
Err(error) => error,
}
}
}
fn with_cwd<T>(cwd: &Path, f: impl FnOnce() -> T) -> T {
let previous = std::env::current_dir().expect("current dir");
std::env::set_current_dir(cwd).expect("set cwd");
let result = f();
std::env::set_current_dir(previous).expect("restore cwd");
result
}
#[test]
fn direct_paths_allow_workspace_file_and_deny_absolute_outside_file() {
let _guard = env_lock()
.lock()
.unwrap_or_else(std::sync::PoisonError::into_inner);
let root = temp_path("direct");
fs::create_dir_all(root.join("src")).expect("create workspace");
fs::write(root.join("src/lib.rs"), "workspace\n").expect("write workspace file");
let outside = temp_path("direct-outside.txt");
fs::write(&outside, "secret\n").expect("write outside file");
with_cwd(&root, || {
let allowed = run_bash("cat src/lib.rs").expect("workspace-relative read should execute");
assert!(allowed.contains("workspace"));
assert_permission_denied(
run_bash(&format!("cat {}", outside.display())),
"absolute outside file",
);
});
let _ = fs::remove_dir_all(root);
let _ = fs::remove_file(outside);
}
#[test]
fn file_tool_direct_outside_path_is_denied_before_reading() {
let _guard = env_lock()
.lock()
.unwrap_or_else(std::sync::PoisonError::into_inner);
let root = temp_path("file-tool-direct");
fs::create_dir_all(&root).expect("create workspace");
let outside = temp_path("file-tool-secret.txt");
fs::write(&outside, "secret\n").expect("write outside file");
with_cwd(&root, || {
assert_permission_denied(run_read_file(&outside), "read_file outside workspace");
});
let _ = fs::remove_dir_all(root);
let _ = fs::remove_file(outside);
}
#[cfg(unix)]
#[test]
fn symlink_resolving_outside_workspace_is_denied_before_execution() {
let _guard = env_lock()
.lock()
.unwrap_or_else(std::sync::PoisonError::into_inner);
let root = temp_path("symlink");
fs::create_dir_all(&root).expect("create workspace");
let outside = temp_path("symlink-secret.txt");
fs::write(&outside, "secret\n").expect("write outside file");
std::os::unix::fs::symlink(&outside, root.join("secret-link")).expect("create symlink");
with_cwd(&root, || {
assert_permission_denied(run_bash("cat secret-link"), "outside symlink");
});
let _ = fs::remove_dir_all(root);
let _ = fs::remove_file(outside);
}
#[test]
fn shell_expansion_and_glob_parent_traversal_are_denied_before_execution() {
let _guard = env_lock()
.lock()
.unwrap_or_else(std::sync::PoisonError::into_inner);
let root = temp_path("expansion");
fs::create_dir_all(&root).expect("create workspace");
with_cwd(&root, || {
for (name, command) in [
("parent glob", "ls ../*"),
("PWD parent expansion", "cat $PWD/../secret.txt"),
("braced PWD parent expansion", "cat ${PWD}/../secret.txt"),
(
"command substitution parent expansion",
"cat $(pwd)/../secret.txt",
),
] {
assert_permission_denied(run_bash(command), name);
}
});
let _ = fs::remove_dir_all(root);
}
#[test]
fn nested_worktree_paths_are_allowed_but_parent_escape_is_denied() {
let _guard = env_lock()
.lock()
.unwrap_or_else(std::sync::PoisonError::into_inner);
let root = temp_path("worktree");
let worktree = root.join("main").join("linked-worktree");
fs::create_dir_all(worktree.join("src")).expect("create worktree");
fs::write(worktree.join("src/lib.rs"), "worktree\n").expect("write worktree file");
with_cwd(&worktree, || {
let allowed =
run_bash("cat src/lib.rs").expect("nested worktree-relative read should execute");
assert!(allowed.contains("worktree"));
assert_permission_denied(run_bash("cat ../../outside.txt"), "worktree parent escape");
});
let _ = fs::remove_dir_all(root);
}
#[test]
fn windows_style_absolute_paths_are_denied_before_execution() {
for (name, command) in [
(
"windows drive backslash",
r"cat C:\Users\attacker\secret.txt",
),
("windows drive slash", r"cat C:/Users/attacker/secret.txt"),
] {
assert_permission_denied(run_bash(command), name);
}
for (name, command) in [
(
"powershell windows drive backslash",
r"Get-Content -Path C:\Users\attacker\secret.txt",
),
(
"powershell windows drive slash",
r"Get-Content -Path C:/Users/attacker/secret.txt",
),
] {
assert_permission_denied(run_powershell(command), name);
}
}

scripts/cc2_board.py Executable file

@@ -0,0 +1,54 @@
#!/usr/bin/env python3
"""Canonical CC2 board command wrapper.
This script intentionally delegates to the richer G001 board generator,
validator, and Markdown renderer so all entrypoints enforce the same schema.
"""
from __future__ import annotations
import argparse
import subprocess
import sys
from pathlib import Path
def run(cmd: list[str], cwd: Path) -> int:
return subprocess.run(cmd, cwd=str(cwd)).returncode
def main(argv: list[str] | None = None) -> int:
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument("command", choices=["generate", "validate"])
parser.add_argument("--repo-root", type=Path, default=Path.cwd(), help="repository root containing ROADMAP.md")
parser.add_argument("--context-root", type=Path, default=None, help="accepted for compatibility; source .omx is auto-detected by the generator")
parser.add_argument("--board-json", default=".omx/cc2/board.json")
parser.add_argument("--board-md", default=".omx/cc2/board.md")
args = parser.parse_args(argv)
repo_root = args.repo_root.resolve()
board_json = repo_root / args.board_json
board_md = repo_root / args.board_md
generator = repo_root / "scripts" / "generate_cc2_board.py"
validator = repo_root / "scripts" / "validate_cc2_board.py"
renderer = repo_root / ".omx" / "cc2" / "render_board_md.py"
if args.command == "generate":
rc = run([sys.executable, str(generator), "--repo-root", str(repo_root), "--out-dir", str(board_json.parent)], repo_root)
if rc:
return rc
return run([sys.executable, str(renderer), str(board_json), str(board_md)], repo_root)
checks = [
[sys.executable, str(validator), "--repo-root", str(repo_root), "--board", str(board_json)],
[sys.executable, str(renderer), str(board_json), str(board_md), "--check"],
]
for cmd in checks:
rc = run(cmd, repo_root)
if rc:
return rc
print(f"CC2 board validation PASS: {board_json} and {board_md} are canonical and in sync")
return 0
if __name__ == "__main__":
raise SystemExit(main())
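The wrapper above shells out to the generator, validator, and renderer and propagates each child's exit code so the first failing check short-circuits the rest. A minimal standalone sketch of that propagation pattern (the commands here are illustrative, not the real board scripts):

```python
import subprocess
import sys
from pathlib import Path

def run(cmd: list[str], cwd: Path) -> int:
    # Return the child's exit code instead of raising, so a caller can
    # chain several checks and stop at the first nonzero status.
    return subprocess.run(cmd, cwd=str(cwd)).returncode

checks = [
    [sys.executable, "-c", "print('generate ok')"],
    [sys.executable, "-c", "print('validate ok')"],
]
for cmd in checks:
    rc = run(cmd, Path.cwd())
    if rc:
        raise SystemExit(rc)
```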

scripts/generate_cc2_board.py Executable file

@@ -0,0 +1,525 @@
#!/usr/bin/env python3
"""Generate the canonical Claw Code 2.0 execution board from frozen roadmap evidence."""
from __future__ import annotations
import argparse
import hashlib
import json
import re
import subprocess
import sys
from dataclasses import dataclass
from datetime import datetime, timezone
from pathlib import Path
from typing import Any
REQUIRED_ITEM_FIELDS = [
"id",
"title",
"source_anchor",
"source_type",
"release_bucket",
"status",
"dependencies",
"verification_required",
"deferral_rationale",
]
STATUSES = {
"context",
"active",
"open",
"done_verify",
"stale_done",
"superseded",
"deferred_with_rationale",
"rejected_not_claw",
}
RELEASE_BUCKETS = {
"alpha_blocker",
"beta_adoption",
"ga_ecosystem",
"post_2_0_research",
"rejected_not_claw",
"context",
"2.x_intake",
}
STRUCTURAL_HEADINGS = {
"Clawable Coding Harness Roadmap",
"Goal",
'Definition of "clawable"',
"Current Pain Points",
"Product Principles",
"Roadmap",
"Immediate Backlog (from current real pain)",
"Deployment Architecture Gap (filed from dogfood 2026-04-08)",
"Startup Friction Gap: No Default trusted_roots in Settings (filed 2026-04-08)",
"Observability Transport Decision (filed 2026-04-08)",
"Provider Routing: Model-Name Prefix Must Win Over Env-Var Presence (fixed 2026-04-08, `0530c50`)",
}
CATEGORY_KEYWORDS = [
("security", ["security", "sandbox", "permission", "trust", "approval-token", "denied"]),
("windows_install", ["windows", "install", "path", "release", "binary", "container"]),
("provider", ["provider", "model", "openai", "anthropic", "ollama", "llama", "vllm", "credential"]),
("sessions", ["session", "resume", "compact", "context-window", "thread"]),
("docs_license", ["docs", "readme", "usage", "license", "help", "onboarding"]),
("ide_acp", ["zed", "acp", "editor", "daemon"]),
("plugin_mcp", ["plugin", "mcp", "marketplace", "server"]),
("event_report", ["event", "report", "schema", "projection", "redaction", "clawhip", "lane"]),
("branch_recovery", ["branch", "stale", "recovery", "green", "flake"]),
("boot", ["boot", "worker", "startup", "ready", "prompt"]),
("task_policy", ["task", "policy", "claw-native", "dashboard", "lane board"]),
("ux_tui", ["tui", "statusline", "keymap", "clickable", "copy", "paste"]),
("anti_slop", ["spam", "slop", "issue hygiene", "bot"]),
]
@dataclass(frozen=True)
class RoadmapRecord:
line: int
level: int
title: str
path: str
source_type: str
ordinal: int | None = None
def sha256_prefix(path: Path, length: int = 16) -> str:
return hashlib.sha256(path.read_bytes()).hexdigest()[:length]
def slugify(text: str, limit: int = 54) -> str:
slug = re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")
return slug[:limit].strip("-") or "item"
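`slugify` above lowercases, collapses every non-alphanumeric run into a single hyphen, trims edge hyphens, then truncates and re-trims, falling back to `"item"` for inputs with no alphanumerics. A quick self-contained illustration of that behavior (the sample inputs are invented):

```python
import re

def slugify(text: str, limit: int = 54) -> str:
    # Same shape as the generator's helper: hyphenate non-alphanumeric
    # runs, trim edges, truncate, and fall back to "item" when empty.
    slug = re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")
    return slug[:limit].strip("-") or "item"

assert slugify("Phase 2: Event & Report Contracts!") == "phase-2-event-report-contracts"
assert slugify("***") == "item"
```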
def find_source_omx(repo_root: Path) -> Path:
candidates = []
import os
env = os.environ.get("CC2_SOURCE_OMX")
if env:
candidates.append(Path(env).expanduser())
candidates.append(repo_root / ".omx")
candidates.extend(parent / ".omx" for parent in repo_root.parents)
for candidate in candidates:
if (candidate / "plans" / "claw-code-2-0-adaptive-plan.md").exists() and (candidate / "research").exists():
return candidate
raise FileNotFoundError("could not locate source .omx with plans/claw-code-2-0-adaptive-plan.md and research/")
def parse_roadmap(path: Path) -> tuple[list[RoadmapRecord], list[RoadmapRecord]]:
headings: list[RoadmapRecord] = []
actions: list[RoadmapRecord] = []
stack: list[tuple[str, int, int]] = []
for line_no, line in enumerate(path.read_text(encoding="utf-8").splitlines(), 1):
heading = re.match(r"^(#{1,6})\s+(.*?)(?:\s+#+)?\s*$", line)
if heading:
level = len(heading.group(1))
title = heading.group(2).strip()
stack = [entry for entry in stack if entry[1] < level] + [(title, level, line_no)]
headings.append(RoadmapRecord(line_no, level, title, " > ".join(entry[0] for entry in stack), "roadmap_heading"))
continue
ordered = re.match(r"^(\s*)(\d+)\.\s+(.+?)\s*$", line)
if ordered and len(ordered.group(1)) <= 4:
title = ordered.group(3).strip()
if len(title) > 10:
actions.append(
RoadmapRecord(
line_no,
len(stack[-1][0]) if stack else 0,
title,
" > ".join(entry[0] for entry in stack),
"roadmap_action",
int(ordered.group(2)),
)
)
return headings, actions
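The heading regex in `parse_roadmap` captures one to six leading `#` characters as the level and drops an optional run of trailing `#` closers from the title. A small check of that pattern (the sample lines are invented):

```python
import re

# Same pattern parse_roadmap matches against each roadmap line.
HEADING = re.compile(r"^(#{1,6})\s+(.*?)(?:\s+#+)?\s*$")

match = HEADING.match("## Phase 2: Event Contracts ##")
assert match is not None
level = len(match.group(1))     # number of leading '#'
title = match.group(2).strip()  # trailing '#' closers are not captured
```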
def category_for(text: str) -> str:
lower = text.lower()
for category, needles in CATEGORY_KEYWORDS:
if any(needle in lower for needle in needles):
return category
return "governance"
def stream_for(record: RoadmapRecord) -> str:
title = record.title.lower()
path = record.path.lower()
combined = f"{path} {title}"
if "phase 1" in combined or category_for(combined) == "boot":
return "stream_1_worker_boot_session_control"
if "phase 2" in combined or category_for(combined) == "event_report":
return "stream_2_event_reporting_contracts"
if "phase 3" in combined or category_for(combined) == "branch_recovery":
return "stream_3_branch_test_recovery"
if "phase 4" in combined or category_for(combined) == "task_policy":
return "stream_4_claws_first_execution"
if "phase 5" in combined or category_for(combined) == "plugin_mcp":
return "stream_5_plugin_mcp_lifecycle"
if any(k in combined for k in ["windows", "install", "provider", "docs", "license", "session hygiene", "compact"]):
return "adoption_overlay"
if any(k in combined for k in ["zed", "acp", "desktop", "marketplace", "package"]):
return "parity_overlay"
return "stream_0_governance"
def release_bucket_for(record: RoadmapRecord, status: str) -> str:
combined = f"{record.path} {record.title}".lower()
category = category_for(combined)
if status == "context":
return "context"
if status == "rejected_not_claw":
return "rejected_not_claw"
if any(k in combined for k in ["phase 1", "phase 2", "phase 3", "phase 4", "p0", "p1", "security", "sandbox", "trust", "worker", "event", "branch freshness"]):
return "alpha_blocker"
if category in {"windows_install", "provider", "sessions", "docs_license", "anti_slop"}:
return "beta_adoption"
if category in {"plugin_mcp", "ide_acp", "ux_tui"}:
return "ga_ecosystem"
if any(k in combined for k in ["desktop", "share", "cloud", "research", "post-2.0", "future"]):
return "post_2_0_research"
if "pinpoint" in combined:
return "alpha_blocker"
return "beta_adoption"
def status_for(record: RoadmapRecord) -> str:
title = record.title
combined = f"{record.path} {title}".lower()
if record.source_type == "roadmap_heading" and (record.level <= 2 or title in STRUCTURAL_HEADINGS):
# Phase headings are active work containers; other h1/h2 prose headings are context unless fixed/deferred wording says otherwise.
if title.startswith("Phase "):
return "active"
if "pinpoint" not in title.lower() and not any(word in combined for word in ["gap", "routing"]):
return "context"
if any(word in combined for word in ["rejected_not_claw", "not claw", "outside claw"]):
return "rejected_not_claw"
if "superseded" in combined:
return "superseded"
if "deferred" in combined or "post-2.0" in combined or "post_2_0" in combined:
return "deferred_with_rationale"
if any(word in combined for word in ["done", "implemented", "fixed", "verified", "re-verified", "landed", "green"]):
if any(word in combined for word in ["stale", "old filing", "original filing below", "no longer reproduces"]):
return "stale_done"
return "done_verify"
    if title.lower().startswith(("evidence for", "trace path", "actual root cause", "meta-lesson")):
        return "context"
    return "open" if "pinpoint" in combined or record.source_type == "roadmap_action" else "active"

def deferral_for(record: RoadmapRecord, status: str) -> str:
    if status == "deferred_with_rationale":
        return "Deferred by roadmap/approved plan until prerequisite contracts or post-2.0 research admission gates are satisfied."
    if status == "rejected_not_claw":
        return "Rejected because the source describes clone-only breadth or behavior outside Claw's machine-truth/clawable-harness identity."
    if status == "superseded":
        return "Superseded by a newer roadmap entry or canonical Rust/control-plane contract; keep only for audit traceability."
    if status == "stale_done":
        return "Marked done in roadmap but needs freshness re-verification before being used as release evidence."
    return ""

def verification_for(record: RoadmapRecord, status: str) -> str:
    if status == "context":
        return "none_context_only"
    if status in {"done_verify", "stale_done"}:
        return "verify_existing_evidence_and_regression_guard"
    cat = category_for(f"{record.path} {record.title}")
    if cat == "docs_license":
        return "docs_snapshot_or_help_output_check"
    if cat == "windows_install":
        return "install_matrix_or_cross_platform_smoke"
    if cat == "provider":
        return "provider_routing_contract_test"
    if cat == "plugin_mcp":
        return "plugin_mcp_lifecycle_contract_test"
    if cat == "event_report":
        return "schema_golden_fixture_or_consumer_contract_test"
    if cat == "branch_recovery":
        return "git_fixture_or_recovery_recipe_test"
    if cat == "boot":
        return "worker_boot_state_machine_or_cli_json_contract_test"
    return "targeted_regression_or_acceptance_test_required"

def dependencies_for(record: RoadmapRecord, status: str) -> list[str]:
    combined = f"{record.path} {record.title}".lower()
    deps: list[str] = []
    if status == "context":
        return deps
    if "phase 2" in combined or category_for(combined) == "event_report":
        deps.append("stream_1_worker_boot_session_control")
    if "phase 3" in combined or category_for(combined) == "branch_recovery":
        deps.append("stream_2_event_reporting_contracts")
    if "phase 4" in combined or category_for(combined) == "task_policy":
        deps.append("stream_2_event_reporting_contracts")
    if "phase 5" in combined or category_for(combined) == "plugin_mcp":
        deps.append("stream_1_worker_boot_session_control")
    if any(k in combined for k in ["zed", "acp", "desktop", "marketplace"]):
        deps.append("stable_alpha_contracts")
    if any(k in combined for k in ["provider", "install", "windows", "docs", "license"]):
        deps.append("adoption_overlay_triage")
    return sorted(set(deps))

def roadmap_item(record: RoadmapRecord, index: int) -> dict[str, Any]:
    status = status_for(record)
    item_id = f"CC2-RM-{'H' if record.source_type == 'roadmap_heading' else 'A'}{index:04d}-{slugify(record.title, 40)}"
    bucket = release_bucket_for(record, status)
    return {
        "id": item_id,
        "title": record.title,
        "source_anchor": f"ROADMAP.md:L{record.line}",
        "source_type": record.source_type,
        "source_path": "ROADMAP.md",
        "source_context": record.path,
        "source_line": record.line,
        "source_level": record.level if record.source_type == "roadmap_heading" else None,
        "source_ordinal": record.ordinal,
        "release_bucket": bucket,
        "lifecycle_status": status,
        "status": status,
        "category": category_for(f"{record.path} {record.title}"),
        "owner_lane": stream_for(record),
        "dependencies": dependencies_for(record, status),
        "verification_required": verification_for(record, status),
        "deferral_rationale": deferral_for(record, status),
    }

def load_json(path: Path) -> Any:
    return json.loads(path.read_text(encoding="utf-8"))

def issue_item(issue: dict[str, Any], source_name: str, source_type: str, bucket: str) -> dict[str, Any]:
    title = issue.get("title") or f"Issue #{issue.get('number')}"
    number = issue.get("number")
    body = f"{title} {issue.get('body') or ''}"
    status = "open" if issue.get("state", "OPEN").lower() != "closed" else "done_verify"
    return {
        "id": f"CC2-ISSUE-{source_name.upper()}-{number}",
        "title": title,
        "source_anchor": f".omx/research/{source_name}.json#issue-{number}",
        "source_type": source_type,
        "source_path": f".omx/research/{source_name}.json",
        "issue_number": number,
        "issue_url": issue.get("url"),
        "release_bucket": bucket,
        "lifecycle_status": status,
        "status": status,
        "category": category_for(body),
        "owner_lane": stream_for(RoadmapRecord(0, 0, title, title, source_type)),
        "dependencies": ["roadmap_board_triage"],
        "verification_required": "issue_acceptance_repro_or_triage_decision",
        "deferral_rationale": "Latest issue intake is admitted only when it matches freeze/admission rules; otherwise remains 2.x_intake." if bucket == "2.x_intake" else "",
    }

def repo_context_item(meta: dict[str, Any], source_name: str) -> dict[str, Any]:
    owner = meta.get("nameWithOwner", source_name)
    return {
        "id": f"CC2-PARITY-{source_name.upper()}-REPO-CONTEXT",
        "title": f"Parity source metadata: {owner}",
        "source_anchor": f".omx/research/{source_name}-repo.json",
        "source_type": "parity_repo_context",
        "source_path": f".omx/research/{source_name}-repo.json",
        "release_bucket": "context",
        "lifecycle_status": "context",
        "status": "context",
        "category": "governance",
        "owner_lane": "parity_overlay",
        "dependencies": [],
        "verification_required": "none_context_only",
        "deferral_rationale": "",
        "repo": {
            "nameWithOwner": owner,
            "url": meta.get("url"),
            "pushedAt": meta.get("pushedAt"),
            "latestRelease": meta.get("latestRelease"),
            "licenseInfo": meta.get("licenseInfo"),
        },
    }

def summarize_counts(items: list[dict[str, Any]], key: str) -> dict[str, int]:
    out: dict[str, int] = {}
    for item in items:
        out[item[key]] = out.get(item[key], 0) + 1
    return dict(sorted(out.items()))

def render_markdown(board: dict[str, Any]) -> str:
    lines = [
        "# Claw Code 2.0 Canonical Board",
        "",
        f"Generated: `{board['generated_at']}`",
        f"Roadmap SHA-256 prefix: `{board['sources']['roadmap']['sha256_prefix']}`",
        "",
        "## Summary",
        "",
        f"- Total items: **{len(board['items'])}**",
        f"- Roadmap headings covered: **{board['coverage']['roadmap_headings_mapped']} / {board['coverage']['roadmap_headings_total']}**",
        f"- Roadmap ordered actions covered: **{board['coverage']['roadmap_actions_mapped']} / {board['coverage']['roadmap_actions_total']}**",
        "",
        "### By lifecycle status",
        "",
    ]
    for status, count in board["summary"]["by_status"].items():
        lines.append(f"- `{status}`: {count}")
    lines.extend(["", "### By release bucket", ""])
    for bucket, count in board["summary"]["by_release_bucket"].items():
        lines.append(f"- `{bucket}`: {count}")
    lines.extend(["", "## Board Items", ""])
    for item in board["items"]:
        deps = ", ".join(item.get("dependencies") or []) or "none"
        rationale = item.get("deferral_rationale") or ""
        lines.extend([
            f"### {item['id']}",
            f"- Title: {item['title']}",
            f"- Source: `{item['source_anchor']}` (`{item['source_type']}`)",
            f"- Bucket/status: `{item['release_bucket']}` / `{item['status']}`",
            f"- Category/lane: `{item.get('category')}` / `{item.get('owner_lane')}`",
            f"- Dependencies: {deps}",
            f"- Verification: `{item['verification_required']}`",
            f"- Deferral rationale: {rationale}",
            "",
        ])
    return "\n".join(lines)

def validate_board(board: dict[str, Any]) -> list[str]:
    errors: list[str] = []
    seen = set()
    for index, item in enumerate(board.get("items", []), 1):
        missing = [field for field in REQUIRED_ITEM_FIELDS if field not in item]
        if missing:
            errors.append(f"item {index} missing fields: {missing}")
        if item.get("id") in seen:
            errors.append(f"duplicate id: {item.get('id')}")
        seen.add(item.get("id"))
        if item.get("status") not in STATUSES:
            errors.append(f"{item.get('id')} invalid status {item.get('status')}")
        if item.get("release_bucket") not in RELEASE_BUCKETS:
            errors.append(f"{item.get('id')} invalid release_bucket {item.get('release_bucket')}")
        if not isinstance(item.get("dependencies"), list):
            errors.append(f"{item.get('id')} dependencies must be list")
    coverage = board.get("coverage", {})
    if coverage.get("unmapped_roadmap_heading_lines"):
        errors.append(f"unmapped heading lines: {coverage['unmapped_roadmap_heading_lines']}")
    if coverage.get("duplicate_roadmap_heading_lines"):
        errors.append(f"duplicate heading lines: {coverage['duplicate_roadmap_heading_lines']}")
    if coverage.get("roadmap_headings_total") != coverage.get("roadmap_headings_mapped"):
        errors.append("roadmap heading total/mapped mismatch")
    return errors

def build_board(repo_root: Path) -> dict[str, Any]:
    roadmap_path = repo_root / "ROADMAP.md"
    source_omx = find_source_omx(repo_root)
    research = source_omx / "research"
    plan_path = source_omx / "plans" / "claw-code-2-0-adaptive-plan.md"
    headings, actions = parse_roadmap(roadmap_path)
    items = [roadmap_item(record, i) for i, record in enumerate(headings, 1)]
    items.extend(roadmap_item(record, i) for i, record in enumerate(actions, 1))
    latest_issues = load_json(research / "claw-open-latest.json")
    all_issues = load_json(research / "claw-issues.json")
    items.extend(issue_item(issue, "claw-open-latest", "latest_open_issue", "2.x_intake") for issue in latest_issues)
    # Include a small real-issue sample from the full freeze to keep the board tied to the larger issue manifest without exploding scope.
    for issue in all_issues[:50]:
        title_body = f"{issue.get('title', '')} {issue.get('body', '')}".lower()
        if any(k in title_body for k in ["security", "windows", "install", "provider", "model", "session", "license", "zed", "spam", "plugin"]):
            items.append(issue_item(issue, "claw-issues", "issue_theme", "beta_adoption"))
    for source_name in ["opencode", "codex"]:
        repo_meta = load_json(research / f"{source_name}-repo.json")
        items.append(repo_context_item(repo_meta, source_name))
    heading_lines = [record.line for record in headings]
    mapped_heading_lines = [item["source_line"] for item in items if item.get("source_type") == "roadmap_heading"]
    duplicate_heading_lines = sorted(line for line in set(mapped_heading_lines) if mapped_heading_lines.count(line) != 1)
    unmapped_heading_lines = sorted(set(heading_lines) - set(mapped_heading_lines))
    board = {
        "schema_version": "cc2.board.v1",
        "generated_at": datetime.now(timezone.utc).replace(microsecond=0).isoformat(),
        "generation_policy": {
            "ultragoal_mutation": "forbidden",
            "roadmap_coverage": "all markdown headings plus top-level ordered roadmap actions",
            "status_values": sorted(STATUSES),
            "release_buckets": sorted(RELEASE_BUCKETS),
        },
        "sources": {
            "roadmap": {
                "path": "ROADMAP.md",
                "sha256_prefix": sha256_prefix(roadmap_path),
                "heading_count": len(headings),
                "ordered_action_count": len(actions),
            },
            "approved_plan": {
                "path": ".omx/plans/claw-code-2-0-adaptive-plan.md",
                "sha256_prefix": sha256_prefix(plan_path),
            },
            "research": {
                "root": str(source_omx / "research"),
                "claw_open_latest_count": len(latest_issues),
                "claw_issues_count": len(all_issues),
                "opencode_repo": ".omx/research/opencode-repo.json",
                "codex_repo": ".omx/research/codex-repo.json",
            },
        },
        "coverage": {
            "roadmap_headings_total": len(headings),
            "roadmap_headings_mapped": len(mapped_heading_lines),
            "unmapped_roadmap_heading_lines": unmapped_heading_lines,
            "duplicate_roadmap_heading_lines": duplicate_heading_lines,
            "roadmap_actions_total": len(actions),
            "roadmap_actions_mapped": len([item for item in items if item.get("source_type") == "roadmap_action"]),
        },
        "summary": {},
        "items": items,
    }
    board["summary"] = {
        "by_status": summarize_counts(items, "status"),
        "by_release_bucket": summarize_counts(items, "release_bucket"),
        "by_source_type": summarize_counts(items, "source_type"),
        "by_owner_lane": summarize_counts(items, "owner_lane"),
    }
    errors = validate_board(board)
    if errors:
        raise SystemExit("board validation failed:\n" + "\n".join(errors))
    return board

def main() -> int:
    parser = argparse.ArgumentParser(description=__doc__)
    parser.add_argument("--repo-root", type=Path, default=Path.cwd())
    parser.add_argument("--out-dir", type=Path, default=None)
    args = parser.parse_args()
    repo_root = args.repo_root.resolve()
    out_dir = args.out_dir or (repo_root / ".omx" / "cc2")
    out_dir.mkdir(parents=True, exist_ok=True)
    board = build_board(repo_root)
    board_json = out_dir / "board.json"
    board_md = out_dir / "board.md"
    board_json.write_text(json.dumps(board, indent=2, sort_keys=True) + "\n", encoding="utf-8")
    renderer = repo_root / ".omx" / "cc2" / "render_board_md.py"
    if renderer.exists():
        subprocess.run([sys.executable, str(renderer), str(board_json), str(board_md)], check=True, cwd=str(repo_root))
    else:
        board_md.write_text(render_markdown(board) + "\n", encoding="utf-8")
    print(f"wrote {board_json}")
    print(f"wrote {board_md}")
    print(f"roadmap headings mapped: {board['coverage']['roadmap_headings_mapped']}/{board['coverage']['roadmap_headings_total']}")
    print(f"roadmap actions mapped: {board['coverage']['roadmap_actions_mapped']}/{board['coverage']['roadmap_actions_total']}")
    return 0

if __name__ == "__main__":
    raise SystemExit(main())
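The coverage bookkeeping in `build_board` reduces to two set operations over heading line numbers. A minimal self-contained sketch of the same logic (the sample line numbers are invented):

```python
# Sample mapped/expected heading line numbers (invented for illustration).
mapped = [3, 3, 9]        # lines the board items claim to cover; 3 is double-mapped
expected = [3, 9, 12]     # heading lines actually present in ROADMAP.md

# Same rule as build_board: a line is duplicated if mapped more than once,
# and unmapped if it is expected but never claimed by any item.
duplicates = sorted(line for line in set(mapped) if mapped.count(line) != 1)
unmapped = sorted(set(expected) - set(mapped))
print(duplicates, unmapped)  # → [3] [12]
```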

scripts/validate_cc2_board.py Executable file

@@ -0,0 +1,87 @@
#!/usr/bin/env python3
"""Validate the generated Claw Code 2.0 board coverage and schema."""
from __future__ import annotations
import argparse
import json
import re
from pathlib import Path

REQUIRED = {
    "id",
    "title",
    "source_anchor",
    "source_type",
    "release_bucket",
    "status",
    "dependencies",
    "verification_required",
    "deferral_rationale",
}
STATUSES = {
    "context",
    "active",
    "open",
    "done_verify",
    "stale_done",
    "superseded",
    "deferred_with_rationale",
    "rejected_not_claw",
}

def roadmap_heading_lines(path: Path) -> list[int]:
    lines = []
    for line_no, line in enumerate(path.read_text(encoding="utf-8").splitlines(), 1):
        if re.match(r"^#{1,6}\s+", line):
            lines.append(line_no)
    return lines

def main() -> int:
    parser = argparse.ArgumentParser(description=__doc__)
    parser.add_argument("--repo-root", type=Path, default=Path.cwd())
    parser.add_argument("--board", type=Path, default=None)
    args = parser.parse_args()
    repo_root = args.repo_root.resolve()
    board_path = args.board or (repo_root / ".omx" / "cc2" / "board.json")
    board = json.loads(board_path.read_text(encoding="utf-8"))
    errors: list[str] = []
    ids = set()
    for index, item in enumerate(board.get("items", []), 1):
        missing = REQUIRED - set(item)
        if missing:
            errors.append(f"item {index} missing required fields: {sorted(missing)}")
        if item.get("id") in ids:
            errors.append(f"duplicate id: {item.get('id')}")
        ids.add(item.get("id"))
        if item.get("status") not in STATUSES:
            errors.append(f"{item.get('id')} invalid status {item.get('status')}")
        if not isinstance(item.get("dependencies"), list):
            errors.append(f"{item.get('id')} dependencies must be list")
    expected = roadmap_heading_lines(repo_root / "ROADMAP.md")
    mapped = [item.get("source_line") for item in board.get("items", []) if item.get("source_type") == "roadmap_heading"]
    unmapped = sorted(set(expected) - set(mapped))
    duplicates = sorted(line for line in set(mapped) if mapped.count(line) != 1)
    if unmapped:
        errors.append(f"unmapped ROADMAP headings: {unmapped}")
    if duplicates:
        errors.append(f"duplicate ROADMAP heading mappings: {duplicates}")
    coverage = board.get("coverage", {})
    if coverage.get("roadmap_headings_total") != len(expected):
        errors.append("coverage roadmap_headings_total does not match ROADMAP.md")
    if coverage.get("roadmap_headings_mapped") != len(mapped):
        errors.append("coverage roadmap_headings_mapped does not match board items")
    if errors:
        print("FAIL cc2 board validation")
        for error in errors:
            print(f"- {error}")
        return 1
    print("PASS cc2 board validation")
    print(f"- board: {board_path}")
    print(f"- items: {len(board.get('items', []))}")
    print(f"- ROADMAP headings mapped: {len(mapped)}/{len(expected)}")
    print(f"- ROADMAP actions mapped: {coverage.get('roadmap_actions_mapped')}/{coverage.get('roadmap_actions_total')}")
    return 0

if __name__ == "__main__":
    raise SystemExit(main())
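The validator re-derives expected coverage straight from raw markdown; the heading scan is just a line-anchored regex. A minimal sketch with an invented ROADMAP snippet:

```python
import re

# Invented ROADMAP snippet: lines 1, 2, and 4 are markdown headings.
roadmap_text = "# Claw Code 2.0\n## Phase 2\nprose line\n### Ordered actions\n"

# Same scan as roadmap_heading_lines: one to six '#' followed by whitespace.
heading_lines = [
    line_no
    for line_no, line in enumerate(roadmap_text.splitlines(), 1)
    if re.match(r"^#{1,6}\s+", line)
]
print(heading_lines)  # → [1, 2, 4]
```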


@@ -23,6 +23,7 @@ class PortingModule:
class PermissionDenial:
    tool_name: str
    reason: str
    status: str = 'blocked'

@dataclass(frozen=True)

src/path_scope.py Normal file

@@ -0,0 +1,150 @@
from __future__ import annotations
import glob
import os
import re
import shlex
from dataclasses import dataclass
from pathlib import Path, PureWindowsPath

_GLOB_META = set('*?[')
_WINDOWS_DRIVE_RE = re.compile(r'^[A-Za-z]:[\\/]')
_WINDOWS_UNC_RE = re.compile(r'^(?:\\\\|//)[^\\/]+[\\/][^\\/]+')
_ENV_ASSIGNMENT_RE = re.compile(r'^[A-Za-z_][A-Za-z0-9_]*=')

@dataclass(frozen=True)
class PathScopeDecision:
    allowed: bool
    reason: str
    candidate: str | None = None
    resolved: str | None = None

@dataclass(frozen=True)
class WorkspacePathScope:
    """Validate tool/shell path operands against explicit workspace roots.

    The policy is intentionally conservative for the Python port: any candidate
    path that resolves outside the configured roots is denied, including paths
    reached through symlinks or glob expansion. Windows drive/UNC paths are
    treated as out-of-scope on POSIX roots unless an allowed root is also a
    Windows-style root with the same prefix.
    """

    roots: tuple[Path, ...]

    @classmethod
    def from_root(cls, root: str | Path) -> 'WorkspacePathScope':
        return cls.from_roots((root,))

    @classmethod
    def from_roots(cls, roots: tuple[str | Path, ...] | list[str | Path]) -> 'WorkspacePathScope':
        resolved_roots = tuple(Path(root).expanduser().resolve(strict=False) for root in roots)
        if not resolved_roots:
            raise ValueError('at least one workspace root is required')
        return cls(resolved_roots)

    def validate_payload(self, payload: str, cwd: str | Path | None = None) -> PathScopeDecision:
        cwd_path = Path(cwd).expanduser().resolve(strict=False) if cwd else self.roots[0]
        cwd_decision = self.validate_path(cwd_path)
        if not cwd_decision.allowed:
            return PathScopeDecision(False, f'cwd outside workspace scope: {cwd_path}', str(cwd_path), cwd_decision.resolved)
        for candidate in extract_path_candidates(payload):
            decision = self.validate_path(candidate, cwd_path)
            if not decision.allowed:
                return decision
        return PathScopeDecision(True, 'all path candidates are inside workspace scope')

    def validate_path(self, candidate: str | Path, cwd: str | Path | None = None) -> PathScopeDecision:
        raw = os.path.expandvars(os.path.expanduser(str(candidate)))
        if _is_windows_absolute(raw):
            return self._validate_windows_path(raw)
        base = Path(cwd).expanduser().resolve(strict=False) if cwd else self.roots[0]
        path = Path(raw)
        if not path.is_absolute():
            path = base / path
        expanded = self._expand_glob(path)
        for expanded_path in expanded:
            resolved = expanded_path.resolve(strict=False)
            if not any(_is_relative_to(resolved, root) for root in self.roots):
                return PathScopeDecision(
                    False,
                    'path resolves outside workspace scope',
                    str(candidate),
                    str(resolved),
                )
        return PathScopeDecision(True, 'path is inside workspace scope', str(candidate), str(expanded[0].resolve(strict=False)))

    def _expand_glob(self, path: Path) -> tuple[Path, ...]:
        path_text = str(path)
        if any(char in path_text for char in _GLOB_META):
            matches = tuple(Path(match) for match in glob.glob(path_text, recursive=True))
            if matches:
                return matches
            # For unmatched globs, validate the stable non-glob parent prefix.
            stable_parts: list[str] = []
            for part in path.parts:
                if any(char in part for char in _GLOB_META):
                    break
                stable_parts.append(part)
            if stable_parts:
                return (Path(*stable_parts),)
        return (path,)

    def _validate_windows_path(self, raw: str) -> PathScopeDecision:
        candidate = PureWindowsPath(raw)
        for root in self.roots:
            root_text = str(root)
            if not _is_windows_absolute(root_text):
                continue
            try:
                candidate.relative_to(PureWindowsPath(root_text))
                return PathScopeDecision(True, 'windows path is inside workspace scope', raw, str(candidate))
            except ValueError:
                continue
        return PathScopeDecision(False, 'windows absolute path is outside workspace scope', raw, str(candidate))

def extract_path_candidates(payload: str) -> tuple[str, ...]:
    """Return conservative path-like operands from a shell/tool payload."""
    try:
        tokens = shlex.split(payload, posix=True)
    except ValueError:
        tokens = payload.split()
    raw_tokens = payload.split()
    candidates: list[str] = []
    for token in (*tokens, *raw_tokens):
        if not token or token.startswith('-') or _ENV_ASSIGNMENT_RE.match(token):
            continue
        expanded = os.path.expandvars(os.path.expanduser(token))
        if _looks_like_path(token) or _looks_like_path(expanded):
            candidate = expanded if _looks_like_path(expanded) else token
            if candidate not in candidates:
                candidates.append(candidate)
    return tuple(candidates)

def _looks_like_path(token: str) -> bool:
    return (
        token in {'.', '..'}
        or token.startswith(('./', '../', '/', '~/'))
        or '..' in token.split('/')
        or '/' in token
        or '\\' in token
        or any(char in token for char in _GLOB_META)
        or _is_windows_absolute(token)
    )

def _is_windows_absolute(value: str) -> bool:
    return bool(_WINDOWS_DRIVE_RE.match(value) or _WINDOWS_UNC_RE.match(value))

def _is_relative_to(path: Path, root: Path) -> bool:
    try:
        path.relative_to(root)
        return True
    except ValueError:
        return False
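The denial decisions above ultimately rest on `Path.resolve` plus a containment check. A self-contained sketch of that core (the helper name `inside` is ours, mirroring the module's `_is_relative_to`):

```python
import tempfile
from pathlib import Path

def inside(path: Path, root: Path) -> bool:
    # Mirror of the module's _is_relative_to containment test.
    try:
        path.relative_to(root)
        return True
    except ValueError:
        return False

with tempfile.TemporaryDirectory() as tmp:
    workspace = Path(tmp) / 'workspace'
    workspace.mkdir()
    workspace = workspace.resolve()
    # '..' is collapsed by resolve(), so the escape is visible before any I/O.
    escape = (workspace / '..' / 'secret.txt').resolve(strict=False)
    stays = (workspace / 'notes.txt').resolve(strict=False)
    results = (inside(escape, workspace), inside(stays, workspace))
print(results)  # → (False, True)
```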


@@ -1,20 +1,49 @@
from __future__ import annotations
from dataclasses import dataclass, field
from pathlib import Path
from .path_scope import PathScopeDecision, WorkspacePathScope

@dataclass(frozen=True)
class ToolPermissionContext:
    deny_names: frozenset[str] = field(default_factory=frozenset)
    deny_prefixes: tuple[str, ...] = ()
    workspace_scope: WorkspacePathScope | None = None
    cwd: Path | None = None

    @classmethod
    def from_iterables(
        cls,
        deny_names: list[str] | None = None,
        deny_prefixes: list[str] | None = None,
        workspace_root: str | Path | None = None,
        workspace_roots: list[str | Path] | tuple[str | Path, ...] | None = None,
        cwd: str | Path | None = None,
    ) -> 'ToolPermissionContext':
        roots: list[str | Path] = []
        if workspace_roots:
            roots.extend(workspace_roots)
        if workspace_root is not None:
            roots.append(workspace_root)
        return cls(
            deny_names=frozenset(name.lower() for name in (deny_names or [])),
            deny_prefixes=tuple(prefix.lower() for prefix in (deny_prefixes or [])),
            workspace_scope=WorkspacePathScope.from_roots(roots) if roots else None,
            cwd=Path(cwd).expanduser().resolve(strict=False) if cwd is not None else None,
        )

    def blocks(self, tool_name: str) -> bool:
        lowered = tool_name.lower()
        return lowered in self.deny_names or any(lowered.startswith(prefix) for prefix in self.deny_prefixes)

    def validate_payload_scope(self, tool_name: str, payload: str) -> PathScopeDecision:
        if self.workspace_scope is None or not _scope_checked_tool(tool_name):
            return PathScopeDecision(True, 'workspace path scope not required for this tool')
        return self.workspace_scope.validate_payload(payload, cwd=self.cwd)

def _scope_checked_tool(tool_name: str) -> bool:
    lowered = tool_name.lower()
    return any(marker in lowered for marker in ('bash', 'shell', 'powershell', 'fileread', 'filewrite', 'fileedit'))
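`blocks` is a case-insensitive name/prefix match. A tiny standalone sketch of the same rule (the deny entries are invented for illustration):

```python
# Invented deny configuration, lowercased the way ToolPermissionContext stores it.
deny_names = frozenset({'bashtool'})
deny_prefixes = ('filewrite',)

def blocks(tool_name: str) -> bool:
    # Same rule as ToolPermissionContext.blocks: exact name or prefix match.
    lowered = tool_name.lower()
    return lowered in deny_names or any(lowered.startswith(p) for p in deny_prefixes)

decisions = [blocks('BashTool'), blocks('FileWriteTool'), blocks('GrepTool')]
print(decisions)  # → [True, True, False]
```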


@@ -82,6 +82,7 @@ class QueryEnginePort:
            f'Matched commands: {", ".join(matched_commands) if matched_commands else "none"}',
            f'Matched tools: {", ".join(matched_tools) if matched_tools else "none"}',
            f'Permission denials: {len(denied_tools)}',
            *(f'Permission denial: {denial.tool_name} status={denial.status} reason={denial.reason}' for denial in denied_tools),
        ]
        output = self._format_output(summary_lines)
        projected_usage = self.total_usage.add_turn(prompt, output)
@@ -116,7 +117,13 @@ class QueryEnginePort:
        if matched_tools:
            yield {'type': 'tool_match', 'tools': matched_tools}
        if denied_tools:
            yield {
                'type': 'permission_denial',
                'denials': [
                    {'tool_name': denial.tool_name, 'reason': denial.reason, 'status': denial.status}
                    for denial in denied_tools
                ],
            }
        result = self.submit_message(prompt, matched_commands, matched_tools, denied_tools)
        yield {'type': 'message_delta', 'text': result.output}
        yield {


@@ -78,10 +78,25 @@ def find_tools(query: str, limit: int = 20) -> list[PortingModule]:
    return matches[:limit]

def execute_tool(name: str, payload: str = '', permission_context: ToolPermissionContext | None = None) -> ToolExecution:
    module = get_tool(name)
    if module is None:
        return ToolExecution(name=name, source_hint='', payload=payload, handled=False, message=f'Unknown mirrored tool: {name}')
    if permission_context and permission_context.blocks(module.name):
        return ToolExecution(name=module.name, source_hint=module.source_hint, payload=payload, handled=False, message=f"Permission denied for mirrored tool '{module.name}'.")
    if permission_context:
        scope_decision = permission_context.validate_payload_scope(module.name, payload)
        if not scope_decision.allowed:
            return ToolExecution(
                name=module.name,
                source_hint=module.source_hint,
                payload=payload,
                handled=False,
                message=(
                    f"Permission denied for mirrored tool '{module.name}': {scope_decision.reason}"
                    f" (candidate={scope_decision.candidate!r}, resolved={scope_decision.resolved!r})."
                ),
            )
    action = f"Mirrored tool '{module.name}' from {module.source_hint} would handle payload {payload!r}."
    return ToolExecution(name=module.name, source_hint=module.source_hint, payload=payload, handled=True, message=action)
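`execute_tool` applies its gates in a fixed order: unknown tool, then deny-list, then workspace scope. A hypothetical condensed mirror of that ordering (function name and return strings are ours, not the module's):

```python
def gate(known: bool, name_denied: bool, scope_ok: bool) -> str:
    # Condensed mirror of execute_tool's decision order.
    if not known:
        return 'unknown'
    if name_denied:
        return 'denied:name'
    if not scope_ok:
        return 'denied:scope'
    return 'handled'

outcomes = [
    gate(False, False, True),   # unknown tool short-circuits first
    gate(True, True, False),    # deny-list wins before scope is consulted
    gate(True, False, False),   # scope violation
    gate(True, False, True),    # allowed
]
print(outcomes)  # → ['unknown', 'denied:name', 'denied:scope', 'handled']
```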


@@ -0,0 +1,135 @@
from __future__ import annotations
import os
import tempfile
import unittest
from pathlib import Path
from src.models import PermissionDenial
from src.path_scope import WorkspacePathScope, extract_path_candidates
from src.permissions import ToolPermissionContext
from src.query_engine import QueryEnginePort
from src.tools import execute_tool

class WorkspacePathScopeTests(unittest.TestCase):
    def test_direct_parent_escape_is_denied(self) -> None:
        with tempfile.TemporaryDirectory() as tmp:
            workspace = Path(tmp) / 'workspace'
            workspace.mkdir()
            decision = WorkspacePathScope.from_root(workspace).validate_payload('cat ../secret.txt')
            self.assertFalse(decision.allowed)
            self.assertIn('outside workspace scope', decision.reason)

    def test_issue_3007_symlink_escape_is_denied(self) -> None:
        with tempfile.TemporaryDirectory() as tmp:
            root = Path(tmp)
            workspace = root / 'workspace'
            outside = root / 'outside'
            workspace.mkdir()
            outside.mkdir()
            (outside / 'secret.txt').write_text('secret')
            link = workspace / 'linked-outside'
            link.symlink_to(outside, target_is_directory=True)
            decision = WorkspacePathScope.from_root(workspace).validate_payload('cat linked-outside/secret.txt')
            self.assertFalse(decision.allowed)
            self.assertIn(str(outside.resolve()), decision.resolved or '')

    def test_glob_expansion_must_stay_inside_workspace(self) -> None:
        with tempfile.TemporaryDirectory() as tmp:
            root = Path(tmp)
            workspace = root / 'workspace'
            outside = root / 'outside'
            workspace.mkdir()
            outside.mkdir()
            (outside / 'secret.txt').write_text('secret')
            decision = WorkspacePathScope.from_root(workspace).validate_payload(f'cat {outside}/*.txt')
            self.assertFalse(decision.allowed)
            self.assertEqual(str((outside / 'secret.txt').resolve()), decision.resolved)

    def test_shell_environment_expansion_is_validated(self) -> None:
        with tempfile.TemporaryDirectory() as tmp:
            root = Path(tmp)
            workspace = root / 'workspace'
            outside = root / 'outside'
            workspace.mkdir()
            outside.mkdir()
            previous = os.environ.get('CLAW_SCOPE_OUTSIDE')
            os.environ['CLAW_SCOPE_OUTSIDE'] = str(outside)
            try:
                self.assertEqual((f'{outside}/secret.txt',), extract_path_candidates('cat $CLAW_SCOPE_OUTSIDE/secret.txt'))
                decision = WorkspacePathScope.from_root(workspace).validate_payload('cat $CLAW_SCOPE_OUTSIDE/secret.txt')
            finally:
                if previous is None:
                    os.environ.pop('CLAW_SCOPE_OUTSIDE', None)
                else:
                    os.environ['CLAW_SCOPE_OUTSIDE'] = previous
            self.assertFalse(decision.allowed)
            self.assertIn(str(outside.resolve()), decision.resolved or '')

    def test_explicit_worktree_roots_are_allowed(self) -> None:
        with tempfile.TemporaryDirectory() as tmp:
            root = Path(tmp)
            workspace = root / 'workspace'
            worktree = root / 'worktree'
            workspace.mkdir()
            worktree.mkdir()
            (worktree / 'file.txt').write_text('ok')
            decision = WorkspacePathScope.from_roots((workspace, worktree)).validate_payload(f'cat {worktree}/file.txt')
            self.assertTrue(decision.allowed, decision.reason)

    def test_windows_absolute_paths_are_denied_for_posix_workspace(self) -> None:
        with tempfile.TemporaryDirectory() as tmp:
            workspace = Path(tmp) / 'workspace'
            workspace.mkdir()
            drive_decision = WorkspacePathScope.from_root(workspace).validate_payload(r'type C:\Users\other\secret.txt')
            unc_decision = WorkspacePathScope.from_root(workspace).validate_payload(r'type \\server\share\secret.txt')
            self.assertFalse(drive_decision.allowed)
            self.assertIn('windows absolute path', drive_decision.reason)
            self.assertFalse(unc_decision.allowed)
            self.assertIn('windows absolute path', unc_decision.reason)

    def test_file_and_shell_tools_use_workspace_scope_context(self) -> None:
        with tempfile.TemporaryDirectory() as tmp:
            root = Path(tmp)
            workspace = root / 'workspace'
            outside = root / 'outside'
            workspace.mkdir()
            outside.mkdir()
            context = ToolPermissionContext.from_iterables(workspace_root=workspace, cwd=workspace)
            file_result = execute_tool('FileReadTool', f'{outside}/secret.txt', permission_context=context)
            shell_result = execute_tool('BashTool', f'cat {outside}/secret.txt', permission_context=context)
            inside_result = execute_tool('FileReadTool', './allowed.txt', permission_context=context)
            self.assertFalse(file_result.handled)
            self.assertIn('Permission denied', file_result.message)
            self.assertFalse(shell_result.handled)
            self.assertIn('Permission denied', shell_result.message)
            self.assertTrue(inside_result.handled)

    def test_permission_denial_stream_events_expose_status_and_reason(self) -> None:
        engine = QueryEnginePort.from_workspace()
        denial = PermissionDenial('BashTool', 'path resolves outside workspace scope')
        events = list(engine.stream_submit_message('cat ../secret.txt', matched_tools=('BashTool',), denied_tools=(denial,)))
        permission_event = next(event for event in events if event['type'] == 'permission_denial')
        result = engine.submit_message('cat ../secret.txt', matched_tools=('BashTool',), denied_tools=(denial,))
        self.assertEqual('blocked', permission_event['denials'][0]['status'])
        self.assertEqual('path resolves outside workspace scope', permission_event['denials'][0]['reason'])
        self.assertIn('status=blocked', result.output)
        self.assertIn('path resolves outside workspace scope', result.output)

if __name__ == '__main__':
    unittest.main()