omx(team): merge worker-2

This commit is contained in:
bellman
2026-05-15 11:39:07 +09:00
4 changed files with 243 additions and 2 deletions

View File

@@ -36,7 +36,7 @@ Claw Code is the public Rust implementation of the `claw` CLI agent harness.
The canonical implementation lives in [`rust/`](./rust), and the current source of truth for this repository is **ultraworkers/claw-code**.
> [!IMPORTANT]
> Start with [`USAGE.md`](./USAGE.md) for build, auth, CLI, session, and parity-harness workflows. For file submission/navigation questions, see [Navigation and file context](./docs/navigation-file-context.md). For local OpenAI-compatible models and offline skill installs, see [Local OpenAI-compatible providers and skills setup](./docs/local-openai-compatible-providers.md). Windows users can jump to the PowerShell-first [Windows install and release quickstart](./docs/windows-install-release.md). Make `claw doctor` your first health check after building, use [`rust/README.md`](./rust/README.md) for crate-level details, read [`PARITY.md`](./PARITY.md) for the current Rust-port checkpoint, and see [`docs/container.md`](./docs/container.md) for the container-first workflow.
>
> **ACP / Zed status:** `claw-code` does not ship an ACP/Zed daemon entrypoint yet. Run `claw acp` (or `claw --acp`) for the current status instead of guessing from source layout; `claw acp serve` is currently a discoverability alias only, and real ACP support remains tracked separately in `ROADMAP.md`.
@@ -206,6 +206,8 @@ cargo test --workspace
## Documentation map
- [`USAGE.md`](./USAGE.md) — quick commands, auth, sessions, config, parity harness
- [`docs/navigation-file-context.md`](./docs/navigation-file-context.md) — terminal navigation, scrollback, `@path` file context, attachments, and secret-safety guidance
- [`docs/local-openai-compatible-providers.md`](./docs/local-openai-compatible-providers.md) — Ollama/llama.cpp/vLLM setup, Claw multi-provider positioning, and local skills install checks
- [`docs/windows-install-release.md`](./docs/windows-install-release.md) — PowerShell-first install, release artifact, provider switching, and Windows/WSL notification smoke paths
- [`rust/README.md`](./rust/README.md) — crate map, CLI surface, features, workspace layout
- [`PARITY.md`](./PARITY.md) — parity status for the Rust port

View File

@@ -260,7 +260,7 @@ See the full [Windows install and release quickstart](./docs/windows-install-rel
## Local Models
`claw` can talk to local servers and provider gateways through either Anthropic-compatible or OpenAI-compatible endpoints. Use `ANTHROPIC_BASE_URL` with `ANTHROPIC_AUTH_TOKEN` for Anthropic-compatible services, or `OPENAI_BASE_URL` with `OPENAI_API_KEY` for OpenAI-compatible services. For copyable Ollama, llama.cpp, vLLM, raw `/v1/chat/completions`, and local skills install examples, see [`docs/local-openai-compatible-providers.md`](./docs/local-openai-compatible-providers.md).
### Anthropic-compatible endpoint
@@ -387,8 +387,16 @@ The API layer exposes a provider diagnostics snapshot via `api::provider_diagnos
For gateway features that are not first-class request fields yet, `MessageRequest::extra_body` passes through provider-specific JSON parameters such as `web_search_options` or `parallel_tool_calls`. Core protocol fields (`model`, `messages`, `stream`, `tools`, `tool_choice`, `max_tokens`, and `max_completion_tokens`) are protected and cannot be overridden through `extra_body`.
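To make the passthrough concrete, here is a rough sketch of what a merged request could look like on the wire, expressed as a raw `curl` call rather than Claw's actual serialization; the field values and the empty `web_search_options` object are illustrative only:
```bash
# Sketch only: core fields plus extra_body-style passthrough keys.
# Values are illustrative, not Claw's exact output.
curl -sS "$OPENAI_BASE_URL/chat/completions" \
  -H "Authorization: Bearer ${OPENAI_API_KEY:-local-dev-token}" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4.1-mini",
    "messages": [{"role": "user", "content": "ping"}],
    "stream": false,
    "parallel_tool_calls": false,
    "web_search_options": {}
  }'
```
The first three fields are core protocol fields that `extra_body` cannot override; the last two are the kind of gateway-specific keys it exists to pass through.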
## File context and navigation
Use `@path/to/file` in prompts to submit repository files as context, for example `Read @src/app.ts and explain the bug`, `Compare @old.md and @new.md`, or `Use @logs/error.txt as context and suggest a fix`. Prompt history, `Ctrl-r`, and long-output scrolling come from your shell, terminal, or tmux rather than from Claw itself. See [`docs/navigation-file-context.md`](./docs/navigation-file-context.md) for scrollback, attachment, and secret-redaction guidance.
## FAQ
### Is Claw Code Claude-only?
No. Claw Code is a Claude-Code-shaped workflow/runtime, not a Claude-only product. It can target Anthropic and OpenAI-compatible/provider-routed/local models depending on config. Non-Claude providers may require stricter response-shape and tool-call compatibility, so some workflows can be rougher than first-party Anthropic/OpenAI paths; provider-specific identity leaks are bugs, not product intent. See [`docs/local-openai-compatible-providers.md`](./docs/local-openai-compatible-providers.md) for local provider examples.
### What about Codex?
The name "codex" appears in the Claw Code ecosystem but it does **not** refer to OpenAI Codex (the code-generation model). Here is what it means in this project:
@@ -442,6 +450,18 @@ let client = build_http_client_with(&config).expect("proxy client");
- Empty values are treated as unset, so leaving `HTTPS_PROXY=""` in your shell will not enable a proxy; the sketch after this list shows the difference.
- If a proxy URL cannot be parsed, `claw` falls back to a direct (no-proxy) client so existing workflows keep working; double-check the URL if you expected the request to be tunnelled.
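A minimal sketch of both rules, assuming a local HTTP proxy on port 8888 (the port and the proxy itself are illustrative):
```bash
# Parseable proxy URL: the request is tunnelled through the proxy.
HTTPS_PROXY="http://127.0.0.1:8888" claw prompt "Reply exactly HELLO_WORLD_123"

# Empty value is treated as unset: this uses a direct (no-proxy) client.
HTTPS_PROXY="" claw prompt "Reply exactly HELLO_WORLD_123"
```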
## Skills
Use `/skills list` in the interactive REPL or `claw skills --output-format json` from the direct CLI to inspect installed skills. For offline/local installs, install the directory that contains `SKILL.md`, then verify the discovered name before invoking it:
```text
/skills install /absolute/path/to/my-skill
/skills list
/skills my-skill
```
If install succeeds but invocation fails with a provider HTTP error, treat provider setup separately: run `claw doctor` and a one-shot prompt smoke test before reinstalling the skill. See [`docs/local-openai-compatible-providers.md`](./docs/local-openai-compatible-providers.md#local-skills-install-from-disk) for the full checklist.
## Common operational commands
```bash

View File

@@ -0,0 +1,150 @@
# Local OpenAI-compatible providers and skills setup
This guide covers two common offline/local workflows:
1. running Claw against an OpenAI-compatible local model server such as Ollama, llama.cpp, or vLLM; and
2. installing local skills from disk so Claw can discover them without network access.
## Claw is not Claude-only
Claw Code is a Claude-Code-shaped workflow/runtime, not a Claude-only product. It supports Anthropic directly and can target OpenAI-compatible, provider-routed, and local models depending on configuration. Non-Claude providers are supported with honest caveats: they may require stricter tool-call and response-shape compatibility, and some slash/tool workflows can be rougher than first-party Anthropic/OpenAI paths. Provider-specific identity leaks are bugs, not intended product positioning.
If you need the most polished daily-driver experience for a specific non-Claude model today, compare that provider's native tools. If you need runtime/provider hackability, Claw's OpenAI-compatible route is the intended extension path.
## OpenAI-compatible routing basics
Set `OPENAI_BASE_URL` to the server's `/v1` endpoint and set `OPENAI_API_KEY` to either the required token or a harmless placeholder for local servers that expect an Authorization header. The model name must match what the server exposes.
```bash
export OPENAI_BASE_URL="http://127.0.0.1:11434/v1"
export OPENAI_API_KEY="local-dev-token"
claw --model "qwen3:latest" prompt "Reply exactly HELLO_WORLD_123"
```
Routing notes:
- Use the `openai/` prefix for OpenAI-compatible gateways when you need prefix routing to win over ambient Anthropic credentials, for example `--model "openai/gpt-4.1-mini"` with OpenRouter.
- For local servers, prefer the exact model ID reported by the server (`qwen3:latest`, `llama3.2`, `Qwen/Qwen2.5-Coder-7B-Instruct`, etc.). If your local gateway exposes slash-containing IDs, use that exact slug.
- If you have multiple provider keys in your environment, remove unrelated keys while smoke-testing a local route or choose a model prefix that unambiguously selects the intended provider; a key-isolation sketch follows these notes.
- Tool workflows need model/server support for OpenAI-compatible tool calls. Plain prompt smoke tests can pass even when slash/tool workflows still fail because the server returns an incompatible tool-call shape.
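As one way to apply the key-isolation note above, the sketch below clears ambient Anthropic credentials before a local smoke test. `ANTHROPIC_AUTH_TOKEN` and `ANTHROPIC_BASE_URL` are the variables this guide uses; any other provider keys in your shell are equally worth unsetting:
```bash
# Clear ambient Anthropic credentials so the local OpenAI-compatible
# route is unambiguous (adjust the variable list to your environment).
unset ANTHROPIC_AUTH_TOKEN ANTHROPIC_BASE_URL

export OPENAI_BASE_URL="http://127.0.0.1:11434/v1"
export OPENAI_API_KEY="local-dev-token"
claw --model "qwen3:latest" prompt "Reply exactly HELLO_WORLD_123"
```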
## Raw `/v1/chat/completions` smoke test
Before debugging Claw, verify the local server speaks the expected wire format:
```bash
curl -sS "$OPENAI_BASE_URL/chat/completions" \
-H "Authorization: Bearer ${OPENAI_API_KEY:-local-dev-token}" \
-H "Content-Type: application/json" \
-d '{
"model": "qwen3:latest",
"messages": [{"role": "user", "content": "Reply exactly HELLO_WORLD_123"}],
"stream": false
}'
```
Expected result: a JSON response with one assistant message containing `HELLO_WORLD_123`. If this fails, fix the local server, model name, or auth token before changing Claw settings.
## Ollama
Start Ollama and pull a model:
```bash
ollama pull qwen3:latest
ollama serve
```
In another shell:
```bash
export OPENAI_BASE_URL="http://127.0.0.1:11434/v1"
export OPENAI_API_KEY="local-dev-token"
claw --model "qwen3:latest" prompt "Reply exactly HELLO_WORLD_123"
```
If Ollama is running without auth and your build accepts authless local OpenAI-compatible servers, `unset OPENAI_API_KEY` is also acceptable. Use a placeholder token rather than a real cloud API key for local testing.
## llama.cpp server
Start a llama.cpp OpenAI-compatible server with the model name you want Claw to send:
```bash
llama-server -m ./models/qwen2.5-coder.gguf --host 127.0.0.1 --port 8080 --alias qwen2.5-coder
```
Then smoke-test through Claw:
```bash
export OPENAI_BASE_URL="http://127.0.0.1:8080/v1"
export OPENAI_API_KEY="local-dev-token"
claw --model "qwen2.5-coder" prompt "Reply exactly HELLO_WORLD_123"
```
## vLLM or another OpenAI-compatible server
Start vLLM with an OpenAI-compatible API server:
```bash
vllm serve Qwen/Qwen2.5-Coder-7B-Instruct --host 127.0.0.1 --port 8000
```
Then route Claw to it:
```bash
export OPENAI_BASE_URL="http://127.0.0.1:8000/v1"
export OPENAI_API_KEY="local-dev-token"
claw --model "Qwen/Qwen2.5-Coder-7B-Instruct" prompt "Reply exactly HELLO_WORLD_123"
```
## Local skills install from disk
Skills are discovered from Claw skill roots such as `.claw/skills/` in a workspace and `~/.claw/skills/` for user-level installs. Legacy `.codex/skills/` roots may also be scanned for compatibility, but new local Claw projects should prefer `.claw/skills/`.
A skill directory should contain a `SKILL.md` file with frontmatter:
```text
my-skill/
└── SKILL.md
```
```markdown
---
name: my-skill
description: Explain when this skill should be used.
---
# My Skill
Instructions for the agent go here.
```
Install a skill from a local path in the interactive REPL:
```text
/skills install /absolute/path/to/my-skill
/skills list
/skills my-skill
```
Or inspect skills from the direct CLI surface:
```bash
claw skills --output-format json
```
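Because skills are discovered from the roots listed above, manually copying a skill directory into a root should also make it discoverable for fully offline setups; this is an inference from the discovery behaviour rather than a documented install path, so confirm the result with `/skills list`:
```bash
# Sketch: manual placement into a user-level skill root.
# Assumes root-based discovery; confirm with /skills list afterwards.
mkdir -p ~/.claw/skills
cp -r /absolute/path/to/my-skill ~/.claw/skills/my-skill
```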
Offline install checklist:
- Install the specific skill directory, not only the repository root, unless that repository root itself contains `SKILL.md`.
- Keep the frontmatter `name` aligned with the directory name users will type.
- After installing, run `/skills list` or `claw skills --output-format json` to confirm the discovered name and source path.
- If a skill invocation fails with an HTTP/provider error, the skill may have installed correctly but the current model/provider call failed. Run `claw doctor`, verify provider credentials, and try a simple prompt smoke test before reinstalling the skill; a triage sketch follows this checklist.
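A minimal triage pass for that last case, using only commands shown elsewhere in this guide:
```bash
# 1. Confirm the skill was discovered at all.
claw skills --output-format json

# 2. Check provider and credential health independently of the skill.
claw doctor

# 3. One-shot smoke test; if this fails, fix the provider route first.
claw prompt "Reply exactly HELLO_WORLD_123"
```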
## Troubleshooting
| Symptom | Check |
|---|---|
| Claw still asks for Anthropic credentials | Use an explicit OpenAI-compatible model route or remove unrelated Anthropic env vars during local smoke tests. |
| `model not found` from local server | Use the exact model ID exposed by Ollama/llama.cpp/vLLM. |
| Plain prompt works but tools fail | Confirm the model/server supports OpenAI-compatible tool calls and response shapes. |
| Skill says installed but `/skills <name>` fails | Check `/skills list` for the discovered name and source; verify provider credentials separately with `claw doctor`. |
| A local docs/log file contains secrets | Redact it before using `@path` file context or attaching it to an issue. |

View File

@@ -0,0 +1,69 @@
# Navigation and file context guide
This guide answers the common “how do I browse output?” and “how do I submit a file?” questions for Claw Code. Claw is an agent CLI, not a full file manager: terminal navigation comes from your shell or terminal, while file context is passed explicitly in prompts.
## Prompt and terminal navigation
Use your terminal's normal controls for command history and long output:
- `Up` / `Down` usually move through shell or REPL prompt history.
- `Ctrl-r` searches shell history in most shells.
- Long command output is viewed with your terminal scrollback. In tmux, enter copy mode with `Ctrl-b [` then use arrows, PageUp/PageDown, search, or your mouse depending on tmux config.
- If output is too large to scroll comfortably, redirect it to a file and give that file to Claw as context:
```bash
cargo test --workspace 2>&1 | tee logs/test-output.txt
claw prompt "Use @logs/test-output.txt as context and summarize the failing tests."
```
Claw may provide slash commands that inspect workspace state, but those commands do not replace your terminal's scrollback or shell history.
## Submit repository files with `@path`
Mention files from the current workspace with `@` paths. Use relative paths from the repository or current working directory:
```text
Read @src/app.ts and explain the bug.
Compare @old.md and @new.md.
Use @logs/error.txt as context and suggest a fix.
Review @README.md and @docs/navigation-file-context.md for consistency.
```
Tips:
- Prefer the smallest useful file set. Large directories or logs can consume context quickly.
- Use exact paths when possible (`@rust/crates/runtime/src/lib.rs`) instead of vague descriptions.
- For generated logs, save them under a temporary or ignored directory such as `logs/` and reference the file.
- If the file is outside the repository, copy it into a safe workspace location first or use an app/UI attachment feature if your Claw surface supports attachments.
## Browse or inspect files
Claw can answer questions about files you reference, and you can ask it to inspect likely locations:
```text
Find where provider routing is implemented and summarize the relevant files.
Read @USAGE.md and tell me where local model setup is documented.
Search for the command that handles skills install, then explain the control flow.
```
For deterministic shell-side browsing, ordinary commands still work:
```bash
find docs -maxdepth 2 -type f | sort
rg -n "OPENAI_BASE_URL|skills install" USAGE.md docs rust
sed -n '250,340p' USAGE.md
```
## Attach external files where supported
Some UI surfaces let you drag and drop or attach files directly. When that is available, use attachments for files that should not be committed to the repo. In terminal-only usage, copy the file into the workspace, reference it with `@path`, then remove it when finished if it was temporary.
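In terminal-only usage that workflow might look like the following; the file name and source path are hypothetical:
```bash
# Hypothetical external file brought into the workspace temporarily.
cp /tmp/crash-report.txt logs/crash-report.txt
claw prompt "Use @logs/crash-report.txt as context and suggest a fix."
rm logs/crash-report.txt  # remove the copy when finished if it was temporary
```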
## Secret and credential safety
Do not paste real API keys, OAuth tokens, private logs, or customer data into prompts, issue comments, screenshots, or committed docs. Before submitting a file:
- Replace live keys with placeholders such as `sk-ant-REPLACE_ME`, `sk-or-v1-REPLACE_ME`, or `local-dev-token`.
- Redact bearer tokens, cookies, session IDs, and private base URLs; a rough redaction sketch closes this guide.
- Prefer minimal reproductions over full production logs.
- Keep `.env`, key files, and private logs out of git.
If a task requires credentials, describe the variable names and expected shapes instead of sharing the values.
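As a rough starting point for the redaction step, a pattern-based pass like the sketch below catches obvious `Bearer` tokens and `sk-`-style keys; the patterns are illustrative and not a substitute for reviewing the output by hand:
```bash
# Illustrative patterns only; always review the redacted file manually.
sed -E \
  -e 's/(Bearer )[^ "]+/\1REDACTED/g' \
  -e 's/sk-[A-Za-z0-9_-]+/sk-REDACTED/g' \
  logs/raw.log > logs/redacted.log
```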