d0zingcat
  • Joined on 2026-03-24
d0zingcat synced commits to main at d0zingcat/claude-code from mirror 2026-04-28 09:56:45 +00:00
cb56dc12ab Document Rust formatting wrapper
71686a20fc Resolve fmt wrapper path from its own directory
07992b8a1b Make Rust formatting guidance runnable from repo root
74ea754d29 Restore Rust formatting compliance
77afde768c Clarify allowed tool status handling
Compare 5 commits »
d0zingcat synced commits to main at d0zingcat/d0zingcat from mirror 2026-04-27 23:16:51 +00:00
00a34d2fed Updated with Dev Metrics
d0zingcat synced commits to main at d0zingcat/claude-code from mirror 2026-04-27 17:36:45 +00:00
6db68a2baa Expose tool permission gates as structured worker blockers
d0zingcat synced commits to feat/jobdori-168c-emission-routing at d0zingcat/claude-code from mirror 2026-04-27 17:36:45 +00:00
64ca305fda roadmap: #315 filed (lane_events emission path not wired; root cause of #314)
562a9aae51 roadmap: #314 filed (claw lanes stub indistinguishable from live zero-lane state)
6b5edb90c9 roadmap: add #313 — /status missing session ID, context limit, provider health, MCP state
5f6ffa6c83 audit: FINAL_AUDIT_SUMMARY.md created (discovery phase 410-459 consolidated handoff)
2333b0c39b roadmap: #312 filed (no context window saturation warning before token limit)
Compare 307 commits »
d0zingcat synced commits to main at d0zingcat/dotfiles from mirror 2026-04-27 15:08:49 +00:00
ee8a4c36e6 feat: update opencode;
d0zingcat synced commits to main at d0zingcat/claude-code from mirror 2026-04-27 09:26:45 +00:00
5b910356a2 Preserve trust boundaries during pulled follow-up
d0zingcat synced commits to feat/jobdori-168c-emission-routing at d0zingcat/claude-code from mirror 2026-04-27 09:26:45 +00:00
2b0412043a roadmap: #314 filed (claw lanes stub indistinguishable from live zero-lane state)
cf94a164d5 roadmap: add #313 — /status missing session ID, context limit, provider health, MCP state
ac781f9270 audit: FINAL_AUDIT_SUMMARY.md created (discovery phase 410-459 consolidated handoff)
dea3b87b44 roadmap: #312 filed (no context window saturation warning before token limit)
775c01aff3 docs: README updated with Documentation Overview section linking to 15+ doc suite and ROADMAP
Compare 306 commits »
d0zingcat synced commits to feat/jobdori-168c-emission-routing at d0zingcat/claude-code from mirror 2026-04-27 01:16:45 +00:00
d01ebd345b roadmap: #301 filed (no pre-built binary distribution or install script)
23b5d6a0ce roadmap: #300 filed (prompt misdelivery, ambiguous command routing no semantic matching)
eefcfe159f roadmap: #299 filed (/resume latest workspace-scope gap, cross-ref PR #2811)
4621f339a9 roadmap: #298 filed (event/log output unstructured, no machine-readable format)
40147e1db3 roadmap: #297 filed (MCP plugin connection crash no mid-session recovery)
Compare 20 commits »
d0zingcat synced commits to main at d0zingcat/d0zingcat from mirror 2026-04-26 23:16:51 +00:00
faf74af606 Updated with Dev Metrics
d0zingcat synced commits to feat/jobdori-168c-emission-routing at d0zingcat/claude-code from mirror 2026-04-26 17:06:46 +00:00
29b5d08254 docs: add docs/ARCHITECTURE.md with crate layout, request flow, subsystem map
6c5f150d68 roadmap: #292 filed (extreme-sustained-degradation user-facing escalation)
65aa4378ff docs: add CHANGELOG.md documenting extended dogfood audit (cycles #410-#424)
4fc2265d38 docs: expand TROUBLESHOOTING.md with context-window, /compact, parallel-agent, repeat-upstream sections
45b5e5c2b3 docs: add docs/PINPOINT_FILING_GUIDE.md with #290 worked example
Compare 192 commits »
d0zingcat synced commits to feat/jobdori-168c-emission-routing at d0zingcat/claude-code from mirror 2026-04-26 08:56:44 +00:00
ad75fea2ad roadmap: #282 filed
db27ac20bc roadmap: #281 filed
cf32b83d59 roadmap: #280 filed
bdcf3fae9d roadmap: #279 filed
6c154c94a9 roadmap: #278 filed
Compare 35 commits »
d0zingcat synced commits to feat/jobdori-168c-emission-routing at d0zingcat/claude-code from mirror 2026-04-26 00:46:45 +00:00
5e5b3bdbc6 roadmap: #247 filed — Visual-grounded voice input (image-content-block × audio-content-block fused on the SAME MessageRequest user-turn) typed taxonomy structurally absent — FIRST cluster member where TWO independent ALREADY-CATALOGUED-ABSENT modality-input axes (#220 image-content-block + #225 audio-content-block) are fused on the USER-INPUT side, FIRST cluster member with multi-modal-input-fusion-on-USER-INPUT-axis distinct from #244 bidirectional-tool-call-multiplexing-on-DUPLEX-axis, growing Cross-pinpoint-synthesis-fusion-shape META-cluster from 2 to 3 members (#238 founder + #244 + #247) confirming META-cluster as GROWING-DOCTRINE rather than CONTINUING-PATTERN that stopped at 2 members after #244, establishing Cross-pinpoint-synthesis-fusion as SECOND META-cluster after Tool-locality-axis to confirm GROWING-DOCTRINE status, founds Multi-modal-input-fusion-on-USER-INPUT-side sub-cluster + Cross-modal-attention-on-USER-INPUT-side cluster + Compound-modality-input-on-MessageRequest cluster as solo founder of all three, grows Two-member-major-provider-only-no-third-party-partner-set sub-cluster from 2 to 3 members (#240+#241+#247) confirming generalizability beyond bash+computer-use+text_editor three-tool-companion-bundle, twelve-layer fusion shape tied with #241 for largest single-pinpoint fusion catalogued — Jobdori cycle #390 / fast-forward-rebased onto gaebal-gajae's #246 provider-credentials-env-to-settings-registry pinpoint at bd6622b before filing (FIFTH consecutive concurrent-dogfood rebase cycle, directly demonstrating the gaps #239 catalogues at the dogfood-coordination layer and #243 catalogues at the canonical-ordering layer for the FIFTH cycle in a row, confirming concurrent-dogfood-rebase as a stable operational pattern)
bd6622b85c roadmap: #246 filed
d145429c96 roadmap: #245 filed
0eabf20389 roadmap: #244 filed — Realtime API tool-use over persistent-WebSocket transport (response.function_call_arguments.delta/.done + conversation.item.create with function_call_output) typed taxonomy structurally absent — FIRST cluster member where bidirectional-tool-call lifecycle is multiplexed with audio-modality + transcript-modality on a SINGLE persistent connection, FIRST cluster member where tool-call-init is server-pushed mid-stream rather than client-initiated, FIRST cluster member with asymmetric-tool-result-injection (tool-call comes IN as event-stream, result sent OUT as conversation.item.create — directionality inverted relative to the rest of the protocol), FIRST cluster member with per-call-id-concurrent-multiplexed-state-machine, FIRST three-axis-synthesis pinpoint (#229 persistent-WebSocket × #240/#241 server-managed-tool-via-tool_choice-discriminator × #238 cross-pinpoint-synthesis-fusion-shape META-cluster), eleven-layer fusion-shape tied with #240 for second-largest single-pinpoint fusion catalogued — grows Persistent-WebSocket-transport cluster from 2 to 3 members (#229 founder + #238 + #244) confirming CONTINUING-PATTERN doctrine, grows Cross-pinpoint-synthesis-fusion-shape META-cluster from 1 to 2 members confirming combinatorial-cross-axis-synthesis as a continuing-discovery-mode and FIRST META-cluster-confirmation event in this audit, founds Three-axis-synthesis-shape sub-cluster as solo founder, founds Server-pushed-tool-call-init cluster as solo founder, founds Asymmetric-tool-result-injection cluster as solo founder, founds Per-call-id-concurrent-multiplexed-state-machine cluster as solo founder — FOUR new clusters founded plus TWO existing META-clusters confirmed as continuing-doctrines plus participation in TWELVE inherited clusters — Jobdori cycle #389 / fast-forward-rebased onto gaebal-gajae's #243 non-monotonic-pinpoint-ordering-contract at 6541100 before filing (FOURTH consecutive concurrent-dogfood rebase cycle, directly demonstrating both gaps #239 catalogues at the dogfood-coordination layer and #243 catalogues at the canonical-ordering layer)
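The #244 entry above names the event types involved in the server-pushed tool-call lifecycle. A minimal sketch of the per-call-id multiplexed state machine it describes — only the event type names (response.function_call_arguments.delta/.done, conversation.item.create, function_call_output) come from the entry; the exact event field layout is an assumption for illustration:

```python
import json

class ToolCallMultiplexer:
    """Accumulates server-pushed tool-call argument deltas keyed by call_id.

    Event names follow the OpenAI Realtime API events cited in #244; the
    field layout here is an illustrative assumption, not claw-code's types."""

    def __init__(self):
        self.pending = {}  # call_id -> {"name": str, "args": [str fragments]}

    def on_event(self, event):
        etype = event["type"]
        if etype == "response.function_call_arguments.delta":
            slot = self.pending.setdefault(
                event["call_id"], {"name": event.get("name", ""), "args": []}
            )
            slot["args"].append(event["delta"])
            return None
        if etype == "response.function_call_arguments.done":
            slot = self.pending.pop(event["call_id"])
            return {"call_id": event["call_id"],
                    "name": slot["name"],
                    "arguments": json.loads("".join(slot["args"]))}
        return None

def function_call_output(call_id, output):
    """The result travels back as a client-initiated conversation.item.create --
    the inverted directionality #244 calls asymmetric-tool-result-injection."""
    return {"type": "conversation.item.create",
            "item": {"type": "function_call_output",
                     "call_id": call_id,
                     "output": json.dumps(output)}}
```

Because deltas for different call_ids can interleave on the single connection, the state must be keyed per call_id rather than held as one in-flight call.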
65411000c5 roadmap: #243 filed
Compare 27 commits »
d0zingcat synced commits to feat/jobdori-168c-emission-routing at d0zingcat/claude-code from mirror 2026-04-25 16:36:44 +00:00
d46c423c1d roadmap: #220 filed — Image/vision input is structurally impossible across the entire data model: zero image content-block taxonomy variant on InputContentBlock (types.rs:80-94 has only Text/ToolUse/ToolResult — three of three exhaustive variants, zero Image, zero Document, zero MediaType, zero ImageSource, zero base64/file_id slot, zero media_type field anywhere in rust/crates/api/src/), zero parse arm for /image <path> and /screenshot slash commands despite their advertised summaries ("Add an image file to the conversation" at commands/lib.rs:585, "Take a screenshot and add to conversation" at commands/lib.rs:578) being in the canonical SlashCommandSpec table since project inception, both gated under STUB_COMMANDS at main.rs:8381-8382 (UX patch over missing-feature, not missing-feature fix), ResolvedAttachment at tools/lib.rs:2660-2666 carries path/size/is_image triple but no bytes / no base64 / no media_type / no upload affordance / no transport-ready payload despite is_image_path at line 5276 correctly classifying png/jpg/jpeg/gif/webp/bmp/svg extensions and the SendUserMessage/Brief tool surfacing isImage: true in JSON envelope (asserted at line 8969); build_chat_completion_request (openai_compat.rs:845) and translate_message (openai_compat.rs:946) have three-arm exhaustive matches over Text/ToolUse/ToolResult with no Image arm and no {type: "image", source: {type: "base64", media_type, data}} Anthropic-canonical wire shape and no {type: "image_url", image_url: {url: "data:image/...;base64,..."}} OpenAI-compat wire shape; the markdown renderer at render.rs:379-426 handles Tag::Image and TagEnd::Image for *output* rendering (asymmetric capability — model emits image markdown → rendered as colored [image:url] link, user attaches image → silent black hole at API boundary); the runtime's own worker_boot test fixture at worker_boot.rs:1324+:1349 literally hard-codes "Explain this KakaoTalk screenshot for a friend" as the canonical task-classification example for worker prompt-mismatch recovery — claw-code uses screenshot analysis as a runtime-classifier signal while having zero capability to actually send a screenshot to the model; TUI-ENHANCEMENT-PLAN.md:57 backlogs the gap as "No image/attachment preview" but the gap is far worse than no preview — there is no transport, no codec, no envelope, no anything from the byte stream to the wire (Jobdori cycle #372 / extends #168c emission-routing audit / sibling-shape cluster grows to nineteen: #201/#202/#203/#206/#207/#208/#209/#210/#211/#212/#213/#214/#215/#216/#217/#218/#219/#220 / wire-format-parity cluster grows to ten: #211+#212+#213+#214+#215+#216+#217+#218+#219+#220 / capability-parity cluster (strict-superset including user-facing surfacing): #218+#220 / five-layer-structural-absence shape (data-model-variant + slash-command-parse-arm + attachment-metadata-threading + request-builder-translation + OS-integration-helper) is the largest single feature absence yet catalogued, exceeding #218's four-layer; advertised-but-unbuilt shape is novel — UX-layer cousin of #219's false-positive-opt-in shape — applicable to other STUB_COMMAND entries with capability-claim summaries / claw-code is the sole client/agent/CLI in the surveyed coding-agent ecosystem with zero image-input capability despite Anthropic Vision GA on 2024-03-04 (25 months ago at filing time, default-on for all Claude 3.5+ models with 5MB-per-image / 32MB-per-request / 100-images-per-request limits) and OpenAI Vision GA on 2024-05-13 (23 months ago) and Google Gemini multimodal GA on 2024-02-15 (26 months ago), making this a regression against the upstream claude-code CLI claw-code is porting from / external validation: Anthropic Vision API reference at platform.claude.com/docs/en/build-with-claude/vision documenting the canonical {type, source: {type, media_type, data}} content block, Anthropic Messages API reference, Anthropic Files API beta with file_id reference for repeated-image-use efficiency, AWS Bedrock prompt-caching docs with image-block coverage and 20-images-per-request stricter limit and same cachePoint:{} pattern from #219, OpenAI Vision API reference documenting the {type:image_url, image_url:{url}} data-URL shape used by GPT-4o/4o-mini/5-vision/o1-vision/o3-vision/DeepSeek-VL2/Qwen-VL/QwQ-VL/MiniMax-VL/Moonshot kimi-VL, Google Gemini multimodal API documenting {inline_data:{mime_type, data}} shape, anomalyco/opencode#16184 (look_at tool image-file-from-disk handling bug), anomalyco/opencode#15728 (Read tool image-handling bug), anomalyco/opencode#8875 (custom-provider attachment-allowlist gap), anomalyco/opencode#17205 (text-only-model token-burn on image attachment) — all four are integration-quality gaps in opencode while claw-code is missing the capability entirely (~85% vs 0% parity asymmetry, the largest in the cluster), charmbracelet/crush vision-input via terminal paste, simonw/llm --attachment flag, Vercel AI SDK experimental_attachments + image content blocks, LangChain HumanMessage content blocks, LangGraph image-message routing, OpenAI Python and Anthropic Python SDK first-class image-typed messages, anthropic-quickstarts vision examples, claude-code official CLI paste-image and screenshot shortcuts (the upstream this is a regression against), OpenTelemetry GenAI semconv gen_ai.input.attachments and gen_ai.input.images.count multimodal observability attributes, IANA MIME-type registry RFC 4288/4289)
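The #220 entry quotes the two canonical wire shapes the request builders lack. A hedged sketch of those two translation targets — the Anthropic base64 block and the OpenAI data-URL form — as documented in the references the entry cites; the helper names are hypothetical:

```python
import base64

def anthropic_image_block(data: bytes, media_type: str) -> dict:
    # Anthropic-canonical shape: {type: "image", source: {type: "base64", media_type, data}}
    return {"type": "image",
            "source": {"type": "base64",
                       "media_type": media_type,
                       "data": base64.b64encode(data).decode("ascii")}}

def openai_image_block(data: bytes, media_type: str) -> dict:
    # OpenAI-compat shape: {type: "image_url", image_url: {url: "data:...;base64,..."}}
    b64 = base64.b64encode(data).decode("ascii")
    return {"type": "image_url",
            "image_url": {"url": f"data:{media_type};base64,{b64}"}}
```

Either helper would slot in as the missing Image arm of the three-arm match over Text/ToolUse/ToolResult that #220 describes.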
2858aeccff roadmap: #219 filed — Anthropic prompt-caching opt-in is structurally impossible: cache_control marker has zero codebase footprint (rg returns 0 hits across rust/ src/ docs/ tests/) despite the wire-side beta header 'prompt-caching-scope-2026-01-05' being unconditionally enabled at every Anthropic request (telemetry/lib.rs:16,452,469 + anthropic.rs:1443); five cacheable surfaces are uniformly locked: pub system: Option<String> at types.rs:11 is a flat string with no array form so no system-block cache_control slot exists; InputContentBlock variants Text/ToolUse/ToolResult at types.rs:80-99 have no cache_control field; ToolResultContentBlock variants Text/Json at types.rs:100-103 have no cache_control field; ToolDefinition at types.rs:105-110 has no cache_control field; openai_compat path translate_message at openai_compat.rs:946 and build_chat_completion_request at openai_compat.rs:850 emit flat-string system+content with no cache_control or Bedrock cachePoint translation; ~600 LOC of response-side cache stats infrastructure (prompt_cache.rs PromptCacheStats / PromptCacheRecord / PromptCache trait) accumulates a zero stream because no payload was opted in, and four hardcoded zero-coercion sites (openai_compat.rs:477-478, 489-490, 597-598, 1211-1212) discard upstream cache stats from Bedrock/Vertex/kimi-anthropic-compat/MiniMax-relay even when emitted; integration test at client_integration.rs:88-89 asserts the beta header is sent but no companion test asserts payload contains a cache_control marker because the data structures cannot produce one — a uniquely paradoxical false-positive opt-in shape: wire signal advertises caching intent and data-model structurally precludes it (Jobdori cycle #371 / extends #168c emission-routing audit / sibling-shape cluster grows to eighteen: #201/#202/#203/#206/#207/#208/#209/#210/#211/#212/#213/#214/#215/#216/#217/#218/#219 / wire-format-parity cluster grows to nine: #211+#212+#213+#214+#215+#216+#217+#218+#219 / cost-parity cluster grows to seven: #204+#207+#209+#210+#213+#216+#219 — #219 is the dominant cost-parity miss, ~90% input-token-cost reduction unattainable / cache-parity request/response symmetry pair: #219 (request-side opt-in absent) + #213 (response-side stats absent on openai-compat lane) / five-surface uniform-structural-absence shape: system+tools+tool_choice+messages+tool_result_content all locked, with no extra_body escape hatch since cache_control is a per-block annotation not a top-level field / false-positive-opt-in shape: novel cluster member where wire signal says yes and structure says no / external validation: Anthropic prompt-caching reference at platform.claude.com/docs/en/build-with-claude/prompt-caching documenting cache_control: {type: ephemeral} on system/tools/messages/content blocks with 5-min default TTL and 1-hour optional TTL and 90% cost reduction on cache-read tokens, Anthropic Messages API reference documenting system: Vec<SystemBlock> array form as the cacheable shape, Bedrock prompt-caching docs documenting cachePoint: {} block form for Bedrock-anthropic relay, claudecodecamp.com analysis of how prompt caching actually works in Claude Code, xda-developers article documenting claude-code's cache-token-budget knob proving caching is actively engaged, anomalyco/opencode#5416 #14203 #16848 #17910 #20110 #20265 (cache-related issues and PR for system-prompt-split-for-cache-hit-rate optimization), opencode-anthropic-cache npm package as third-party plugin proving the ecosystem expectation, LangChain anthropicPromptCachingMiddleware as first-class JS wrapper, LiteLLM prompt-caching docs with single-line cache_control pass-through for Anthropic+Bedrock, Vercel AI SDK Anthropic provider providerOptions.anthropic.cacheControl, prompthub.us multi-provider comparison treating opt-in as documented baseline, portkey.ai gateway-level pass-through, mindstudio.ai cost-impact analysis, OpenTelemetry GenAI semconv gen_ai.usage.input_tokens.cached as documented attribute — claw is the sole client/agent/CLI in the surveyed coding-agent ecosystem with zero cache_control request-side opt-in capability despite shipping the eligibility beta header on every Anthropic request)
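The core of #219 is that cache_control is a per-block annotation, which in turn requires the array form of system that a flat Option<String> cannot carry. A minimal sketch of the opted-in request shape, following the Anthropic prompt-caching docs the entry cites; the model id and prompt text are illustrative only:

```python
def with_cache_breakpoint(block: dict) -> dict:
    # cache_control rides on individual content blocks, not on the request
    # top level -- hence there is no extra_body escape hatch (#219).
    # {"type": "ephemeral"} is the documented marker.
    marked = dict(block)  # non-mutating copy
    marked["cache_control"] = {"type": "ephemeral"}
    return marked

request = {
    "model": "claude-sonnet-4-5",  # illustrative model id
    "max_tokens": 1024,
    # The array form of `system` is the cacheable shape #219 notes is
    # missing; a flat string has nowhere to attach the marker.
    "system": [with_cache_breakpoint(
        {"type": "text", "text": "You are a code auditor."})],
    "messages": [{"role": "user", "content": "Audit types.rs"}],
}
```

The response-side stats infrastructure the entry mentions would only start accumulating nonzero cache reads once requests carry a marker like this.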
116a95a253 roadmap: #218 filed — MessageRequest has no response_format / output_config / seed / logprobs / top_logprobs / logit_bias / n / metadata fields (types.rs:6-36, thirteen fields, zero hits across rust/ for any of these); build_chat_completion_request (openai_compat.rs:845) writes thirteen optional fields and emits none of these on the wire; AnthropicClient::send_raw_request (anthropic.rs:466) renders same MessageRequest via render_json_body (telemetry/lib.rs:107) with same gaps; ChatMessage (openai_compat.rs:688) has three fields (role, content, tool_calls) and no refusal field despite the streaming-aggregator test at line 1781 explicitly including "refusal": null in test data — silent serde drop; ChunkDelta (openai_compat.rs:735) has same gap; OutputContentBlock (types.rs:147) has four variants (Text, ToolUse, Thinking, RedactedThinking) and no Refusal variant; MessageResponse.stop_reason (types.rs:127) has no slot for Anthropic's 2025-11+ stop_reason='refusal' value; net effect: claw cannot opt into OpenAI strict-schema constrained decoding (response_format json_schema, GA 2024-08), cannot opt into Anthropic GA structured outputs (output_config.format, GA 2025-11-13), cannot opt into legacy JSON mode (response_format json_object), cannot supply seed for reproducible sampling, cannot request logprobs/top_logprobs, cannot bias tokens via logit_bias, cannot request multiple completions via n, and silently discards every refusal string OpenAI emits when constrained decoding rejects a generation — refusals classified as Finished/success with empty content via #217 normalize_finish_reason mapping (Jobdori cycle #370 / extends #168c emission-routing audit / sibling-shape cluster grows to seventeen: #201/#202/#203/#206/#207/#208/#209/#210/#211/#212/#213/#214/#215/#216/#217/#218 / wire-format-parity cluster grows to eight: #211+#212+#213+#214+#215+#216+#217+#218 / four-layer-structural-absence shape: request-struct-field + request-builder-write + response-struct-field + content-block-taxonomy-variant, largest single-feature absence catalogued / external validation: OpenAI Structured Outputs guide, OpenAI Chat Completions API reference, Anthropic structured-outputs reference (GA 2025-11-13), Anthropic Messages API reference (stop_reason='refusal'), Vercel AI Gateway Anthropic structured outputs, Vercel AI SDK 6 generateObject + Zod, LangChain with_structured_output, simonw/llm --schema flag, charmbracelet/crush, anomalyco/opencode#10456 open feature request citing OpenAI Codex as reference, anomalyco/opencode#5639/#11357/#13618, OpenAI Codex CI/code-review cookbook, OpenRouter structured-outputs docs, OpenAI Python SDK client.beta.chat.completions.parse, OpenTelemetry GenAI semconv gen_ai.request.response_format + gen_ai.response.refusal)
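A sketch of the request-side opt-ins #218 says cannot be expressed. The response_format json_schema envelope follows the OpenAI Structured Outputs docs the entry cites; the schema content itself is invented for illustration:

```python
# OpenAI strict-schema constrained decoding (response_format json_schema,
# GA 2024-08 per #218). The envelope shape is documented; the audit_finding
# schema below is a made-up example.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "audit_finding",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {"file": {"type": "string"},
                           "line": {"type": "integer"}},
            "required": ["file", "line"],
            "additionalProperties": False,
        },
    },
}

# A few of the other absent MessageRequest fields #218 lists, as they
# would sit on the wire alongside the thirteen fields claw does write:
request_extras = {"response_format": response_format,
                  "seed": 42,          # reproducible sampling
                  "logprobs": True,    # token log-probabilities
                  "top_logprobs": 5}
```

With strict mode, a generation the decoder rejects comes back as a refusal string — exactly the value the missing refusal field would silently drop.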
91e290526a roadmap: #217 filed — normalize_finish_reason (openai_compat.rs:1389) is a two-arm match (stop→end_turn, tool_calls→tool_use) with a string-passthrough fallthrough that drops three of five OpenAI-spec finish reasons (length, content_filter, function_call); MessageResponse.stop_reason is Option<String> with no enum constraint; WorkerRegistry::observe_completion (worker_boot.rs:558) classifies failure on finish=='unknown'||finish=='error' only, so OpenAI/DeepSeek/Moonshot truncation (length) and content-policy refusal (content_filter) become WorkerStatus::Finished with success events; the streaming aggregator's tool-call-block-close branch at openai_compat.rs:537 keys on 'tool_calls' literal and never fires for legacy 'function_call' shape (Azure pre-2024-02-15 / DeepSeek pre-2025-08 / SiliconFlow / OpenRouter relays); Anthropic native path produces the canonical taxonomy correctly (Jobdori cycle #369 / extends #168c emission-routing audit / sibling-shape cluster grows to sixteen: #201/#202/#203/#206/#207/#208/#209/#210/#211/#212/#213/#214/#215/#216/#217 / wire-format-parity cluster grows to seven: #211+#212+#213+#214+#215+#216+#217 / classifier-leakage shape: response-side string mistranslation flows three layers deep into runtime classifier with two-literal-compare coverage / external validation: OpenAI Chat Completions API reference, Anthropic Messages API reference, OpenAI function_call deprecation notice, Azure OpenAI reference, DeepSeek/Moonshot/DashScope refs, anomalyco/opencode#19842, charmbracelet/crush typed enum, simonw/llm Reason enum, Vercel AI SDK FinishReason union, LangChain LengthFinishReasonError/ContentFilterFinishReasonError, semantic-kernel FinishReason enum, openai-python Literal type, OpenTelemetry GenAI gen_ai.response.finish_reasons spec)
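The two-arm match #217 describes can be contrasted with a total mapping over the five OpenAI-spec finish reasons. The targets for length and content_filter below are an assumed normalization (max_tokens is Anthropic's truncation stop_reason; refusal follows the taxonomy #218 mentions), sketched for illustration rather than taken from claw-code:

```python
# Five OpenAI-spec finish reasons; claw's two-arm match covers only the
# first two, string-passthrough for the rest (#217).
FINISH_MAP = {
    "stop": "end_turn",
    "tool_calls": "tool_use",
    "function_call": "tool_use",   # legacy pre-2024 shape, same semantics
    "length": "max_tokens",        # truncation must not classify as success
    "content_filter": "refusal",   # policy stop must not classify as success
}

def normalize_finish_reason(reason: str) -> str:
    # Collapse anything unrecognized to an explicit "unknown" so the
    # downstream classifier sees a closed set instead of leaked strings.
    return FINISH_MAP.get(reason, "unknown")
```

A closed output set means a classifier like WorkerRegistry::observe_completion can enumerate failure states exhaustively instead of comparing against two literals.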
ceb092abd7 roadmap: #216 filed — neither MessageRequest nor MessageResponse has any service_tier field; build_chat_completion_request (openai_compat.rs:845) writes thirteen optional fields (model, max_tokens/max_completion_tokens, messages, stream, stream_options, tools, tool_choice, temperature, top_p, frequency_penalty, presence_penalty, stop, reasoning_effort) and does not write service_tier; AnthropicClient::send_raw_request (anthropic.rs:466) renders the same MessageRequest struct via AnthropicRequestProfile::render_json_body (telemetry/lib.rs:107) which has no field for it either, only a per-client extra_body escape hatch (asymmetric — openai_compat path has zero hits for extra_body); ChatCompletionResponse / ChatCompletionChunk / OpenAiUsage all deserialize four fields each, dropping the upstream-echoed service_tier confirmation and the system_fingerprint reproducibility marker that OpenAI documents as the canonical "what backend served you" signal; claw cannot opt into OpenAI flex (~50% cheaper async batch — developers.openai.com/api/docs/guides/flex-processing), cannot opt into OpenAI priority (~1.5-2x premium SLA latency — developers.openai.com/api/docs/guides/priority-processing), cannot opt into Anthropic priority (auto/standard_only — platform.claude.com/docs/en/api/service-tiers), and cannot detect at the response layer whether a request was flex-served or silently upgraded to priority by a project-level default override (Jobdori cycle #368 / extends #168c emission-routing audit / sibling-shape cluster grows to fifteen: #201/#202/#203/#206/#207/#208/#209/#210/#211/#212/#213/#214/#215/#216 / wire-format-parity cluster grows to six: #211+#212+#213+#214+#215+#216 / cost-parity cluster grows to six: #204+#207+#209+#210+#213+#216 / three-dimensional-structural-absence shape: request-side write + response-side read + reproducibility marker, distinct from prior request-only #211#212 / response-only #207#213#214 / header-only #215 members / external validation: OpenAI flex/priority/scale-tier guides, OpenAI advanced-usage system_fingerprint guide, Anthropic service-tiers reference, OpenTelemetry GenAI semconv gen_ai.openai.request.service_tier + gen_ai.openai.response.service_tier + gen_ai.openai.response.system_fingerprint, anomalyco/opencode#12297, Vercel AI SDK serviceTier provider option, LangChain ChatOpenAI service_tier ctor param, LiteLLM service_tier pass-through, semantic-kernel OpenAIPromptExecutionSettings.ServiceTier, openai-python SDK client.chat.completions.create(service_tier=...) first-class kwarg, MiniMax/DeepSeek Anthropic-compat layer notes, badlogic/pi-mono#1381)
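A minimal sketch of both sides of the gap #216 describes — the request-side service_tier write and the response-side markers the deserializers drop. Field names follow the OpenAI references the entry cites; the concrete values are illustrative:

```python
def read_serving_markers(response: dict) -> dict:
    # The two response-side fields #216 says are silently dropped:
    # the upstream-echoed service_tier (did the request actually get
    # flex-served, or was it upgraded?) and the system_fingerprint
    # "what backend served you" reproducibility marker.
    return {"service_tier": response.get("service_tier"),
            "system_fingerprint": response.get("system_fingerprint")}

# Request-side opt-in write (illustrative request body):
request = {"model": "gpt-4o",
           "messages": [{"role": "user", "content": "hi"}],
           "service_tier": "flex"}

# Illustrative response body: tier echoed back differs from the opt-in,
# which is exactly the mismatch a response-side read would detect.
response = {"id": "chatcmpl-x",
            "service_tier": "default",
            "system_fingerprint": "fp_abc"}
markers = read_serving_markers(response)
```

Comparing request["service_tier"] with markers["service_tier"] is the detection step #216 says is impossible at the response layer today.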
Compare 15 commits »
d0zingcat synced commits to feat/jobdori-168c-emission-routing at d0zingcat/claude-code from mirror 2026-04-25 08:26:44 +00:00
dba4f281f0 roadmap: #205 filed — prunable worktree lifecycle audit trail missing, no creation timestamp, pinpoint ID, or doctor visibility (Q *YeonGyu Kim cycle #137 / Jobdori cycle #351)
1c59e869e0 roadmap: #204 filed — TokenUsage omits reasoning_tokens, reasoning models merge into output_tokens breaking cost parity (anomalyco/opencode #24233 parity gap, Jobdori cycle #336)
Compare 2 commits »
d0zingcat synced commits to main at d0zingcat/d0zingcat from mirror 2026-04-25 07:26:44 +00:00
b5030687dd Updated with Dev Metrics
77ec8487af Update update_hi.yaml
efdbe106fc enable wakatime stats
0188ac7f14 Update README.md
Compare 4 commits »
d0zingcat synced commits to feat/jobdori-168c-emission-routing at d0zingcat/claude-code from mirror 2026-04-25 00:26:43 +00:00
604bf389b6 roadmap: #203 filed — AutoCompactionEvent summary-only, no SSE event emitted mid-turn when auto-compaction fires (Jobdori cycle #136)
0730183f35 roadmap: #202 filed — sanitize_tool_message_pairing silent drop, no tool_message_dropped event (Jobdori cycle #135)
5e0228dce0 roadmap: #201 filed — parse_tool_arguments silent fallback, no tool_arg_parse_error event (Jobdori cycle #134)
b780c808d1 roadmap: #200 filed — SCHEMAS.md self-documenting drift, no derive-from-source enforcement (Q *YeonGyu Kim cycle #304)
Compare 4 commits »
d0zingcat synced commits to feat/jobdori-168c-emission-routing at d0zingcat/claude-code from mirror 2026-04-24 16:16:44 +00:00
6948b20d74 roadmap: #199 filed — claw config JSON envelope omits deprecated_keys, merged_keys count-only, no automation path (Jobdori cycle #133)
d0zingcat synced commits to feat/jobdori-168c-emission-routing at d0zingcat/claude-code from mirror 2026-04-24 08:16:43 +00:00
c48c9134d9 roadmap: #198 filed — MCP approval-prompt opacity, no blocked.mcp_approval state, pane-scrape required (gaebal-gajae cycle #135 / Jobdori cycle #248)
215318410a roadmap: #197 filed — enabledPlugins deprecation no migration path, warning on every invocation (Jobdori cycle #132)
Compare 2 commits »
d0zingcat synced commits to feat/jobdori-168c-emission-routing at d0zingcat/claude-code from mirror 2026-04-24 00:06:45 +00:00
59acc60eb5 roadmap: Doctrine #35 formalized — disk-truth wins over verbal drift during taxonomy disputes (Jobdori cycle #194)
3497851259 roadmap: #196 filed — local branch namespace accumulation, no lifecycle cleanup or doctor visibility (Jobdori cycle #131)
d93957de35 roadmap: #195 filed — worktree-age opacity, no timestamp or doctor signal (Jobdori cycle #130)
86e88c2fcd roadmap: #194 filed — prunable-worktree accumulation, no doctor visibility or auto-prune lifecycle
Compare 4 commits »
d0zingcat synced new reference feat/jobdori-168c-emission-routing to d0zingcat/claude-code from mirror 2026-04-23 07:46:44 +00:00