Drop legacy runtime paths and role assignments across backend and frontend, and add an upgrade-safe DB migration for existing installs. This aligns config, docs, tests, and the UI with the agent-first architecture, with codex as the only alternate engine.
- Replace default Vite favicon and title with project-specific branding
- Add axios response interceptor to handle 401 by clearing token and redirecting to login
- Move health check endpoint from '/' to '/api/health' so SPA index.html is served on root
- Integrate next-themes ThemeProvider with system preference detection and manual toggle
- Update docker-compose and k8s health check paths accordingly
- Replace hardcoded dark-only colors with semantic CSS variable tokens for theme compatibility
- Delete snapshot refs (refs/reviewed/pr/{n}/*) when PR is closed or merged
- Add daily 2:00 AM scheduled cleanup for mirrors/workspaces older than 3 days
- Expose deleteReviewedRefs, getMirrorPath, cleanStaleMirrors on LocalRepoManager
Save baseSha + headSha as git refs (refs/reviewed/pr/{n}/base and
refs/reviewed/pr/{n}/head) after each successful PR review. On
subsequent reviews, compare the saved baseSha with the current baseSha
to decide between an incremental (two-dot diff) and a full (three-dot
diff) review. Fall back to a full review only when the PR base changes
(rebase scenario). Custom refs are protected from `fetch --prune` via a
negative refspec.
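The ref bookkeeping above can be sketched with plain git commands (the PR number and the scratch-repo setup are illustrative, not the real review flow):

```shell
# Scratch repo demonstrating the snapshot refs (PR number and setup are illustrative).
repo=$(mktemp -d) && cd "$repo" && git init -q
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "base"
git update-ref refs/reviewed/pr/1/base "$(git rev-parse HEAD)"
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "head"
git update-ref refs/reviewed/pr/1/head "$(git rev-parse HEAD)"

# Negative refspec (git >= 2.29): `git fetch --prune` no longer deletes
# anything under refs/reviewed/*.
git config --add remote.origin.fetch '^refs/reviewed/*'

# Incremental review = two-dot diff from the previously reviewed head.
git diff refs/reviewed/pr/1/base refs/reviewed/pr/1/head --stat
```

Negative refspecs require git 2.29 or newer; on older clients the snapshot refs would have to be re-created after every pruning fetch.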
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)
Add a new Codex-based review engine that runs OpenAI Codex CLI in
full-auto mode with a Streamable HTTP MCP server providing Gitea
review tools (get_pr_info, add_review_comment, add_review_summary,
get_file_content). Includes incremental review support via
lastReviewedHead in MCP context and review prompt.
Remove the per-provider listModels API (GET /providers/:id/models) and all
four provider implementations (OpenAI Compatible, OpenAI Responses, Anthropic,
Gemini). ModelCombobox now only shows tokenlens suggestions (tagged '推荐',
"recommended") plus free-form custom input; unfiltered 'API' models from
provider SDKs no longer appear.
Fixes: switching provider type in ProviderDialog no longer shows stale models
from the original provider's API.
Add a GET /llm/model-suggestions endpoint that maps ProviderType to models.dev
provider keys and returns chat model IDs from the tokenlens catalog. The
catalog is lazy-loaded on the first request to avoid empty results when the
engine hasn't started yet.
Frontend ModelCombobox now fetches suggestions via useQuery with 30min cache
instead of reading from hardcoded MODEL_SUGGESTIONS constant.
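A minimal sketch of the lazy-loading pattern described above (the catalog contents and the provider-key mapping are stand-ins, not the real tokenlens data):

```typescript
type Catalog = Record<string, string[]>;

let catalog: Catalog | null = null;

async function loadCatalog(): Promise<Catalog> {
  // Stand-in for fetching the tokenlens / models.dev catalog.
  return { openai: ["gpt-4o", "gpt-4o-mini"], anthropic: ["claude-sonnet-4"] };
}

// Map an internal ProviderType to a catalog provider key (illustrative).
const PROVIDER_KEY: Record<string, string> = {
  "openai-compatible": "openai",
  anthropic: "anthropic",
};

export async function modelSuggestions(providerType: string): Promise<string[]> {
  catalog ??= await loadCatalog(); // lazy-load on the first request only
  const key = PROVIDER_KEY[providerType];
  return key ? (catalog[key] ?? []) : [];
}
```

An unknown provider type falls through to an empty list rather than an error, matching the combobox's "suggestions plus free-form input" behavior.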
Implement TriageAgent with heuristic fast path (skip trivial changes like
lockfiles, CI configs, docs-only) and LLM fallback via chatForRole('planner').
Orchestrator now runs triage before specialist dispatch, only invoking agents
for relevant domains instead of all 4 specialists on every change.
Uses the pre-reserved 'planner' model role that was defined in the DB schema and
frontend UI but never wired to backend logic.
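The heuristic fast path might look like this (the file patterns are illustrative, not the actual TriageAgent rules):

```typescript
// Files that never need a specialist review: lockfiles, CI configs, docs.
const TRIVIAL = [
  /(^|\/)package-lock\.json$/,
  /(^|\/)yarn\.lock$/,
  /(^|\/)bun\.lockb?$/,
  /^\.github\/workflows\//,
  /\.md$/,
];

function isTrivial(path: string): boolean {
  return TRIVIAL.some((re) => re.test(path));
}

// Heuristic fast path: skip the LLM entirely when every change is trivial.
export function triageFast(changedFiles: string[]): "skip" | "llm" {
  return changedFiles.every(isTrivial) ? "skip" : "llm";
}
```

Only when the fast path returns "llm" does the orchestrator spend a chatForRole('planner') call deciding which specialist domains are relevant.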
Add LLMSemaphore for concurrency control (default 4) and retryWithBackoff
with exponential backoff respecting 429 retryAfterSeconds. Wrap all
LLMGateway calls (chatForRole, chatDirect, embedForRole) via withResilience.
New config fields: LLM_MAX_CONCURRENT_CALLS, LLM_RETRY_MAX_ATTEMPTS,
LLM_RETRY_BASE_DELAY_MS, ENABLE_TRIAGE.
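A compact sketch of both primitives (names mirror the commit; the internals and defaults are assumptions tied to the config fields above):

```typescript
// Bounded concurrency for LLM calls; default 4 per LLM_MAX_CONCURRENT_CALLS.
export class LLMSemaphore {
  private waiters: (() => void)[] = [];
  constructor(private slots = 4) {}

  async acquire(): Promise<void> {
    if (this.slots > 0) {
      this.slots--;
      return;
    }
    await new Promise<void>((resolve) => this.waiters.push(resolve));
  }

  release(): void {
    const next = this.waiters.shift();
    if (next) next(); // hand the slot straight to the next waiter
    else this.slots++;
  }
}

export async function retryWithBackoff<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err: any) {
      if (attempt >= maxAttempts) throw err;
      // A 429 may carry a retry-after hint; otherwise back off exponentially.
      const delayMs =
        err?.retryAfterSeconds != null
          ? err.retryAfterSeconds * 1000
          : baseDelayMs * 2 ** (attempt - 1);
      await new Promise((r) => setTimeout(r, delayMs));
    }
  }
}
```

withResilience would then compose the two: acquire a slot, run the retry loop, release in a finally block.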
After migrating config to the DB, values changed via the Web UI were not
picked up by consumers that cached config at module load time.
- gitea.ts: replace static axios.create() with request interceptors that
read config.gitea.apiUrl and accessToken on every request
- feishu.ts: remove constructor caching of webhookUrl/webhookSecret,
read from config.feishu.* on each sendMessage() call
- engine.ts: create SandboxExec/LocalRepoManager/DiffExtractor/Orchestrator
per review run instead of once at class init, so workdir/token/limits
always reflect current config. FileReviewStore stays singleton (has state).
- index.ts: wrap JWT middleware in per-request handler so config.admin.jwtSecret
is read dynamically instead of captured once at startup
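The underlying fix is the same in every file: resolve settings inside each call instead of capturing them once. A minimal illustration (the config shape and names here are assumptions, not the real module):

```typescript
export const config = {
  gitea: { apiUrl: "https://gitea.local/api/v1", accessToken: "t0" },
};

// Broken: values captured once at module load; later Web UI edits are invisible.
// const client = makeClient(config.gitea.apiUrl, config.gitea.accessToken);

// Fixed: resolve the settings inside every request, as the interceptors do.
export async function giteaRequest(path: string): Promise<Request> {
  const { apiUrl, accessToken } = config.gitea; // re-read on each call
  return new Request(`${apiUrl}${path}`, {
    headers: { Authorization: `token ${accessToken}` },
  });
}
```

The same reasoning explains why FileReviewStore stays a singleton: it holds state, so recreating it per run would lose data rather than refresh config.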
Replace env-var based config with DB-first approach (Portainer model).
Only PORT, DATABASE_PATH, and MASTER_KEY_PATH remain as env vars.
All other settings (Gitea, Feishu, security, review engine, memory) are
managed through the Admin Dashboard Web UI backed by system_settings table.
- Rewrite ConfigManager.getRawValue() to read from settingsRepo with
  fallback to compiled-in defaults (no more process.env reads)
- seedDefaults() auto-generates JWT_SECRET and WEBHOOK_SECRET on first boot
- getSource() returns 'db' | 'default' (removed 'env' source type)
- Merged 'app'+'admin' config groups into 'security' group
- Removed PORT from CONFIG_FIELDS (env-var only)
- Removed readonly/readonlyWarning from all field definitions
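The DB-first lookup can be sketched like this (settingsRepo and the defaults table are stand-ins for the real repository and CONFIG_FIELDS):

```typescript
const DEFAULTS: Record<string, string> = { "review.maxFiles": "50" };

// Stands in for the system_settings table behind settingsRepo.
export const settingsRepo = new Map<string, string>();

export function getRawValue(key: string): { value: string; source: "db" | "default" } {
  const fromDb = settingsRepo.get(key);
  return fromDb !== undefined
    ? { value: fromDb, source: "db" }
    : { value: DEFAULTS[key] ?? "", source: "default" };
}
```

Dropping the 'env' branch is what makes the two-value source type possible: every key is either an admin override in the DB or a compiled-in default.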
Fix 13 pre-existing test failures caused by the SpecialistAgent constructor
signature change during the LLMGateway migration. Replace the raw OpenAI
client mock with a gateway mock returning normalized LLMChatResponse objects.
Update assertions for gateway request format (responseFormat, providerOptions)
and LLMMessage shape (toolCallId instead of tool_call_id).
Strip OpenAI-specific settings (apiKey, baseUrl, model) and per-role model
overrides from config schema — these are now managed through the database
via the LLM provider UI. Simplify config-manager and its tests accordingly.
Keep only runtime settings (port, webhookSecret, etc.) in env/config.
Replace all direct OpenAI client usage in review agents, orchestrator,
learning system, and AI review service with the new LLMGateway abstraction.
Agents now call gateway.chatForRole() instead of openai.chat.completions.create(),
enabling multi-provider support across all review workflows. Add getAll()
method to ToolRegistry for provider capability checking.
Add REST endpoints under /admin/api/llm/ for provider CRUD, API key
management, role assignments, connection testing, and model listing.
Register routes in index.ts with JWT authentication middleware. Initialize
master key and database on server startup.
Add bun:sqlite-based database with automatic migration system. Includes
repositories for LLM providers (CRUD), model-role assignments, encrypted
API key secrets (AES-256-GCM via master.key), and system settings.
Single-file DB at data/assistant.db.
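The secret-sealing half can be sketched with node:crypto; the iv|tag|ciphertext layout below is an assumption about the stored format, not the actual one:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// AES-256-GCM sealing as described; master.key loading is omitted.
export function seal(masterKey: Buffer, plaintext: string): Buffer {
  const iv = randomBytes(12); // 96-bit nonce, the standard size for GCM
  const cipher = createCipheriv("aes-256-gcm", masterKey, iv);
  const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), ct]); // iv(12) | tag(16) | ciphertext
}

export function open(masterKey: Buffer, sealed: Buffer): string {
  const iv = sealed.subarray(0, 12);
  const tag = sealed.subarray(12, 28);
  const ct = sealed.subarray(28);
  const d = createDecipheriv("aes-256-gcm", masterKey, iv);
  d.setAuthTag(tag); // decryption throws if the ciphertext was tampered with
  return Buffer.concat([d.update(ct), d.final()]).toString("utf8");
}
```

GCM's auth tag gives tamper detection for free, which matters when the sealed blobs sit in the same SQLite file as everything else.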
Introduce provider-agnostic LLM gateway supporting 4 provider types:
OpenAI Compatible, OpenAI Responses API, Anthropic Messages API, and
Google Gemini API. Each provider normalizes to a unified LLMChatResponse
format with tool call support. Includes AES-256-GCM encrypted secret
management for API keys and a tool-converter for cross-provider tool
format translation.
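The normalization idea, reduced to two adapters: the unified field names are illustrative, while the provider payload shapes follow the public OpenAI chat-completions and Anthropic Messages response formats.

```typescript
type LLMChatResponse = {
  content: string;
  toolCalls: { name: string; args: unknown }[];
};

// OpenAI-compatible: text in choices[0].message, tool args as a JSON string.
export function fromOpenAI(res: any): LLMChatResponse {
  const msg = res.choices[0].message;
  return {
    content: msg.content ?? "",
    toolCalls: (msg.tool_calls ?? []).map((t: any) => ({
      name: t.function.name,
      args: JSON.parse(t.function.arguments),
    })),
  };
}

// Anthropic Messages: content is a block list mixing text and tool_use.
export function fromAnthropic(res: any): LLMChatResponse {
  const text = res.content
    .filter((b: any) => b.type === "text")
    .map((b: any) => b.text)
    .join("");
  const toolCalls = res.content
    .filter((b: any) => b.type === "tool_use")
    .map((b: any) => ({ name: b.name, args: b.input }));
  return { content: text, toolCalls };
}
```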
Add GLOBAL_PROMPT config field that appends user-defined instructions to
every LLM system message across all 9 call sites (legacy engine, agent
specialist, reflexion, critic, and debate orchestrator).
Configured via admin dashboard (auto-rendered from CONFIG_FIELDS metadata)
or the GLOBAL_PROMPT env var. Example use: "请始终使用中文回复" ("always reply in Chinese").
Changes:
- Add GLOBAL_PROMPT to Zod schema, AppConfig interface, and buildConfig
- Add CONFIG_FIELDS metadata (group: openai, type: text)
- Add getEffectiveValue switch case
- Add withGlobalPrompt() helper in src/utils/global-prompt.ts
- Inject into all LLM call sites via withGlobalPrompt wrapper
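A sketch of the helper; the real signature in src/utils/global-prompt.ts may differ:

```typescript
export const config = { globalPrompt: "" }; // stands in for the GLOBAL_PROMPT field

type LLMMessage = { role: "system" | "user" | "assistant"; content: string };

// Append the user-defined global instructions to every system message.
export function withGlobalPrompt(messages: LLMMessage[]): LLMMessage[] {
  const extra = config.globalPrompt.trim();
  if (!extra) return messages; // no-op when the field is unset
  return messages.map((m) =>
    m.role === "system" ? { ...m, content: `${m.content}\n\n${extra}` } : m,
  );
}
```

Wrapping the message array once per call site keeps all 9 injection points to a one-line change.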
Frontend sends entire form state including readonly fields (PORT,
WEBHOOK_SECRET, JWT_SECRET). Previously the backend rejected the whole
request. Now readonly fields are silently skipped.
Atomic rename (temp→target) fails on K8s volumes with EBUSY/EXDEV/EROFS.
Fall back to direct writeFile when rename fails, with best-effort
cleanup of orphaned temp files.
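The fallback described above, as a sketch (the error-code list comes from the message; the real helper's name is an assumption):

```typescript
import { rename, unlink, writeFile } from "node:fs/promises";

// Atomic rename when the filesystem allows it, direct write when a K8s
// volume refuses the rename with EBUSY/EXDEV/EROFS.
export async function safeWrite(target: string, data: string): Promise<void> {
  const tmp = `${target}.tmp-${process.pid}`;
  await writeFile(tmp, data);
  try {
    await rename(tmp, target); // atomic on normal filesystems
  } catch (err: any) {
    if (!["EBUSY", "EXDEV", "EROFS"].includes(err?.code)) throw err;
    await writeFile(target, data); // non-atomic fallback
    await unlink(tmp).catch(() => {}); // best-effort temp cleanup
  }
}
```

The fallback trades atomicity for availability: a crash mid-write can leave a partial file, but the write no longer fails outright on those volumes.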
- Add complete finding JSON schema (all required fields) to both legacy
and ReAct system prompts to prevent malformed responses
- Change JSON parse error handling from break (abandon review) to
injecting a guidance message that prompts the model to return valid JSON
- Add global prompt injection support via withGlobalPrompt helper
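The parse-error recovery might be structured like this (the guidance wording and names are illustrative):

```typescript
type Msg = { role: "system" | "user" | "assistant"; content: string };

// On a JSON parse failure, push a corrective message instead of aborting.
export function handleModelReply(reply: string, messages: Msg[]): unknown {
  try {
    return JSON.parse(reply);
  } catch {
    messages.push({
      role: "user",
      content:
        "Your previous reply was not valid JSON. Reply again with ONLY a JSON object matching the finding schema.",
    });
    return null; // caller loops for another model turn
  }
}
```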
The --type-add and --type options were placed after the path argument,
causing ripgrep to treat them as additional paths rather than flags.
Move the option flags before the -e pattern and path arguments.
Remove all isDev logic from review controller and config manager.
The isDev check treated a missing NODE_ENV as development, causing
production to use a hardcoded fake commit SHA and skip real reviews.
Config validation now always fails fast on invalid configuration.
- Add @biomejs/biome as dev dependency
- Remove deprecated tslint dependency
- Add biome.json with project-specific rules
- Update lint script to use Biome
- Apply Biome auto-fixes across codebase
Add /admin/api/config routes for runtime configuration:
- GET /: Retrieve all config groups with field metadata and values
- PUT /: Validate and persist configuration overrides
- POST /reset: Reset specified keys to defaults (remove overrides)
Features:
- Sensitive field masking (passwords, secrets, API keys)
- Field validation (URL, enum, number range, boolean)
- Readonly field protection
- Grouped field organization with metadata
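The sensitive-field masking can be as simple as the sketch below (the key patterns and the mask shape are assumptions):

```typescript
// Keys whose values must never be returned verbatim by GET /admin/api/config.
const SENSITIVE = /(password|secret|api[_-]?key|token)/i;

export function maskValue(key: string, value: string): string {
  if (!SENSITIVE.test(key) || value === "") return value;
  // Keep a short suffix so admins can tell which secret is currently set.
  return `****${value.slice(-4)}`;
}
```

On PUT, the matching rule runs in reverse: a value that still looks masked is treated as "unchanged" rather than persisted.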