Release v0.2.1 #2
Conversation
- Add CHANGELOG entry for v0.2.1 (unreleased)
- Add GitHub CI workflow (build + test)
- Add PR template
- Add CODEOWNERS file
- Add branch protection documentation

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <[email protected]>
- Enable automated PR reviews for all pull requests
- Configure path filters to ignore dist/node_modules
- Enable high-level summaries and review status

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <[email protected]>
…nated with orchestrator
Walkthrough
Introduces orchestrator wakeup flow (worker → bridge → message-bus → orchestrator) with endpoints, history, and tools; adds vision analyzer and image routing, repo-context utilities, worker-pool/spawner redesign, passthrough and progress systems, memory injection/trim features, schema/config/profile updates, new UX tools, tests, and governance/CI/docs.
Changes
Sequence Diagram(s)
```mermaid
sequenceDiagram
actor Worker
participant Bridge as BridgeServer (/v1/wakeup)
participant MB as MessageBus
participant Orch as Orchestrator (Session)
participant Tools as UI/Toasts
Worker->>Bridge: POST /v1/wakeup { workerId, reason, jobId?, summary?, data? }
Bridge->>MB: messageBus.wakeup(payload + timestamp)
MB->>MB: dedupe/store capped WakeupEvent history
MB->>Orch: notify onWakeup listeners (async)
Orch->>Orch: check ui.wakeupInjection, cooldown, session context
alt injection allowed
Orch->>Orch: inject resume/wakeup prompt into active session (promptAsync)
Orch->>Tools: show toast + notification
else injection skipped
Orch->>Tools: show toast only
end
```
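For orientation, a worker-side call matching this flow might look like the sketch below. The bridge URL and the payload typing beyond the fields named in the diagram are assumptions, not taken from the PR.

```typescript
// Hedged sketch of a worker signalling the orchestrator via the bridge.
// Field names mirror the diagram above; the bridge URL is an assumed example.
interface WakeupPayloadSketch {
  workerId: string;
  reason: "result_ready" | "needs_attention" | "error" | "progress" | "custom";
  jobId?: string;
  summary?: string;
  data?: Record<string, unknown>;
}

async function signalOrchestrator(bridgeUrl: string, payload: WakeupPayloadSketch): Promise<void> {
  // POST /v1/wakeup { workerId, reason, jobId?, summary?, data? }
  const res = await fetch(`${bridgeUrl}/v1/wakeup`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  if (!res.ok) throw new Error(`wakeup failed: ${res.status}`);
}

// Usage sketch (port is illustrative):
// await signalOrchestrator("http://127.0.0.1:4096", {
//   workerId: "docs", reason: "result_ready", jobId: "job-1", summary: "Research complete",
// });
```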
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 3
🧹 Nitpick comments (3)
CHANGELOG.md (1)
8-19: Consider documenting the runtime shutdown fix.
The commit messages mention "fix issue with instance tracking causing sessions not to be terminated with orchestrator." This behavioral change (graceful shutdown orchestration with timeout handling) should be captured under the `### Fixed` section before release.
🔎 Suggested entry
```diff
 ### Fixed
-- _Document bug fixes here_
+- Fixed instance tracking issue causing sessions not to terminate properly with orchestrator shutdown
```
.github/workflows/ci.yml (2)
15-17: Consider pinning the Bun version for reproducible CI.
Using `bun-version: latest` may cause unexpected CI failures if Bun releases a breaking change. Consider pinning to a specific version for stability.
🔎 Suggested change
```diff
 - uses: oven-sh/setup-bun@v2
   with:
-    bun-version: latest
+    bun-version: "1.1.38" # or your current version
```
Also applies to: 33-35
9-41: Optional: Reduce duplication by running build and test in a single job.
Both jobs independently checkout and install dependencies. For a smaller project, combining them into one job reduces CI time (no duplicate install) while still providing clear step-level visibility.
This is purely optional—separate jobs provide better parallelism and clearer failure isolation.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- .coderabbit.yaml
- .github/BRANCH_PROTECTION.md
- .github/CODEOWNERS
- .github/pull_request_template.md
- .github/workflows/ci.yml
- CHANGELOG.md
- src/core/runtime.ts
🧰 Additional context used
🧬 Code graph analysis (1)
src/core/runtime.ts (1)
src/core/registry.ts (1)
registry(214-214)
🪛 LanguageTool
.github/pull_request_template.md
[style] ~7-~7: Consider using a different verb for a more formal wording.
Context: ...- [ ] Bug fix (non-breaking change that fixes an issue) - [ ] New feature (non-breaki...
(FIX_RESOLVE)
🪛 markdownlint-cli2 (0.18.1)
.github/BRANCH_PROTECTION.md
79-79: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🔇 Additional comments (4)
.github/pull_request_template.md (1)
1-26: LGTM!
Well-structured PR template covering essential sections. The checklist appropriately references the project's tooling (`bun test`).
.github/BRANCH_PROTECTION.md (1)
1-111: LGTM overall!
Comprehensive branch protection documentation with clear setup instructions, CLI commands for automation, and alignment with the CI workflow's `build` and `test` job names.
src/core/runtime.ts (2)
18-47: LGTM - Well-designed graceful shutdown with timeout fallback.
The shutdown orchestration properly:
- Uses promise deduplication to ensure idempotent shutdown
- Races graceful shutdown against a 6-second timeout
- Falls back to SIGKILL for unresponsive workers
- Closes the bridge server
One minor note: after the timeout, SIGKILL is sent to all captured workers regardless of whether they already shut down gracefully. This is harmless (the error is caught) but slightly wasteful. Consider tracking which workers completed shutdown if this becomes a performance concern.
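A minimal sketch of the pattern described above, with hypothetical stand-ins for the real worker and bridge handles:

```typescript
// Minimal sketch of idempotent shutdown with a timeout race and SIGKILL fallback.
// The three helpers are hypothetical stand-ins, not the project's actual functions.
const shutdownAllWorkers = async (): Promise<void> => { /* gracefully stop each worker */ };
const killRemainingWorkers = (signal: NodeJS.Signals): void => { /* process.kill fallback */ };
const closeBridge = async (): Promise<void> => { /* stop the bridge HTTP server */ };

let shutdownPromise: Promise<void> | null = null;

function runShutdown(timeoutMs = 6000): Promise<void> {
  // Promise deduplication keeps shutdown idempotent across repeated calls.
  if (shutdownPromise) return shutdownPromise;
  shutdownPromise = (async () => {
    const timedOut = Symbol("timeout");
    const winner = await Promise.race([
      shutdownAllWorkers().then(() => "graceful" as const),
      new Promise<typeof timedOut>((resolve) => setTimeout(() => resolve(timedOut), timeoutMs)),
    ]);
    if (winner === timedOut) killRemainingWorkers("SIGKILL"); // escalate for unresponsive workers
    await closeBridge();
  })();
  return shutdownPromise;
}
```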
64-78: LGTM - Signal handling is correct.
The signal handlers properly:
- Use correct exit codes (130 for SIGINT, 143 for SIGTERM per POSIX conventions)
- Prevent double-handling via `shutdownRequested` flag
- Ensure cleanup runs before exit via async IIFE
- Handle `beforeExit` for normal process termination
Actionable comments posted: 6
🧹 Nitpick comments (8)
docs/configuration.md (1)
41-41: Minor: Hyphenate compound adjective.
The heading "Full Featured Config" should be "Full-Featured Config" when the compound adjective precedes the noun.
Suggested fix
```diff
-### Full Featured Config
+### Full-Featured Config
```
src/index.ts (1)
318-322: Consider session selection strategy for wakeup injection.
The current implementation always updates `currentOrchestratorSessionId` to the most recent orchestrator agent message, meaning wakeup injections will target the latest active orchestrator session. This could lead to injections into unintended sessions if a user has multiple orchestrator sessions or switches contexts quickly.
Consider whether the current "last-wins" approach is intended, or whether a more sophisticated session selection strategy (e.g., matching jobId to the originating session) would be more appropriate.
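One way to realize the jobId-based strategy is sketched below; the job-to-session bookkeeping shown here is illustrative, not code from this PR:

```typescript
// Illustrative sketch: route a wakeup to the session that started the job,
// falling back to the most recent orchestrator session ("last-wins").
const sessionByJobId = new Map<string, string>();
let currentOrchestratorSessionId: string | undefined;

function rememberJob(jobId: string, sessionId: string): void {
  sessionByJobId.set(jobId, sessionId);
}

function resolveTargetSession(event: { jobId?: string }): string | undefined {
  if (event.jobId && sessionByJobId.has(event.jobId)) {
    return sessionByJobId.get(event.jobId); // prefer the originating session
  }
  return currentOrchestratorSessionId; // fallback: latest active orchestrator session
}
```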
src/core/runtime.ts (2)
19-50: Graceful shutdown with timeout looks solid.
The race pattern between `shutdownAllWorkers()` and the timeout is appropriate. A few observations:
- Line 23: The snapshot of workers is taken before `shutdownAllWorkers()`, but workers could theoretically be added during shutdown. This is likely fine since shutdown is typically terminal.
- Lines 30-40: The force-kill loop iterates over the original `workers` snapshot but checks `worker.shutdown` to identify workers that have a shutdown method. The condition `worker.shutdown && typeof worker.pid === "number"` seems inverted for the intent—workers with a shutdown method are being force-killed, but those are the ones that should have been gracefully shut down. Consider checking for workers that are still alive instead.
🔎 Consider checking if worker process is still alive before force-killing
```diff
 if (!finished) {
   for (const worker of workers) {
-    if (worker.shutdown && typeof worker.pid === "number") {
+    if (typeof worker.pid === "number") {
       try {
         // Try process-group kill first (covers grand-children if worker was spawned detached).
         if (process.platform !== "win32") process.kill(-worker.pid, "SIGKILL");
         else process.kill(worker.pid, "SIGKILL");
       } catch {
         // ignore
       }
     }
   }
 }
```
177-229: `cleanupOrphanedServeProcesses` performs system-wide process scanning—verify safety.
The `ps aux` parsing approach is fragile:
Line 192: The pattern
"opencode serve --hostname"might not match if arguments are reordered or have different formatting.Line 197: Assumes
ps auxoutput format where PID is in column index 1. This may vary across Unix variants (e.g., BSD vs GNU).Lines 206-224: The logic skips processes with live parents, but a process could be adopted by init (PID 1) if its parent died—the current check at line 215 (
ppid > 1) handles this correctly.Consider using
pgrepfor more reliable process matching, or document the expectedpsoutput format:pgrep -f "opencode serve --hostname"src/core/message-bus.ts (1)
99-112: Non-null assertion ingetWakeupHistoryis safe but could be cleaner.Line 107:
options.after!uses a non-null assertion inside a block guarded byif (options?.after). This is safe since the condition guaranteesafteris truthy, but TypeScript's narrowing doesn't propagate into the callback.🔎 Extract to a const for cleaner code
if (options?.after) { - events = events.filter((e) => e.timestamp > options.after!); + const afterTs = options.after; + events = events.filter((e) => e.timestamp > afterTs); }src/workers/spawner.ts (2)
73-89:checkWorkerBridgeToolshas fragile SDK error handling.Lines 78-85: The error extraction chain (
sdkError?.data?.message ?? sdkError?.message ?? ...) is defensive but verbose. The SDK's error shape should ideally be typed.Line 77: The cast
as anyon the query parameter suggests a type mismatch with the SDK. This could hide legitimate type errors.Consider creating a typed error extraction helper if this pattern is used elsewhere in the codebase.
793-803: Task text assembly injects source context and job instructions.The
<message-source>and<orchestrator-job>tags provide structured metadata for the worker. Thefromfield defaults to"orchestrator"which is appropriate.One consideration: Line 802 appends the job instructions after the message, which could be missed if the message is very long. Consider prepending critical instructions.
src/ux/repo-context.ts (1)
42-77: Git info gathering uses synchronous `execSync`, which blocks the event loop. For a utility called during worker spawning, blocking calls could impact responsiveness. However, since this is called with
`.catch(() => undefined)` in the spawner, failures are handled gracefully. Consider adding timeouts to prevent hanging on slow git operations:
🔎 Add timeout to execSync calls
```diff
 const branch = execSync("git rev-parse --abbrev-ref HEAD", {
   cwd: directory,
   encoding: "utf8",
   stdio: ["pipe", "pipe", "pipe"],
+  timeout: 5000,
 }).trim();
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (21)
- README.md
- docs/configuration.md
- docs/examples.md
- docs/quickstart.md
- docs/troubleshooting.md
- schema/orchestrator.schema.json
- src/config/orchestrator.ts
- src/config/profiles.ts
- src/core/bridge-server.ts
- src/core/device-registry.ts
- src/core/message-bus.ts
- src/core/process-metrics.ts
- src/core/runtime.ts
- src/index.ts
- src/tools/index.ts
- src/tools/tools-ux.ts
- src/tools/tools-workers.ts
- src/types/index.ts
- src/ux/repo-context.ts
- src/worker-bridge-plugin.mjs
- src/workers/spawner.ts
🧰 Additional context used
🧬 Code graph analysis (7)
src/tools/tools-workers.ts (1)
src/workers/spawner.ts (1)
sendToWorker(762-882)
src/index.ts (1)
src/core/message-bus.ts (1)
messageBus(130-130)
src/tools/tools-ux.ts (4)
src/tools/index.ts (2)
orchestratorWakeups(57-57)clearWakeups(46-46)src/core/message-bus.ts (1)
messageBus(130-130)src/tools/state.ts (1)
getDefaultListFormat(45-47)src/tools/markdown.ts (1)
renderMarkdownTable(5-13)
src/tools/index.ts (1)
src/tools/tools-ux.ts (2)
orchestratorWakeups(488-545)clearWakeups(547-558)
src/core/message-bus.ts (1)
src/types/index.ts (1)
WakeupPayload(264-277)
src/core/bridge-server.ts (3)
src/worker-bridge-plugin.mjs (6)
url(4-4)res(13-17)res(28-31)res(84-84)res(108-114)workerId(6-6)src/core/registry.ts (1)
registry(214-214)src/core/message-bus.ts (1)
messageBus(130-130)
src/workers/spawner.ts (3)
src/helpers/format.ts (1)
getUserConfigDir(61-61)src/core/device-registry.ts (1)
removeWorkerEntriesByPid(99-104)src/ux/repo-context.ts (1)
getRepoContextForWorker(256-266)
🪛 LanguageTool
docs/configuration.md
[uncategorized] ~41-~41: If this is a compound adjective that modifies the following noun, use a hyphen.
Context: ..."workers": ["docs", "coder"] } ### Full Featured Config json { "$schema": "../node...
(EN_COMPOUND_ADJECTIVE_INTERNAL)
docs/troubleshooting.md
[style] ~398-~398: Consider using a more formal alternative.
Context: ... ``` --- ## Debug Mode When you need more information to diagnose an issue: ### Enable Debug...
(MORE_INFO)
🪛 markdownlint-cli2 (0.18.1)
docs/quickstart.md
77-77: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
83-83: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
101-101: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
107-107: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
120-120: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
141-141: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
168-168: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
195-195: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
226-226: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
235-235: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
docs/examples.md
320-320: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
338-338: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
docs/troubleshooting.md
68-68: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
73-73: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
78-78: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
138-138: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
161-161: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
166-166: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
172-172: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
181-181: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
216-216: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
234-234: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
239-239: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
244-244: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
262-262: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
267-267: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
279-279: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
351-351: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
356-356: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
390-390: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
402-402: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
408-408: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
420-420: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
432-432: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
454-454: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
459-459: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🔇 Additional comments (25)
README.md (1)
48-89: Excellent onboarding improvements! The new "Why Multiple AI Workers?" and "Prerequisites" sections significantly improve the first-time user experience. The quick verification commands and provider configuration example are particularly helpful.
src/config/orchestrator.ts (1)
97-97: LGTM: Follows established pattern. The `wakeupInjection` config parsing follows the same pattern as other boolean UI options (toasts, debug, logToConsole, etc.) and is type-safe.
src/config/profiles.ts (1)
39-39: LGTM: Repository context for docs worker.Adding
`injectRepoContext: true` for the docs worker makes sense, as having repository context helps the documentation librarian provide more relevant research and examples.
src/core/device-registry.ts (1)
11-12: LGTM: Well-documented field addition.The
`hostPid` field is clearly documented and supports the orphan cleanup workflow. The optional nature makes this a backward-compatible change.
schema/orchestrator.schema.json (1)
22-26: LGTM: Clear schema addition.The
`wakeupInjection` property is well-documented with a clear description of its purpose. The default value of `true` enables the feature by default, which seems appropriate for the async worker wakeup functionality.
src/worker-bridge-plugin.mjs (1)
89-118: LGTM: Well-designed wakeup tool.The
wakeup_orchestratortool is well-implemented with:
- Clear documentation of when to use it (result_ready, needs_attention, error, progress)
- Strongly-typed reason enum preventing arbitrary values
- Optional fields for flexibility (jobId, summary, data)
- Consistent error handling and response format
The implementation follows the established pattern used by
messageToolandinboxTool.src/core/bridge-server.ts (2)
103-134: LGTM: Well-structured wakeup endpoint.The
/v1/wakeupendpoint implementation is solid:
- Validates required fields (workerId, reason)
- Updates worker lastActivity timestamp
- Emits event through messageBus for proper decoupling
- Returns consistent response format matching other endpoints
- Follows established patterns from
/v1/messageand/v1/report
136-148: LGTM: Wakeup history endpoint mirrors inbox pattern.The
/v1/wakeup/historyendpoint correctly:
- Accepts optional filtering parameters (workerId, after, limit)
- Uses
`Number.isFinite` checks for safe numeric parsing
- Follows the same pattern as the
`/v1/inbox` endpoint
- Returns consistent JSON structure
src/tools/tools-workers.ts (1)
109-109: LGTM: Worker-to-worker communication support added.The optional
`from` parameter has been properly added to both `askWorker` and `askWorkerAsync` tools and correctly propagated to `sendToWorker`. This enables source identification for worker-to-worker communication flows, which aligns with the wakeup orchestration feature introduced in this release.
Also applies to: 121-125, 153-153, 163-163
src/tools/tools-ux.ts (1)
488-558: LGTM: Well-designed wakeup management tools.The new
`orchestratorWakeups` and `clearWakeups` tools provide a clean interface for viewing and managing worker wakeup events. The implementation includes proper filtering, formatting options, emoji indicators for different wakeup reasons, and sensible defaults. The integration with `messageBus` is consistent with the related changes across the codebase.
src/types/index.ts (1)
34-35: LGTM: Well-defined type extensions for wakeup feature.The new type definitions properly support the wakeup orchestration feature:
- `WakeupPayload` interface provides a clear contract for wakeup events with appropriate required/optional fields
- `injectRepoContext` flag enables selective repository context injection for workers
- `wakeupInjection` UI configuration with sensible default (true) and helpful documentation
- `worker:wakeup` event properly typed in the events union
The types are well-documented and consistent with the existing codebase patterns.
Also applies to: 165-171, 263-277, 286-286
src/tools/index.ts (1)
46-46: LGTM: Wakeup tools properly exported and integrated.The new wakeup tools (
`orchestratorWakeups`, `clearWakeups`) are correctly exported, imported, and integrated into both the core and plugin tool sets. The naming conventions and organization are consistent with existing patterns.
Also applies to: 57-57, 89-89, 100-100, 130-130, 168-170
src/index.ts (1)
93-96: LGTM: Wakeup subscription with proper cleanup. The wakeup subscription is properly initialized with a cooldown mechanism to prevent spam, and the cleanup is correctly registered in the server disposal handler. The cooldown and conditional injection logic provide good UX by preventing notification fatigue while ensuring important wakeups reach the orchestrator.
Also applies to: 357-357
src/core/runtime.ts (2)
71-107: Signal handling uses `prependListener` appropriately to ensure cleanup runs first. However, there's a potential issue with the async signal handler:
Lines 76-79: The
`handleSignal` function spawns an async IIFE but doesn't prevent the process from continuing. If another signal arrives before `runShutdown` completes, `shutdownRequested` will be true and `process.exit(code)` will be called immediately at line 73, potentially before the first shutdown completes.
Lines 84-102: The synchronous
`exit` handler attempts both SIGTERM and SIGKILL in sequence, but SIGKILL immediately after SIGTERM may not give processes time to handle SIGTERM gracefully.
Consider whether the double signal (SIGTERM then immediate SIGKILL) in the exit handler is intentional—typically you'd want a delay between them.
113-160: Orphaned worker cleanup logic is comprehensive. The dual-path approach (hostPid-based for new entries, staleness-based for legacy entries) handles migration well.
One edge case: Line 141 uses
`Math.max(...group.map(...))`, which returns `-Infinity` if all entries have `undefined` `updatedAt`. Since `-Infinity` would make `now - newestUpdate` a large positive number, those entries would be cleaned up, which is likely the intended behavior.
56-85: Wakeup event emission with fire-and-forget async listeners is well-designed. The implementation correctly:
- Generates unique IDs with
`randomUUID()`
- Bounds history with
`MAX_WAKEUP_HISTORY`
- Handles both sync and async listeners
- Swallows errors to prevent one listener from affecting others
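A generic sketch of that bounded, error-isolated fan-out (the history cap value and the event shape are assumptions, not the module's actual code):

```typescript
import { randomUUID } from "node:crypto";

// Sketch: bounded history plus fire-and-forget listener fan-out.
interface WakeupEventSketch { id: string; workerId: string; reason: string; timestamp: number }
type WakeupListener = (event: WakeupEventSketch) => void | Promise<void>;

const MAX_WAKEUP_HISTORY = 200; // assumed cap
const history: WakeupEventSketch[] = [];
const listeners = new Set<WakeupListener>();

function emitWakeup(payload: Omit<WakeupEventSketch, "id" | "timestamp">): WakeupEventSketch {
  const event: WakeupEventSketch = { ...payload, id: randomUUID(), timestamp: Date.now() };
  history.push(event);
  if (history.length > MAX_WAKEUP_HISTORY) history.splice(0, history.length - MAX_WAKEUP_HISTORY);
  for (const listener of listeners) {
    try {
      // Promise.resolve covers both sync and async listeners; rejections are swallowed
      // so one failing listener cannot affect the others.
      Promise.resolve(listener(event)).catch(() => {});
    } catch {
      // synchronous throw: ignore for the same reason
    }
  }
  return event;
}
```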
src/workers/spawner.ts (5)
43-69: Plugin resolution prioritizes user config directory appropriately.The use of
pathToFileURLfor both user config and bundled paths ensures consistent handling. The fallback chain (user config → dist location → src location) is reasonable.
146-151: Tool check on reuse path correctly disposes workers with missing tools.The cleanup sequence (dispose → remove registry entries → return undefined) ensures stale workers don't persist. Good defensive coding.
227-235: Detached process spawning on non-Windows enables process group management. Setting
`detached: true` allows the orchestrator to kill the entire process tree using negative PID. This is correctly paired with the enhanced `close()` function.
289-323: Enhanced process termination handles detached workers well.The close function:
- Tries process-group SIGTERM first (lines 293-294)
- Falls back to direct kill on error (lines 299-300)
- Escalates to SIGKILL after 2s timeout (lines 306-312)
This is a solid graceful shutdown pattern.
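For reference, the escalation pattern described above can be sketched roughly as follows (illustrative only; the helper name and grace period default are assumptions):

```typescript
// Sketch of an escalating close() for a detached worker on POSIX:
// negative PID targets the whole process group, SIGKILL follows after a grace period.
function closeWorkerProcess(pid: number, graceMs = 2000): void {
  const signal = (sig: NodeJS.Signals) => {
    try {
      if (process.platform !== "win32") process.kill(-pid, sig); // process-group kill
      else process.kill(pid, sig);
    } catch {
      try { process.kill(pid, sig); } catch { /* already gone */ }
    }
  };
  signal("SIGTERM");
  const timer = setTimeout(() => signal("SIGKILL"), graceMs); // escalate if still alive
  timer.unref?.(); // don't keep the event loop alive just for the escalation timer
}
```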
562-565: Capabilities JSON is well-formed but consider escaping.The
`JSON.stringify` output is embedded directly in the prompt text. While boolean values are safe, if `resolvedProfile` gains string properties in the future, special characters could cause issues.
79-126: Directory structure enumeration is well-designed with priority sorting.The priority list covers common project layouts. The exclusions (node_modules, dist, build, hidden files except .github) are appropriate.
One edge case: Line 109 checks
result.length >= maxItemsbut the loop continues to the next iteration rather than breaking immediately after this check. This is correct behavior sincebreakfollows on the next line.
143-154: Package.json parsing handles errors gracefully.The try-catch around JSON parsing prevents malformed package.json from breaking the context gathering. Silently ignoring errors is appropriate here since this is supplementary context.
207-219: Package.json summary truncates lists to 10 items.The slicing with ellipsis suffix (
scripts.length > 10 ? "..." : "") provides clear indication of truncation. This keeps the context manageable.
256-266: `getRepoContextForWorker` provides sensible defaults for character limits. The 6000/12000 character limits are reasonable for prompt injection without overwhelming token budgets.
```json
{
  "profiles": [
    {
```
Add language identifiers to fenced code blocks.
Two fenced code blocks (lines 320, 338) are missing language identifiers. Add javascript to the code block at line 320 and line 338 for proper syntax highlighting.
🔎 Proposed fix
````diff
-```
+```javascript
 spawn_worker({ profileId: "ethers-expert" })
````
Also applies to: 338-338
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
320-320: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
In docs/examples.md around lines 320 and 338, two fenced code blocks are missing language identifiers; update each opening triple-backtick to include "javascript" (i.e. change the opening fence from ``` to ```javascript) so both code blocks (at line 320 and line 338) get proper JavaScript syntax highlighting.
**Solution:**
1. Check what models are available:
Address markdown linting issues.
Multiple markdown linting issues need to be addressed:
-
Missing language identifiers: Many fenced code blocks lack language identifiers (lines 68, 73, 78, 138, 161, 166, 172, 181, 216, 234, 239, 244, 262, 267, 279, 351, 356, 390, 420, 432, 454, 459). Add appropriate language identifiers (`javascript`, `json`, `bash`, `text`, etc.) based on content.
Emphasis instead of heading: Lines 402 and 408 use emphasis (bold) instead of proper markdown headings. Convert to heading syntax (e.g.,
#### Option 1: Environment Variable).
🔎 Example fixes
For tool call examples:
````diff
-```
+```javascript
 list_models
````
For emphasis-as-heading:
```diff
-**Option 1: Environment Variable**
+#### Option 1: Environment Variable
```
Also applies to: 73-73, 78-78, 138-138, 161-161, 166-166, 172-172, 181-181, 216-216, 234-234, 239-239, 244-244, 262-262, 267-267, 279-279, 351-351, 356-356, 390-390, 402-402, 408-408, 420-420, 432-432, 454-454, 459-459
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
68-68: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
In docs/troubleshooting.md around lines 68 (and also apply to lines 73, 78, 138,
161, 166, 172, 181, 216, 234, 239, 244, 262, 267, 279, 351, 356, 390, 402, 408,
420, 432, 454, 459), several fenced code blocks are missing language identifiers
and two headings are written as emphasized text; update each fenced block to
include the appropriate language tag (e.g., javascript, json, bash, text) that
matches the block content and convert the emphasized lines at 402 and 408 into
proper Markdown headings (e.g., #### Option 1: Environment Variable) so linting
passes. Ensure consistency across all listed line numbers and run the markdown
linter to confirm fixes.
```typescript
// Match "opencode serve" but not tsserver or other node processes that happen to have similar strings
if (!args.includes("opencode serve --hostname")) continue;
```
Filter may be too restrictive and miss legitimate processes.
The exact substring match for "opencode serve --hostname" has several potential issues:
-
Documentation mismatch: The comment says it matches "opencode serve" but the code actually requires the
--hostnameflag to be present immediately afterserve. -
Argument order sensitivity: This will miss processes where arguments appear in different orders, such as:
opencode serve --port 8080 --hostname localhostopencode serve --other-flag --hostname localhost
-
Missing --hostname flag: Legitimate
opencode serveprocesses without the--hostnameflag won't be detected.
Consider using a more robust pattern that checks for the presence of both "opencode" and "serve" as separate tokens while still excluding false positives like tsserver.
🔎 Proposed fix with more flexible matching
- // Match "opencode serve" but not tsserver or other node processes that happen to have similar strings
- if (!args.includes("opencode serve --hostname")) continue;
+ // Match "opencode serve" but not tsserver or other node processes
+ // Check for "opencode" followed by "serve" as separate words
+ if (!/(^|\s)opencode\s+serve(\s|$)/.test(args)) continue;
+ // Exclude false positives like "tsserver"
+ if (/tsserver/.test(args)) continue;
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
// Match "opencode serve" but not tsserver or other node processes
// Check for "opencode" followed by "serve" as separate words
if (!/(^|\s)opencode\s+serve(\s|$)/.test(args)) continue;
// Exclude false positives like "tsserver"
if (/tsserver/.test(args)) continue;
```
🤖 Prompt for AI Agents
In src/core/process-metrics.ts around lines 54-55, the current exact substring
check for "opencode serve --hostname" is too strict and order-dependent; change
it to parse the process args/tokens and detect "opencode" and "serve" as
separate tokens (allowing flags in any order and optional --hostname) while
still excluding known false positives like tsserver: i.e., split/join the
command into tokens and require tokens.includes("opencode") &&
tokens.includes("serve"), and also add a negative check to skip processes whose
name or tokens indicate "tsserver" (or other explicit false-positive names).
```typescript
async function removeSessionEntryByHostPid(_hostPid: number): Promise<void> {
  const { pruneDeadEntries } = await import("./device-registry");
  await pruneDeadEntries();
}
```
Unused parameter _hostPid in removeSessionEntryByHostPid.
The function accepts _hostPid but ignores it, delegating entirely to pruneDeadEntries(). This removes all dead entries, not just those for the specific hostPid.
🔎 Either use the parameter or rename the function
If the intent is to prune only sessions for a specific host:
async function removeSessionEntryByHostPid(hostPid: number): Promise<void> {
- const { pruneDeadEntries } = await import("./device-registry");
- await pruneDeadEntries();
+ const { listDeviceRegistry, removeSessionEntry } = await import("./device-registry");
+ const entries = await listDeviceRegistry();
+ for (const e of entries) {
+ if (e.kind === "session" && e.hostPid === hostPid) {
+ await removeSessionEntry(e.id).catch(() => {});
+ }
+ }
}
Or if `pruneDeadEntries` is the correct behavior, rename to clarify:
-async function removeSessionEntryByHostPid(_hostPid: number): Promise<void> {
+async function pruneDeadSessionEntries(): Promise<void> {📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
async function pruneDeadSessionEntries(): Promise<void> {
  const { pruneDeadEntries } = await import("./device-registry");
  await pruneDeadEntries();
}
```
🤖 Prompt for AI Agents
In src/core/runtime.ts around lines 243 to 246, removeSessionEntryByHostPid
currently accepts a _hostPid but ignores it and calls pruneDeadEntries(), which
prunes all dead entries; either (preferred) wire the hostPid through to a
device-registry API that prunes entries for that specific host (e.g., call
pruneDeadEntriesForHost(hostPid) or extend pruneDeadEntries to accept a hostPid)
and remove the leading underscore from the parameter, or (alternate) if the
function truly should prune all dead entries, rename it to reflect that behavior
(e.g., pruneDeadSessionEntries) and drop the unused parameter. Ensure the chosen
change updates imports/exports in device-registry and any callers to keep types
and names consistent.
```typescript
// Track the current orchestrator session for wakeup injection
let currentOrchestratorSessionId: string | undefined;
let lastWakeupInjection = 0;
const WAKEUP_INJECTION_COOLDOWN_MS = 2000; // Prevent rapid-fire injections

// Listen for wakeup events from workers and optionally inject prompt to resume orchestrator
const unsubscribeWakeup = messageBus.onWakeup(async (event) => {
  const reasonLabels: Record<string, { message: string; variant: "success" | "info" | "warning" | "error" }> = {
    result_ready: { message: `Worker "${event.workerId}" completed: ${event.summary ?? "results ready"}`, variant: "success" },
    needs_attention: { message: `Worker "${event.workerId}" needs attention: ${event.summary ?? "check orchestrator_wakeups"}`, variant: "warning" },
    error: { message: `Worker "${event.workerId}" error: ${event.summary ?? "check orchestrator_wakeups"}`, variant: "error" },
    progress: { message: `Worker "${event.workerId}": ${event.summary ?? "progress update"}`, variant: "info" },
    custom: { message: `Worker "${event.workerId}": ${event.summary ?? "notification"}`, variant: "info" },
  };
  const info = reasonLabels[event.reason] ?? { message: `Worker "${event.workerId}" wakeup`, variant: "info" as const };
  void showToast(info.message, info.variant);

  // Inject a prompt into the orchestrator session to wake it up
  // Only do this for actionable wakeups (result_ready, needs_attention, error) and if we have a session
  const shouldInject =
    config.ui?.wakeupInjection !== false &&
    currentOrchestratorSessionId &&
    (event.reason === "result_ready" || event.reason === "needs_attention" || event.reason === "error") &&
    Date.now() - lastWakeupInjection > WAKEUP_INJECTION_COOLDOWN_MS;

  if (shouldInject) {
    lastWakeupInjection = Date.now();
    const wakeupMessage = `[WORKER WAKEUP] Worker "${event.workerId}" signaled: ${event.reason}${event.summary ? ` - ${event.summary}` : ""}${event.jobId ? ` (jobId: ${event.jobId})` : ""}. Check orchestrator_wakeups() or await_worker_job() for details.`;

    try {
      await (ctx.client.session as any).promptAsync({
        sessionID: currentOrchestratorSessionId,
        directory: ctx.directory,
        parts: [{ type: "text", text: wakeupMessage }],
      });
    } catch {
      // Silently ignore injection failures (session may have ended, etc.)
    }
  }
});
```
Use the correct SDK method for session prompting.
The code calls (ctx.client.session as any).promptAsync() with a type assertion, which masks a method naming issue. The OpenCode SDK session client provides a prompt() method, not promptAsync(). Replace the type assertion with the correct method call, or if promptAsync() is intentional, verify it exists in your SDK version and remove the as any cast to maintain type safety.
🤖 Prompt for AI Agents
In src/index.ts around lines 93 to 132, the wakeup injection uses a type
assertion and calls (ctx.client.session as any).promptAsync(...), which hides a
method-name/type mismatch; replace that call with the SDK's correct session
method (call ctx.client.session.prompt(...) without the `as any` cast) and
ensure you await the returned Promise if prompt is async, or adapt to the actual
SDK signature if it returns synchronously; if promptAsync was intentional,
remove the `as any` cast and import/upgrade the SDK so the method exists with
proper typing.
- Add vision-router.ts for automatic image detection and routing to vision worker
- Support clipboard, file URLs, data URLs, and base64 image formats
- Add global lock to prevent concurrent vision processing
- Add detailed toast notifications for debugging vision pipeline
- Allow explicit model references from api-source providers in hydration
- Pass orchestrator profiles to vision router for correct model selection
- Add comprehensive test suite for vision routing (28 tests)
- Fix biome config schema (use object format instead of boolean)
- Add language identifier to workflow diagram code block
- Fix CODEOWNERS to target file paths, not branches
- Add language identifiers to all fenced code blocks in docs
Actionable comments posted: 1
♻️ Duplicate comments (1)
src/index.ts (1)
124-128: SDK method usage concern already flagged.The
`(ctx.client.session as any).promptAsync()` pattern with type assertion was identified in a previous review. The same pattern appears again at lines 382-386.
🧹 Nitpick comments (7)
src/models/hydrate.ts (1)
40-48: Unused variable:explicitlyReferencedProvidersis computed but never consumed.The Set is built by iterating over all profiles and collecting provider IDs, but it's not used anywhere in the function. The explicit model resolution on line 103 already uses
`providersAll` directly.
Either remove this dead code or complete the intended feature.
🔧 If this is dead code, remove it
```diff
-  // Collect provider IDs explicitly referenced in profile models (user intent = use them)
-  const explicitlyReferencedProviders = new Set<string>();
-  for (const profile of Object.values(input.profiles)) {
-    const model = profile.model.trim();
-    if (model.includes("/") && !model.startsWith("auto") && !model.startsWith("node")) {
-      const providerID = model.split("/")[0];
-      explicitlyReferencedProviders.add(providerID);
-    }
-  }
-
```
src/index.ts (1)
368-393: Fire-and-forget pattern with silent error handling. The vision analysis runs as a detached promise. While errors trigger toasts (line 391), the catch block at line 387 silently swallows injection failures without any logging. Consider adding debug-level logging for troubleshooting:
```diff
 } catch {
-  // Ignore injection errors
+  // Injection can fail if session ended - log for debugging
+  // Could add: logger.debug("Vision injection failed", { sessionID: input.sessionID });
 }
```
test/vision-routing.test.ts (1)
131-137: Missing cleanup for temporary directory.The
`tempDir` is created in `beforeAll` but never cleaned up in `afterAll`. Consider adding cleanup to avoid accumulating temp files:
```diff
   beforeAll(async () => {
     tempDir = await mkdtemp(join(tmpdir(), "vision-test-"));
   });
+
+  afterAll(async () => {
+    const { rm } = await import("node:fs/promises");
+    await rm(tempDir, { recursive: true, force: true }).catch(() => {});
+  });
```
src/ux/vision-router.ts (4)
17-20: The dual-lock mechanism works due to JavaScript's single-threaded event model, but the design could be clearer.The
`visionLockSync` flag works because JavaScript is single-threaded—between any two `await` points, no other code can interleave. The current approach rejects concurrent requests rather than queuing them. This is acceptable behavior, but consider documenting this rejection-based behavior more explicitly, or using a simple mutex/semaphore pattern if queuing becomes desirable later.
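If queuing ever becomes preferable to rejecting concurrent requests, a small promise-chain mutex is one option (illustrative sketch, not the module's current design):

```typescript
// Illustrative promise-chain mutex: callers queue instead of being rejected.
class AsyncMutex {
  private tail: Promise<void> = Promise.resolve();

  runExclusive<T>(task: () => Promise<T>): Promise<T> {
    const result = this.tail.then(task);
    // Keep the chain alive even if the task rejects, so later callers still run.
    this.tail = result.then(() => undefined, () => undefined);
    return result;
  }
}

// Usage sketch:
// const visionMutex = new AsyncMutex();
// const analysis = await visionMutex.runExclusive(() => analyzeImages(parts, options));
```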
80-87: Data URL regex only handles base64-encoded images.The regex
`/^data:(image\/[^;]+);base64,(.*)$/` requires the `;base64,` portion, so non-base64 data URLs (e.g., URL-encoded data) would be silently skipped. This is likely fine for typical image use cases, but worth noting if broader data URL support is expected later.
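If broader coverage is ever wanted, a decoder along these lines would also accept percent-encoded data URLs (a sketch under that assumption; the current base64-only handling is sufficient for typical images):

```typescript
// Sketch: decode both base64 and percent-encoded image data URLs into bytes.
function decodeImageDataUrl(url: string): { mime: string; bytes: Uint8Array } | undefined {
  const match = /^data:(image\/[^;,]+)((?:;[^,]*)*),([\s\S]*)$/.exec(url);
  if (!match) return undefined;
  const mime = match[1] ?? "image/png";
  const isBase64 = /;base64/i.test(match[2] ?? "");
  const payload = match[3] ?? "";
  const bytes = isBase64
    ? Uint8Array.from(Buffer.from(payload, "base64"))
    : new TextEncoder().encode(decodeURIComponent(payload));
  return { mime, bytes };
}
```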
250-275: Lock management in finally block works correctly but could be clearer.The lock release sequence (nullify
`visionLock`, reset `visionLockSync`, then call `releaseLock()`) is functionally correct. However, any promise consumers awaiting the old `visionLock` value will still resolve properly since `releaseLock()` is called. The order could be reordered for clarity: call `releaseLock()` before nullifying `visionLock`.
🔎 Suggested reordering for clarity
```diff
 } finally {
   // Release lock
+  releaseLock!();
   visionLock = null;
   visionLockSync = false;
-  releaseLock!();
 }
```
169-182: Consider documenting theoptionsparameter types more explicitly.The
clientandprofilesparameters useanytypes. While this provides flexibility, documenting the expected shapes or using more specific types would improve maintainability and catch misuse at compile time.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
docs/architecture.mddocs/guide.mdsrc/config/profiles.tssrc/index.tssrc/models/hydrate.tssrc/types/index.tssrc/ux/vision-router.tstest/vision-routing.test.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- src/config/profiles.ts
- src/types/index.ts
🧰 Additional context used
🧬 Code graph analysis (4)
src/ux/vision-router.ts (3)
src/core/registry.ts (1)
registry(214-214)src/config/profiles.ts (1)
builtInProfiles(10-136)src/workers/spawner.ts (1)
sendToWorker(762-882)
test/vision-routing.test.ts (6)
src/ux/vision-router.ts (2)
hasImages(25-35)analyzeImages(169-276)src/workers/prompt.ts (1)
buildPromptParts(25-55)src/models/hydrate.ts (1)
hydrateProfileModelsFromOpencode(20-141)src/core/registry.ts (1)
registry(214-214)src/workers/spawner.ts (2)
stopWorker(729-757)sendToWorker(762-882)src/types/index.ts (1)
WorkerProfile(7-36)
src/models/hydrate.ts (1)
src/models/catalog.ts (1)
flattenProviders(29-55)
src/index.ts (2)
src/core/message-bus.ts (1)
messageBus(130-130)src/ux/vision-router.ts (2)
hasImages(25-35)analyzeImages(169-276)
🔇 Additional comments (18)
src/models/hydrate.ts (1)
35-37: LGTM!The comments clearly document the design intent: auto-selection tags (node:, auto:) use configured providers only, while explicit model references trust the user's choice and resolve against all providers. The implementation correctly reflects this.
Also applies to: 101-110
docs/guide.md (1)
87-87: LGTM!The Vision Analyst model example is updated to match the new default vision model in the profiles configuration.
docs/architecture.md (1)
454-454: LGTM!Architecture documentation updated to reflect the new default vision model, consistent with the profiles and guide documentation.
src/index.ts (3)
100-109: LGTM!The wakeup event handler cleanly maps reason types to appropriate toast messages with sensible fallback handling.
334-366: LGTM!The image filtering logic correctly removes image parts for non-vision agents and appends a helpful hint. This prevents providers from erroring on unexpected image inputs while keeping users informed.
429-429: LGTM!Proper cleanup of the wakeup listener subscription when the server is disposed.
test/vision-routing.test.ts (7)
25-28: LGTM!Minimal 1x1 PNG test fixture is a good choice for testing image handling without bloating test files.
30-129: LGTM!Comprehensive unit tests for
hasImagescovering all detection paths including positive cases, negative cases, and edge cases with malformed input.
240-381: LGTM!Model resolution tests cover important edge cases: explicit API provider models, vision capability validation, and auto-selection provider filtering. The mock client setup is clean.
415-439: Concurrency test assertion is permissive but pragmatic.The test correctly handles timing uncertainty. The permissive assertion at lines 436-438 acknowledges that both calls might complete before the lock check if timing aligns. This is acceptable for testing the lock mechanism exists without flaky failures.
443-468: LGTM!E2E test setup with proper environment restoration and worker cleanup in
afterEach. The empty catch blocks in cleanup code are acceptable since we don't want cleanup failures to fail tests.
474-521: LGTM!E2E test properly manages resources with
try/finallyfor server cleanup and uses an appropriate 120-second timeout for vision model operations.
564-609: LGTM!Config integration tests verify that custom profiles are passed through to
analyzeImagesand that built-in profiles serve as fallback. The error expectations correctly validate the intended behavior paths.src/ux/vision-router.ts (5)
1-15: Imports are well-organized and appropriate for the module's functionality.The imports cover all necessary Node.js modules for file I/O, clipboard access, and internal dependencies for worker management.
25-35: LGTM!The function correctly detects various image formats including explicit image types, file attachments with image MIME types, data URLs, and clipboard placeholders. The defensive null/type checks are appropriate.
161-164: Windows clipboard reading is not implemented.The function returns
undefinedon Windows. If Windows support is needed, consider using PowerShell or a native module to read clipboard images.Is Windows support for clipboard image reading a requirement for this feature? If so, this would need to be addressed before release.
221-243: Worker spawning logic handles edge cases well.The fallback from user-configured profiles to built-in profiles, combined with proper error handling and toast notifications, provides good observability for debugging vision worker issues.
258-261: No issue identified. ThevisionWorker.profile.idparameter is the correct worker identifier forsendToWorker(). Workers are registered in the registry using theirprofile.id, andsendToWorker()retrieves workers viaregistry.getWorker(workerId)where the workerId matches this identifier. This pattern is used consistently throughout the codebase.
src/ux/vision-router.ts
Outdated
```typescript
const script = [
  `set outPath to "${outPath.replace(/"/g, '\\"')}"`,
  `set outFile to POSIX file outPath`,
  `set f to open for access outFile with write permission`,
  `set eof f to 0`,
  `write (the clipboard as «class PNGf») to f`,
  `close access f`,
  `return outPath`,
].join("\n");
```
Potential AppleScript injection if tmpdir() contains special characters.
The string escaping only handles double quotes. If tmpdir() returns a path with other AppleScript-sensitive characters (e.g., backslashes on edge-case systems), the script could fail or behave unexpectedly. This is low-risk since tmpdir() typically returns safe paths, but consider using a more robust escaping approach or validating the path.
🔎 Suggested fix with additional escaping
```diff
 const script = [
-  `set outPath to "${outPath.replace(/"/g, '\\"')}"`,
+  `set outPath to "${outPath.replace(/\\/g, '\\\\').replace(/"/g, '\\"')}"`,
   `set outFile to POSIX file outPath`,
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
const script = [
  `set outPath to "${outPath.replace(/\\/g, '\\\\').replace(/"/g, '\\"')}"`,
  `set outFile to POSIX file outPath`,
  `set f to open for access outFile with write permission`,
  `set eof f to 0`,
  `write (the clipboard as «class PNGf») to f`,
  `close access f`,
  `return outPath`,
].join("\n");
```
🤖 Prompt for AI Agents
In src/ux/vision-router.ts around lines 114-122, the AppleScript literal
currently only escapes double quotes which can still break when tmpdir()
contains other special characters; change the script to use AppleScript's quoted
form for the POSIX path and also escape backslashes/newlines before embedding:
build outPathEscaped by replacing backslashes with \\ and newlines with \\n (and
keep replacing " with \\"), then use `set outFile to POSIX file (quoted form of
"<outPathEscaped>")` instead of `set outPath to "..."` so the path is passed
safely into AppleScript.
- Add environment variables for ANTHROPIC, OPENAI, OPENROUTER API keys
- Run unit tests always
- Run E2E tests only when API keys are available
- E2E tests continue-on-error to not block PR merges
- No API keys required
- Uses opencode/gpt-5-nano which is free and always available
- Removed conditional E2E test execution
Actionable comments posted: 2
♻️ Duplicate comments (1)
src/index.ts (1)
155-163: SDK method usage flagged in previous review.This code uses
`(ctx.client.session as any).promptAsync()` with a type assertion. As noted in the past review, verify that `promptAsync` exists in your SDK version or use the correct method name (potentially `prompt()`).
🧹 Nitpick comments (14)
src/core/passthrough.ts (1)
52-64: Sync exit phrases between detection and prompt. The prompt documents exit phrases "exit passthrough", "exit docs mode", or "back", but
`isPassthroughExitMessage` accepts additional phrases like "stop docs", "exit docs", "stop passthrough", "disable passthrough", and "exit passthrough mode". Consider either:
- Updating the prompt to mention all valid exit phrases, or
- Removing uncommonly-used phrases from the Set to match the prompt
This prevents user confusion if they say "stop passthrough" (which works) but wasn't documented in the system prompt.
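One way to keep the two in sync is to derive both from a single constant, as in this sketch (only `isPassthroughExitMessage` is a name from the codebase; the rest is illustrative):

```typescript
// Sketch: one constant feeds both the detection set and the prompt wording,
// so documented phrases and accepted phrases cannot drift apart.
const PASSTHROUGH_EXIT_PHRASES = [
  "exit passthrough",
  "exit passthrough mode",
  "exit docs mode",
  "exit docs",
  "stop docs",
  "stop passthrough",
  "disable passthrough",
  "back",
] as const;

const exitPhraseSet = new Set<string>(PASSTHROUGH_EXIT_PHRASES);

export function isPassthroughExitMessage(message: string): boolean {
  return exitPhraseSet.has(message.trim().toLowerCase());
}

export function describeExitPhrases(): string {
  // Reused inside the system prompt so the documentation stays in sync.
  return PASSTHROUGH_EXIT_PHRASES.map((p) => `"${p}"`).join(", ");
}
```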
src/core/bridge-server.ts (1)
82-96: Consider gating wakeup emission on actual content presence.The wakeup is emitted whenever
`body.jobId` is truthy, regardless of whether `body.report` or `body.final` contain meaningful content. This could generate unnecessary wakeup events with a generic "report received" summary when a worker sends an empty or progress-only report.
🔎 Proposed refinement
```diff
-  if (body.jobId) {
+  if (body.jobId && (body.report || body.final)) {
     const job = workerJobs.get(body.jobId);
     messageBus.wakeup({
```
src/core/jobs.ts (1)
34-48: LGTM! The session and requester tracking fields are a good addition.The optional
`sessionId` and `requestedBy` fields enable proper correlation with the wakeup/injection flows introduced elsewhere in this PR. The backward-compatible approach is appropriate.
Minor style nit: the conditional spread pattern could be simplified, but the current approach is explicit and works correctly.
🔎 Optional simplification
const job: WorkerJob = { id, workerId: input.workerId, message: input.message, - ...(input.sessionId ? { sessionId: input.sessionId } : {}), - ...(input.requestedBy ? { requestedBy: input.requestedBy } : {}), + sessionId: input.sessionId, + requestedBy: input.requestedBy, status: "running", startedAt: Date.now(), };This includes the keys with
`undefined` values, which is typically fine for internal objects and slightly more concise. However, if you prefer to omit keys entirely when not provided, the current approach is valid.
src/memory/inject.ts (1)
91-99: Consider documenting the filtering behavior.The
gatherfunction filters out both message-like nodes (unless explicitly included) and auto-scaffold nodes (always). While theisAutoScaffoldfunction's logic is clear from the code, a brief inline comment explaining why these are excluded could help future maintainers understand the rationale (e.g., "Exclude auto-generated metadata nodes to focus on user-relevant content").src/memory/auto.ts (2)
125-169: Rolling summaries perform multiple DB round-trips per message.For each message with summaries enabled:
- `getMemoryByKey` for project summary
- `upsertMemory` for project summary
- `getMemoryByKey` for session summary
- `upsertMemory` for session summary
- `getMemoryByKey` for global summary
- `upsertMemory` for global summary
This is 6 database operations per message in the project-scope case. Consider batching or deferring summary updates if message volume is high.
171-213: Trimming on every message write could impact performance at scale.The trimming operations run after every
`recordMessageMemory` call. For high-volume scenarios, this could create performance bottlenecks. Consider:
- Batching trim operations (e.g., every N messages)
- Using a background job/worker for trimming
- Adding a skip option to bypass trimming for bulk operations
For the current use case, this is likely acceptable.
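A sketch of the first option above, batching trims every N messages (`trimMemory` is a hypothetical stand-in for the existing trimming logic, and the batch size is an assumption):

```typescript
// Sketch: amortize trimming cost by running it only every N writes.
const TRIM_EVERY_N_MESSAGES = 25;
let messagesSinceTrim = 0;

async function maybeTrim(trimMemory: () => Promise<void>): Promise<void> {
  messagesSinceTrim += 1;
  if (messagesSinceTrim < TRIM_EVERY_N_MESSAGES) return;
  messagesSinceTrim = 0;
  await trimMemory(); // the existing per-message trim logic, deferred to batch boundaries
}
```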
src/ux/vision-router.ts (1)
67-81: Redundant dynamic import could be simplified.The
hasImagesis already re-exported at line 23, but line 68 uses a dynamic import. This works but is inconsistent. Consider using the static import for bothhasImagesandextractImagesifextractImagesis also exported from the analyzer.- const { hasImages: checkHasImages, extractImages } = await import("../vision/analyzer"); - if (!checkHasImages(parts)) { + const { extractImages } = await import("../vision/analyzer"); + // hasImages is already imported via re-export + const { hasImages: staticHasImages } = await import("../vision/analyzer"); + if (!staticHasImages(parts)) {Alternatively, add
extractImagesto the static imports at the top.src/index.ts (2)
135-139: Consider adding session existence validation before injection.The wakeup injection checks the cooldown but doesn't verify the target session still exists before attempting to inject. If the session was deleted between resolving the ID and injecting, the silent catch on line 161 handles it, but logging the skip reason would aid debugging.
205-234: Vision message transform exits early without processing all messages.The transform iterates messages in reverse but uses
breakstatements that exit the entire loop after processing (or skipping) the first matching message. If the intent is to process only the most recent user message with images, this is correct. However, consider adding a comment clarifying this is intentional single-message processing.src/vision/analyzer.ts (2)
159-218: Consider adding Windows clipboard support or documenting the limitation.The
readClipboardImagefunction supports macOS (AppleScript) and Linux (Wayland/X11) but silently returnsnullon Windows. Consider either adding Windows support via PowerShell or documenting this limitation.
331-338: Mutation of filtered array element - verify caller expectations.The function modifies
p.textof an element inwithoutImages, which is a filtered copy of the input. Sincefiltercreates a shallow copy, this mutates the original part object inparts. If callers expectpartsto remain unchanged, this could cause issues.Consider cloning the text part before modification:
🔎 Proposed fix
for (let i = withoutImages.length - 1; i >= 0; i--) { const p = withoutImages[i]; if (p?.type === "text" && typeof p.text === "string") { - p.text += `\n\n${analysisText}\n`; + withoutImages[i] = { ...p, text: p.text + `\n\n${analysisText}\n` }; return withoutImages; } }src/core/worker-pool.ts (3)
241-262: Performance concern: Sequential session detail fetches.The code fetches details for up to 50 sessions sequentially using
Promise.allwith individualawaitcalls inside the map. WhilePromise.allruns them concurrently, this could still be slow with many sessions.Consider limiting the search or using a more efficient approach:
🔎 Suggested optimization
- const fullSessions = await Promise.all( - sessions.slice(0, 50).map(async (s) => { + // Limit concurrent fetches to avoid overwhelming the server + const BATCH_SIZE = 10; + const fullSessions: Array<{ id: string; title?: string }> = []; + for (let i = 0; i < Math.min(sessions.length, 50); i += BATCH_SIZE) { + const batch = sessions.slice(i, i + BATCH_SIZE); + const results = await Promise.all( + batch.map(async (s) => { try { const detail = await client.session.get({ path: { id: s.id }, query: { directory: options.directory } }); return detail.data as { id: string; title?: string } | undefined; } catch { return undefined; } - }) - ).then((results) => results.filter((s): s is { id: string; title?: string } => !!s)); + }) + ); + fullSessions.push(...results.filter((s): s is { id: string; title?: string } => !!s)); + // Early exit if we found the worker session + if (fullSessions.some((s) => s.title === workerSessionTitle)) break; + }
471-479: Silently ignoring listener errors may hide bugs.

The emit function catches and ignores all listener errors. Consider at least logging these in debug mode to aid troubleshooting:
🔎 Proposed improvement
```diff
  private emit(event: WorkerPoolEvent, instance: WorkerInstance): void {
    this.listeners.get(event)?.forEach((cb) => {
      try {
        cb(instance);
-     } catch {
-       // Ignore listener errors
+     } catch (err) {
+       // Log in debug mode to aid troubleshooting
+       if (process.env.DEBUG) {
+         console.error(`[WorkerPool] Listener error for ${event}:`, err);
+       }
      }
    });
  }
```
402-441: Device registry read-modify-write lacks atomicity with concurrent orchestrators.

Multiple orchestrator instances can race when updating the same entry. Although `writeJsonAtomic` ensures atomic file replacement, the full cycle (read file → find/update entry → write file) is unprotected. If two orchestrators update concurrently, the second write overwrites the first, losing its update. The code acknowledges this as "best-effort," but file locking (as used in `profile-lock.ts`) could prevent data loss if stronger consistency is needed.
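If stronger consistency is ever wanted, a minimal sketch of a lock-guarded update could look like the following. The `withFileLock` helper, lock-file naming, and registry shape are assumptions for illustration only; the project's existing profile-lock utility would be the natural thing to reuse instead.

```ts
import { promises as fs } from "node:fs";

// Hypothetical lock helper: "wx" fails if another process already holds the lock
// (a real implementation would retry with backoff rather than throwing).
async function withFileLock<T>(lockPath: string, fn: () => Promise<T>): Promise<T> {
  const handle = await fs.open(lockPath, "wx");
  try {
    return await fn();
  } finally {
    await handle.close();
    await fs.unlink(lockPath).catch(() => {});
  }
}

// Read-modify-write performed entirely inside the lock, so two orchestrators
// cannot interleave between the read and the write.
async function updateRegistryEntry(file: string, id: string, patch: Record<string, unknown>) {
  await withFileLock(`${file}.lock`, async () => {
    const entries = JSON.parse(await fs.readFile(file, "utf8")) as Array<{ id: string }>;
    const idx = entries.findIndex((e) => e.id === id);
    if (idx >= 0) entries[idx] = { ...entries[idx], ...patch };
    await fs.writeFile(file, JSON.stringify(entries, null, 2)); // atomic replace can be layered on top
  });
}
```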
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (39)
- package.json
- prompts/orchestrator.ts
- schema/orchestrator.schema.json
- src/config/orchestrator.ts
- src/config/profiles.ts
- src/core/bridge-server.ts
- src/core/device-registry.ts
- src/core/file-monitor.ts
- src/core/jobs.ts
- src/core/message-bus.ts
- src/core/passthrough.ts
- src/core/progress.ts
- src/core/registry.ts
- src/core/worker-pool.ts
- src/index.ts
- src/memory/auto.ts
- src/memory/graph.ts
- src/memory/inject.ts
- src/memory/text.ts
- src/tools/index.ts
- src/tools/tools-ux.ts
- src/tools/tools-workers.ts
- src/types/index.ts
- src/ux/vision-router.ts
- src/vision/analyzer.ts
- src/worker-bridge-plugin.mjs
- src/workers/spawner.ts
- test/fixtures/orchestrator-configs/high-concurrency.json
- test/fixtures/profiles/slow-worker.json
- test/fixtures/profiles/test-worker.json
- test/helpers/fixtures.ts
- test/memory-auto.test.ts
- test/performance/memory-text.bench.ts
- test/unit/core/bridge-server.test.ts
- test/unit/core/message-bus.test.ts
- test/unit/core/passthrough.test.ts
- test/unit/memory/text.test.ts
- test/unit/vision/analyzer.test.ts
- test/vision-routing.test.ts
✅ Files skipped from review due to trivial changes (1)
- src/core/file-monitor.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- src/core/device-registry.ts
🧰 Additional context used
🧬 Code graph analysis (19)
src/core/bridge-server.ts (3)
- src/core/jobs.ts (1): workerJobs (151-151)
- src/core/message-bus.ts (1): messageBus (196-196)
- src/core/registry.ts (1): registry (10-10)

test/unit/vision/analyzer.test.ts (2)
- src/ux/vision-router.ts (4): hasImages (23-23), analyzeImages (55-148), formatVisionAnalysis (24-24), replaceImagesWithAnalysis (25-25)
- src/vision/analyzer.ts (5): hasImages (78-81), extractImages (90-99), analyzeImages (229-297), formatVisionAnalysis (306-316), replaceImagesWithAnalysis (321-351)

src/config/orchestrator.ts (1)
- src/helpers/format.ts (1): isPlainObject (61-61)

src/core/passthrough.ts (1)
- src/worker-bridge-plugin.mjs (3): workerId (6-6), text (34-34), text (57-57)

src/memory/inject.ts (3)
- src/memory/text.ts (1): shortenWithMarker (29-37)
- src/memory/graph.ts (4): MemoryNode (7-15), MemoryScope (5-5), getMemoryByKey (114-143), recentMemory (181-210)
- src/memory/neo4j.ts (2): Neo4jConfig (3-8), loadNeo4jConfigFromEnv (17-25)

test/memory-auto.test.ts (3)
- src/memory/neo4j.ts (3): Neo4jConfig (3-8), loadNeo4jConfigFromEnv (17-25), withNeo4jSession (47-58)
- src/memory/auto.ts (1): recordMessageMemory (33-214)
- src/memory/inject.ts (1): buildMemoryInjection (31-122)

src/index.ts (6)
- src/core/jobs.ts (1): workerJobs (151-151)
- src/core/message-bus.ts (1): messageBus (196-196)
- src/vision/analyzer.ts (4): hasImages (78-81), replaceImagesWithAnalysis (321-351), analyzeImages (229-297), formatVisionAnalysis (306-316)
- prompts/orchestrator.ts (1): orchestratorPrompt (8-37)
- src/core/passthrough.ts (3): getPassthrough (9-12), buildPassthroughSystemPrompt (52-64), isPassthroughExitMessage (33-50)
- src/memory/inject.ts (1): buildMemoryInjection (31-122)

test/unit/core/message-bus.test.ts (2)
- src/core/message-bus.ts (1): MessageBus (25-194)
- src/worker-bridge-plugin.mjs (1): after (106-106)

src/memory/auto.ts (3)
- src/memory/neo4j.ts (2): Neo4jConfig (3-8), loadNeo4jConfigFromEnv (17-25)
- src/memory/graph.ts (5): upsertMemory (42-78), linkMemory (80-112), getMemoryByKey (114-143), trimMemoryByKeyPrefix (212-274), trimGlobalMessageProjects (276-314)
- src/memory/text.ts (2): normalizeForMemory (24-27), appendRollingSummary (39-42)

src/tools/tools-workers.ts (4)
- src/workers/spawner.ts (1): sendToWorker (609-743)
- src/tools/state.ts (1): ToolContext (5-9)
- src/core/jobs.ts (1): workerJobs (151-151)
- src/core/message-bus.ts (1): messageBus (196-196)

src/core/registry.ts (1)
- src/core/worker-pool.ts (2): registry (598-619), workerPool (595-595)

test/vision-routing.test.ts (2)
- src/vision/analyzer.ts (3): formatVisionAnalysis (306-316), replaceImagesWithAnalysis (321-351), hasImages (78-81)
- src/models/hydrate.ts (1): hydrateProfileModelsFromOpencode (20-141)

src/core/progress.ts (3)
- src/core/logger.ts (1): logger (49-54)
- test/helpers/metrics.ts (1): formatDuration (525-531)
- src/vision/analyzer.ts (1): createVisionProgress (357-357)

src/vision/analyzer.ts (2)
- src/core/progress.ts (3): ProgressHandle (30-39), ToastFn (42-42), createVisionProgress (224-267)
- src/core/logger.ts (1): logger (49-54)

test/unit/core/passthrough.test.ts (1)
- src/core/passthrough.ts (6): clearAllPassthrough (29-31), getPassthrough (9-12), setPassthrough (14-23), clearPassthrough (25-27), isPassthroughExitMessage (33-50), buildPassthroughSystemPrompt (52-64)

src/memory/graph.ts (1)
- src/memory/neo4j.ts (2): withNeo4jSession (47-58), Neo4jConfig (3-8)

src/core/message-bus.ts (1)
- src/types/index.ts (1): WakeupPayload (290-303)

test/unit/core/bridge-server.test.ts (2)
- src/core/bridge-server.ts (1): startBridgeServer (36-201)
- src/core/message-bus.ts (1): messageBus (196-196)

src/ux/vision-router.ts (4)
- src/vision/analyzer.ts (3): VisionResult (29-35), analyzeImages (229-297), createVisionProgress (357-357)
- src/core/progress.ts (2): ToastFn (42-42), createVisionProgress (224-267)
- src/core/registry.ts (1): registry (10-10)
- src/workers/spawner.ts (1): spawnWorker (229-513)
🔇 Additional comments (98)
package.json (1)
26-27: LGTM! The new `bench` script follows a clean pattern for running performance benchmarks. Using `bun test` with glob patterns for `*.bench.ts` files is a sensible approach that separates benchmarks from regular unit tests.

src/config/orchestrator.ts (3)
97-97: LGTM! Parsing for `wakeupInjection` follows the established pattern for boolean UI options.
193-224: LGTM! The expanded memory configuration parsing correctly:

- Validates the parent object with `isPlainObject` before accessing nested properties
- Type-checks each property before assignment
- Follows the established pattern used elsewhere in this file (e.g., `notifications.idle`, `workflows.roocodeBoomerang`)
338-360: LGTM! The new defaults for memory configuration are well-structured and align with the schema definitions. The values appear reasonable (a rough sketch follows the list):
- Summaries enabled by default with 2000 char limits
- Trim limits provide sensible bounds for message retention
- Inject settings favor summaries over raw messages (good for context efficiency)
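For orientation only, the shape of those defaults can be pictured roughly as below. The property names and numeric bounds here are assumptions reconstructed from the review notes above, not copied from the config file.

```ts
// Illustrative sketch of the memory defaults described in this comment.
const memoryDefaults = {
  summaries: {
    enabled: true,   // summaries on by default
    maxChars: 2000,  // ~2000 character limit per summary
  },
  trim: {
    // assumed bounds on retained messages / projects
    maxMessagesPerSession: 200,
    maxGlobalMessageProjects: 50,
  },
  inject: {
    preferSummaries: true, // favor summaries over raw messages for context efficiency
    maxChars: 2000,
  },
};
```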
src/core/passthrough.ts (3)
1-12: LGTM! Clean session state management with proper null/undefined handling. The `getPassthrough` function correctly returns `undefined` for falsy session IDs.
14-23: `setPassthrough` preserves `enabledAt` on re-entry - verify this is intentional.

When updating an existing passthrough session with a different `workerId`, the code preserves the original `enabledAt` timestamp. This means switching workers mid-session won't reset the "enabled since" time. If you want fresh timestamps when changing workers, consider resetting `enabledAt` when `workerId` changes:

```diff
  const next: PassthroughState = {
    workerId,
-   enabledAt: existing?.enabledAt ?? Date.now(),
+   enabledAt: existing?.workerId === workerId ? (existing.enabledAt ?? Date.now()) : Date.now(),
    updatedAt: Date.now(),
  };
```
33-50: LGTM! The strict phrase matching via `Set` is a good design choice: O(1) lookups and explicit control over exit triggers prevent accidental exits from conversational text containing words like "back".
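As a rough illustration of why the Set approach is robust (the phrase list below is made up for the example, not the module's actual set):

```ts
// Hypothetical exit phrases - the real module defines its own list.
const EXIT_PHRASES = new Set(["exit passthrough", "back to orchestrator", "stop passthrough"]);

function isExitMessage(text: string): boolean {
  // Exact-phrase match after normalization: O(1) lookup, and a sentence that
  // merely contains the word "back" somewhere will not trigger an exit.
  return EXIT_PHRASES.has(text.trim().toLowerCase());
}

isExitMessage("Back to orchestrator");        // true
isExitMessage("I'll get back to you later");  // false
```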
21-26: LGTM!The
wakeupInjectionproperty is well-documented with a clear description of its purpose. Defaulttruealigns with the config parser defaults.
132-166: LGTM!The expanded memory schema properly defines all nested configuration blocks (
summaries,trim,inject) with:
- Consistent use of
additionalProperties: falsefor strict validation- Sensible defaults matching the TypeScript implementation
- Clear property names following existing conventions
src/types/index.ts (3)
34-35: LGTM!The
injectRepoContextproperty is a clear opt-in flag for docs workers that need repository context on launch.
116-148: LGTM!The
MemoryConfigtype expansion is comprehensive and well-documented with inline comments explaining each section's purpose. The structure mirrors the schema and config defaults.
289-312: LGTM!The
WakeupPayloadinterface provides a well-structured contract for worker-to-orchestrator wakeup events:
- Clear discriminated union for
reasonenables type-safe handling- Optional
datafield allows extensibility without breaking changes- The
worker:wakeupevent integration follows the existing pattern inOrchestratorEventssrc/core/progress.ts (2)
201-214: LGTM!The
formatDurationhelper provides human-readable output. The singleton export is appropriate for centralized progress management.
224-266: LGTM!The
createVisionProgresshelper provides a clean semantic API for vision-specific progress flows. The decision to create a dedicatedProgressManagerwhen a custom toast is provided ensures isolation and prevents interference with the global singleton.test/fixtures/orchestrator-configs/high-concurrency.json (1)
38-70: LGTM!Consistent model update across all stress worker profiles. The
opencode/gpt-5-nanonaming follows theprovider/modelconvention used elsewhere in the codebase.test/fixtures/profiles/slow-worker.json (1)
4-4: LGTM!Model standardization aligns with other test fixture updates. The slow behavior is correctly defined in
systemPromptrather than relying on the model name.test/unit/core/bridge-server.test.ts (1)
1-61: LGTM! Well-structured tests for bridge server messaging.The tests effectively verify:
- Message routing to orchestrator with jobId preservation
- Wakeup event generation for both messages and reports
- Integration between bridge endpoints and message bus
The setup/teardown is clean, and assertions cover both primary behavior and side effects.
test/helpers/fixtures.ts (1)
78-78: Model update aligns with test fixture standardization.The model change to "opencode/gpt-5-nano" is consistent with the broader test fixture updates across the codebase. Verification of model availability was already requested for the related fixture files.
test/unit/memory/text.test.ts (1)
1-31: LGTM! Comprehensive unit tests for memory text utilities.The tests cover the essential behaviors:
- Code block stripping and secret redaction in normalizeForMemory
- Hard length limits with trim indicators
- Rolling summary boundary enforcement
Test assertions verify both presence/absence of expected content and length constraints.
test/unit/core/passthrough.test.ts (1)
1-37: LGTM! Clean tests for passthrough session management.The tests effectively validate:
- Session CRUD operations with proper state isolation via beforeEach
- Exit command detection with case-insensitive matching
- System prompt generation includes required markers for internal bypass logic
Test structure is clear and focused.
src/config/profiles.ts (3)
39-39: New public API field added to docs profile.The
injectRepoContext: trueflag has been added to the docs worker profile. This is a new public API field that enables automatic repository context injection when the docs worker is spawned.Ensure this behavior is documented and that downstream code properly handles this field.
20-27: System prompt improvements enhance clarity.The expanded bullet-point formatting in system prompts across multiple profiles improves readability and makes worker responsibilities more explicit. These changes maintain the original intent while improving structure.
Also applies to: 44-51, 62-69, 85-92, 108-114, 127-135
15-15: The model identifier "zhipuai-coding-plan/glm-4.6v" is valid and vision-capable.GLM-4.6V is a multimodal vision-language model with native support for images, video, and documents, with a 128K token context window. Ensure your deployment environment has access to this Zhipu AI model.
test/performance/memory-text.bench.ts (1)
1-38: LGTM! Baseline performance benchmarks established.The benchmarks provide performance tracking for memory text utilities:
- normalizeForMemory: 500 iterations with realistic mixed content (code blocks, secrets, repeated text)
- appendRollingSummary: 500 iterations with large string inputs
The warmup iterations and basic validation (opsPerSecond > 0) are appropriate for establishing baseline metrics.
test/unit/core/message-bus.test.ts (4)
5-19: LGTM! Message handling and listener notification verified.The test confirms:
- jobId is preserved on messages
- Listeners receive message notifications
- Messages are stored and retrievable by recipient
- Message ID is properly generated and propagated
21-34: LGTM! Terminal wakeup deduplication works correctly.The test validates that duplicate terminal wakeups (result_ready, error) within the deduplication window:
- Return the same event ID
- Result in only one history entry
- Trigger the listener only once
This prevents redundant orchestrator wakeups for the same job completion.
36-41: LGTM! Progress wakeups bypass deduplication.The test confirms that progress updates are NOT deduplicated, allowing multiple progress notifications for the same job. This is correct behavior for streaming/incremental updates.
43-51: LGTM! Wakeup filtering logic validated.The test verifies hasWakeup correctly filters by:
- jobId (exact match)
- reason (exact match)
- after timestamp (events after threshold)
- Negative cases (non-matching filters return false)
prompts/orchestrator.ts (1)
1-37: LGTM!The orchestrator prompt is well-structured with clear sections for tool usage, delegation strategies, and the vision protocol. The explicit instruction to not use MCP tools directly and the clear vision handling protocol (pending vs completed analyses) aligns well with the wakeup/delegation architecture introduced in this PR.
src/core/bridge-server.ts (3)
101-124: LGTM!The message handling correctly propagates
jobIdthrough the message bus and appropriately wakes the orchestrator when messages are addressed to it. The wakeup payload includes proper correlation data (messageId, sessionId) for traceability.
136-167: LGTM!The wakeup endpoint is well-implemented with proper validation for required fields, worker activity tracking, and returns useful correlation data (id, timestamp) in the response.
169-181: LGTM!The wakeup history endpoint correctly handles query parameter parsing with appropriate defaults and validation. The integration with
messageBus.getWakeupHistory()is clean.test/unit/vision/analyzer.test.ts (1)
1-339: LGTM! Comprehensive test coverage for the vision analyzer module.The test suite provides excellent coverage across all exported functions with well-structured test cases covering:
- Edge cases (null/undefined inputs, empty arrays)
- Various image input formats (base64, data URLs, file parts, clipboard)
- Happy paths and error handling
- End-to-end integration workflows
The mock
sendToVisionWorkerpattern cleanly isolates the analyzer logic from the actual worker implementation.src/worker-bridge-plugin.mjs (3)
10-14: LGTM!The timeout configuration helper is well-implemented with sensible defaults and proper validation.
16-61: LGTM!The timeout implementation using
AbortControlleris correct. Thefinallyblock ensures the timer is always cleaned up, preventing potential resource leaks.
113-142: LGTM!The
wakeup_orchestratortool is well-designed with a descriptive docstring and appropriate arguments. The enum forreasonmatches the types expected by the/v1/wakeupendpoint.src/memory/text.ts (2)
29-37: LGTM! Note on edge case behavior.The shorten algorithm is well-designed. For extremely small
maxCharsvalues (smaller than the marker itself), the output would be just the marker with no actual content. This is likely fine since callers typically useclamp()to enforce minimum bounds (e.g., 200 chars inbuildMemoryInjection).
1-42: LGTM! Well-designed text processing utilities.The module provides a cohesive set of utilities for memory text handling:
- Secret redaction covers common patterns (API keys, tokens, PEM keys)
- Code block stripping keeps summaries concise
- The shorten algorithm intelligently preserves head/tail context
- Rolling summary appends new entries while respecting length limits
The composition in
normalizeForMemoryfollows a logical order (redact → strip → normalize → truncate).test/memory-auto.test.ts (2)
35-110: LGTM! Robust Docker-based test infrastructure.The Neo4j container setup is well-implemented:
- Unique container naming prevents collisions across parallel test runs
- Ephemeral port mapping enables test isolation
- Comprehensive error handling with proper cleanup on all failure paths
- Informative error messages include container logs for debugging
129-265: LGTM! Comprehensive integration test coverage.The test effectively validates the full memory recording flow including:
- Message recording and retrieval
- Project/user node creation
- Session and project trimming enforcement
- Summary generation and bounds checking
- Memory injection output bounds
The use of parameterized Cypher queries in assertions is good practice.
src/memory/inject.ts (1)
31-122: LGTM! Well-structured memory injection builder.The function cleanly assembles memory context with:
- Proper early exits for disabled/unconfigured states
- Sensible defaults and bounds for all configuration options
- Graceful error handling (swallows failures, returns best-effort output)
- Clear markdown structure with appropriate section allocation
The allocation strategy (50% project summary, 35% session summary, remainder for notes) is reasonable for prioritizing high-level context.
src/tools/tools-workers.ts (3)
110-126: LGTM! Worker-to-worker communication support added.The
fromparameter enables tracking message sources in inter-worker communication. The implementation correctly propagates this through tosendToWorker.
156-162: LGTM! Session context now tracked in async jobs.Adding
sessionIdandrequestedByto job creation enables proper session-scoped handling and ownership tracking for the wakeup flow.
173-188: Verify the 250ms delay assumption for fallback wakeups.The fallback wakeup logic correctly handles cases where workers don't call
wakeup_orchestrator. However, the 250ms delay assumes worker-initiated wakeups will arrive within that window. If network latency or worker processing causes delays beyond 250ms, duplicate wakeups could occur (thoughhasWakeupcheck should mitigate this).Consider whether 250ms is sufficient for all expected scenarios, or if this value should be configurable.
src/memory/graph.ts (5)
17-21: LGTM! Centralized projectId validation.The
requireProjectIdhelper provides consistent enforcement of the project scope contract - throwing early ifprojectIdis missing when required.
54-68: LGTM! Dynamic merge pattern for scope-aware upserts.The conditional pattern building correctly handles both global and project scopes, and the spread operator ensures
projectIdis only included in params when relevant.
114-143: LGTM! Key-based memory retrieval added.The
getMemoryByKeyfunction follows the established pattern for scope-aware queries and correctly returnsundefinedwhen no record is found.
212-274: LGTM! Prefix-based trimming with keepLatest support.The implementation correctly handles two cases:
keepLatest=0: Deletes all matching nodeskeepLatest>0: Orders byupdatedAt DESCand deletes older entriesThe use of
DETACH DELETEensures relationships are properly cleaned up.
276-314: No issue here—message keys follow a consistent, deterministic format.The keys with "message:" prefix are always constructed in
src/memory/auto.tsvia therecordMessagefunction, which generates them as eithermessage:{projectId}:{session}:{keyBase}(for global scope) ormessage:{session}:{keyBase}(for project scope). SincetrimGlobalMessageProjectsoperates on global scope, the split onsplit(n.key, ':')[1]reliably extracts the projectId or "unknown" as expected. The format is enforced at the point of creation, so no validation is needed.Likely an incorrect or invalid review comment.
src/memory/auto.ts (3)
29-31: LGTM! Simple clamp utility.Standard numeric clamping function for bounding input values.
48-51: Key format differs between scopes - ensure consistency with trimming logic.For global scope:
message:${projectId}:${session}:${keyBase}
For project scope:message:${session}:${keyBase}This aligns with the
trimGlobalMessageProjectsfunction which usessplit(key, ':')[1]to extract projectId from global keys. The format is consistent.
77-103: LGTM! Global index mirroring for cross-project retrieval.Maintaining lightweight global indexes for users and projects enables cross-project queries. The duplicate upserts (scope-specific and global) ensure data is accessible from both contexts.
test/vision-routing.test.ts (5)
1-28: LGTM! Well-structured test setup.The test file includes appropriate imports, a minimal valid PNG fixture for testing, and clear organization into describe blocks.
77-176: Excellent coverage for hasImages detection.The tests comprehensively cover:
- Different image part types (type='image', type='file' with mime)
- Various MIME types (png, jpeg, webp)
- Data URLs and clipboard URLs
- Negative cases (text-only, non-image files, empty arrays)
- Malformed inputs (null, undefined, non-array)
232-254: Good MIME type inference tests.Testing that file extensions (.jpg, .webp) correctly infer MIME types ensures the prompt building logic handles real-world file attachments properly.
462-499: Concurrent call tests verify serialization behavior.The tests correctly verify that concurrent
analyzeImagescalls are handled gracefully without transient "busy" errors, and that deduplication works as expected with matchingrequestKeyvalues.
517-527: LGTM! Proper test cleanup.The
afterEachhook ensures spawned workers are cleaned up between tests, preventing resource leaks and test interference.src/core/registry.ts (2)
1-10: LGTM! Clean backwards-compatibility layer.The module correctly documents its deprecated status and provides a compatibility shim by re-exporting from the new
worker-poolmodule.
13-30: WorkerRegistry class correctly delegates to workerPool.Using
.bind(workerPool)ensures properthiscontext for all delegated methods. This maintains API compatibility while routing to the new implementation.src/tools/tools-ux.ts (4)
43-50: LGTM! Auto-spawn logic for passthrough mode.The code correctly checks if the worker exists and is not stopped before attempting to spawn. Silently catching spawn errors with
.catch(() => {})is appropriate here since passthrough can still work if the worker already exists.
63-97: LGTM! Generic passthrough mode tool.
setPassthroughModegeneralizesenableDocsPassthroughto work with any worker, with proper validation and error messaging when the profile doesn't exist.
287-356: Comprehensive orchestrator output tool.The
orchestratorOutputtool provides a unified view of:
- Wakeup events
- Inter-agent messages
- Job history
- Internal logs
This is valuable for debugging and monitoring orchestrator activity.
614-671: LGTM! Wakeup history and clear tools.The
orchestratorWakeupstool provides good filtering options (workerId, limit, after, clear) and clear documentation of what wakeup reasons mean. The emoji-based formatting improves readability.src/core/message-bus.ts (5)
14-23: LGTM! Well-defined wakeup types and constants.The types clearly extend
WakeupPayloadwith anid, and the constants provide reasonable defaults for history size (100) and dedupe window (1.5s).
41-53: LGTM! Message listener notification added.The send method now notifies registered message listeners. Error handling correctly swallows listener errors to prevent one bad listener from blocking others.
87-133: Wakeup deduplication logic is sound.The implementation:
- Only deduplicates terminal events (result_ready, error) with a jobId
- Uses a time-based window (1.5s) to coalesce rapid duplicates
- Opportunistically prunes expired entries to prevent memory growth
- Returns the cached event for duplicates, ensuring consistent IDs
This prevents redundant wakeups while allowing progress updates.
144-153: hasWakeup filter logic is correct.The method correctly applies optional filters (workerId, jobId, reason, after) with AND semantics, returning true if any matching event exists.
188-193: Dedupe key excludes progress events intentionally.Only
result_readyanderrorreasons generate dedupe keys, allowing multiple progress updates for the same job. This is the correct behavior for progress tracking.src/ux/vision-router.ts (5)
1-26: LGTM! Clear deprecation and backwards-compat re-exports.The module clearly documents its deprecated status and provides re-exports from the new analyzer module for existing consumers.
83-110: Vision worker selection logic is sound.The code correctly:
- Prefers a ready vision worker
- Falls back to any vision worker
- Spawns a new one if needed and allowed
- Returns a clear error if none available
112-123: LGTM! Worker readiness wait with timeout.The code waits for the worker to become ready with a capped timeout (5 minutes max), providing appropriate error feedback if it times out.
154-176: LGTM! Clean registry-based readiness observer.The
waitForWorkerReadyhelper correctly:
- Checks existing state first
- Sets up an event listener for updates
- Cleans up the listener on completion or timeout
- Uses a
doneflag to prevent race conditions
182-198: Diagnostics returns placeholder values for removed features.The
queueDepthandinFlightRequestsreturn 0 with comments noting they're no longer used. This maintains API compatibility while indicating the values are no longer meaningful.src/tools/index.ts (3)
46-61: LGTM! New tool exports are properly added.The new exports (
clearWakeups,clearPassthroughMode,orchestratorOutput,setPassthroughMode,orchestratorWakeups) are correctly re-exported from./tools-uxand align with the expanded orchestrator capabilities for wakeup and passthrough control.
134-144: LGTM! Core orchestrator tools surface expanded appropriately.The additions to
coreOrchestratorToolsproperly expose the new observability (orchestrator_output,orchestrator_wakeups) and passthrough control tools (set_passthrough,clear_passthrough,enable_docs_passthrough). The grouping with comments is clear.
179-183: LGTM! Plugin tools include wakeup capabilities.The wakeup tools (
orchestrator_wakeups,clear_wakeups) are correctly added topluginToolsfor plugin-level access.src/index.ts (6)
72-76: LGTM! Safe environment variable parsing with fallback.The IIFE correctly parses
OPENCODE_VISION_TIMEOUT_MS, validates it's a positive finite number, and falls back to 300,000ms (5 minutes) if invalid.
104-117: LGTM! Wakeup session tracking is well-structured.The session ID resolution logic (
resolveWakeupTargetSessionId) correctly prioritizes the event's session ID, then falls back to job-based lookup, then to the current orchestrator session.
456-509: LGTM! Async vision analysis with proper job tracking and error handling.The vision analysis flow correctly:
- Creates a job for tracking
- Injects a placeholder immediately (non-blocking UX)
- Runs analysis asynchronously
- Updates job state on success/error
- Emits wakeup events for orchestrator resumption
- Has comprehensive error handling with toast notifications
536-540: LGTM! Memory recording configuration extended.The new
summariesandtrimconfiguration options are correctly passed through torecordMessageMemory.
549-549: LGTM! Proper cleanup of wakeup listener on disposal.The
unsubscribeWakeup()call ensures the message bus listener is cleaned up when the server instance is disposed.
31-31: The import path"../prompts/orchestrator"is correct. Theprompts/orchestrator.tsfile is located at the repository root, and fromsrc/index.ts, using../prompts/orchestratorcorrectly navigates up from thesrc/directory to access it. Changing it to./prompts/orchestratorwould be incorrect, as there is nosrc/prompts/directory.Likely an incorrect or invalid review comment.
src/vision/analyzer.ts (5)
1-11: LGTM! Clear module documentation explaining simplifications.The header comment clearly documents the design decisions: removing internal queue, deduplication, and logging in favor of delegating to worker-pool and progress API. This is a good architectural simplification.
66-81: LGTM! Comprehensive image part detection.The
isImagePartfunction correctly identifies images from multiple formats: directtype: "image", file parts with image MIME types, data URLs, and clipboard references. ThehasImageswrapper safely handles non-array inputs.
90-99: LGTM! Parallel image extraction for performance.Using
Promise.allfor parallel extraction is appropriate here since image reads are I/O-bound and independent.
229-297: LGTM! Well-structured analysis workflow with progress tracking.The
analyzeImagesfunction has:
- Proper timeout configuration with sensible default (5 minutes)
- Progress updates at logical stages
- Clean success/error result formatting
- Duration tracking for observability
306-316: LGTM! Clean result formatting with trimming.The formatter correctly handles success cases with trimming and error cases, returning
undefinedfor empty/invalid results.src/workers/spawner.ts (7)
46-72: LGTM! Plugin path resolution with proper file URL handling.The resolver correctly:
- Checks user config directory first
- Falls back to relative paths from the module
- Uses
pathToFileURLto avoid package specifier misinterpretation by Bun
74-92: LGTM! Worker bridge tool verification with clear error messages.The
checkWorkerBridgeToolsfunction properly verifies required tools (message_tool,worker_inbox,wakeup_orchestrator) are available, with descriptive error extraction from the SDK response.
186-199: LGTM! Robust process group termination for cleanup.The close function correctly:
- Kills the process group on POSIX (
-pid) to terminate children- Falls back to individual process kill on Windows or if group kill fails
- Has a 2-second grace period before SIGKILL
324-334: LGTM! Preflight tool verification with actionable error messages.The error message when bridge tools are missing includes:
- Which tools are missing
- Which tools were loaded
- The plugin path being used
This provides excellent debuggability.
411-463: LGTM! Clear worker prompt with communication protocol documentation.The injected prompt provides workers with:
- Available communication tools and their purposes
- Required behavior for job reporting
- Fallback guidance if tools fail
This is well-structured and actionable for the LLM.
640-664: LGTM! Source context and job instructions in worker messages.The message construction properly:
- Identifies the source (orchestrator or worker) in a parseable XML block
- Adds job-specific instructions when a jobId is present
- Uses sync-mode instructions for non-job requests to prevent unnecessary tool calls
97-104: No action needed. The review comment is based on an incorrect premise:worker-pool.tsdoes not exist in the repository. Additionally, the functions are not duplicates—the version insrc/helpers/process.tshas an extended signature with anoptions?: IsProcessAliveOptionsparameter and includes special handling for EPERM errors, while the version inspawner.tsis a simpler utility used locally within that module.Likely an incorrect or invalid review comment.
src/core/worker-pool.ts (4)
1-14: LGTM! Clear module documentation explaining consolidation.The header effectively communicates that this module consolidates three previous modules (registry, device-registry, profile-lock) and lists the key improvements: single spawn gate, warm worker cache, automatic cleanup, and event-based updates.
157-192: LGTM! Spawn deduplication prevents concurrent duplicate spawns.The
getOrSpawnmethod properly:
- Checks in-memory registry first
- Returns in-flight spawn promise if exists (deduplication)
- Tries device registry reuse
- Creates new spawn with promise tracking
- Cleans up tracking in finally block
595-619: LGTM! Backwards compatibility layer for gradual migration.The
registryexport provides a compatibility shim that delegates toworkerPool, allowing existing code to continue working while consumers migrate to the new API.
554-571: LGTM! Cleanup uses Promise.allSettled for resilience.The
stopAllmethod correctly usesPromise.allSettledto ensure all workers are attempted to be stopped even if some fail. State is properly cleared after cleanup.
```ts
const handle: ProgressHandle = {
  update: (status: string, percent?: number) => {
    this.updateProgress(id, status, percent);
  },
  complete: (message?: string) => {
    this.completeProgress(id, message);
  },
  fail: (error: string) => {
    this.failProgress(id, error);
  },
  get: () => ({ ...progress }),
};

return handle;
```
get() returns stale snapshot—update to fetch current state.
The get() method captures progress from the closure at creation time and returns a spread of that initial object. However, update, complete, and fail mutate the object stored in this.active, not the closure's progress reference. This causes get() to return outdated data.
🔎 Proposed fix
```diff
  const handle: ProgressHandle = {
    update: (status: string, percent?: number) => {
      this.updateProgress(id, status, percent);
    },
    complete: (message?: string) => {
      this.completeProgress(id, message);
    },
    fail: (error: string) => {
      this.failProgress(id, error);
    },
-   get: () => ({ ...progress }),
+   get: () => {
+     const current = this.active.get(id);
+     return current ? { ...current } : { ...progress };
+   },
  };
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
const handle: ProgressHandle = {
  update: (status: string, percent?: number) => {
    this.updateProgress(id, status, percent);
  },
  complete: (message?: string) => {
    this.completeProgress(id, message);
  },
  fail: (error: string) => {
    this.failProgress(id, error);
  },
  get: () => {
    const current = this.active.get(id);
    return current ? { ...current } : { ...progress };
  },
};

return handle;
```
🤖 Prompt for AI Agents
In src/core/progress.ts around lines 99 to 112, the returned handle's get()
currently closes over the initial `progress` snapshot and returns that stale
copy; change get() to read the up-to-date progress from this.active[id] (or
return a safe empty/default if missing) and return a shallow copy of that live
object so updates via update/complete/fail are reflected immediately.
| "id": "test-worker", | ||
| "name": "Test Worker", | ||
| "model": "test-model", | ||
| "model": "opencode/gpt-5-nano", |
🧩 Analysis chain
🌐 Web query:
opencode/gpt-5-nano model identifier OpenRouter API
💡 Result:
Use the OpenRouter model ID "openai/gpt-5-nano". See OpenRouter docs/examples showing model parameter as "openai/gpt-5-nano". [1][2][3]
Sources:
- OpenRouter SDK callModel example (model parameter example "openai/gpt-5-nano"). [1]
- OpenRouter text-generation example using model: 'openai/gpt-5-nano'. [2]
- OpenRouter models/quickstart pages referencing the OpenAI GPT‑5 Nano listing. [3]
Update model identifier from "opencode/gpt-5-nano" to "openai/gpt-5-nano".
The model identifier is incorrect. OpenRouter documentation confirms the valid model ID is "openai/gpt-5-nano" with the "openai/" prefix, not "opencode/". Using the current identifier will cause API calls to fail with an invalid model error.
🤖 Prompt for AI Agents
In test/fixtures/profiles/test-worker.json around line 4, the model identifier
is incorrect ("opencode/gpt-5-nano"); update that value to "openai/gpt-5-nano"
so API calls use the valid OpenRouter model ID; ensure the JSON string is
properly quoted and the file remains valid JSON after the change.
Superseded by PR #5 which merged v0.2.3 directly to main. The release branch is no longer needed.
Description
Release v0.2.1 preparation branch.
Type of Change
What's Included
This PR serves as the tracking PR for the v0.2.1 release. Additional feature/fix PRs should target this branch.
Infrastructure Added
Release Checklist
Workflow
🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Documentation
Chores
Tests