Conversation

0xSero (Owner) commented Dec 22, 2025

Description

Release v0.2.1 preparation branch.

Type of Change

  • Release preparation

What's Included

This PR serves as the tracking PR for the v0.2.1 release. Additional feature/fix PRs should target this branch.

Infrastructure Added

  • GitHub Actions CI workflow (build + test)
  • PR template
  • CODEOWNERS file
  • Branch protection documentation

Release Checklist

  • All feature/fix PRs merged to this branch
  • CHANGELOG.md updated with release date
  • Version bump in package.json (if needed)
  • All tests passing
  • Documentation updated

Workflow

feature/* ──┐
            ├──► release/v0.2.1 (this PR) ──► main
fix/*     ──┘

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • Worker wakeup system (send, view, clear, history) and orchestrator wakeup tooling.
    • Vision features: automatic image detection, analysis, and injection of results into chats.
    • Passthrough mode to route sessions directly to workers; repo-context injection into worker prompts.
    • Memory auto-injection and progress reporting for long-running tasks; new orchestrator visibility tools.
  • Documentation

    • Added configuration, quickstart, examples, troubleshooting, README updates and changelog.
  • Chores

    • CI workflow, CODEOWNERS, and branch-protection docs added.
  • Tests

    • New vision analyzer and routing test suites (unit, integration, performance).


0xSero and others added 3 commits December 22, 2025 05:55
- Add CHANGELOG entry for v0.2.1 (unreleased)
- Add GitHub CI workflow (build + test)
- Add PR template
- Add CODEOWNERS file
- Add branch protection documentation

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
- Enable automated PR reviews for all pull requests
- Configure path filters to ignore dist/node_modules
- Enable high-level summaries and review status

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>

coderabbitai bot commented Dec 22, 2025

Walkthrough

Introduces an orchestrator wakeup flow (worker → bridge → message-bus → orchestrator) with endpoints, history, and tools; adds a vision analyzer and image routing, repo-context utilities, a worker-pool/spawner redesign, passthrough and progress systems, memory injection/trim features, schema/config/profile updates, new UX tools, tests, and governance/CI/docs.

Changes

Cohort / File(s) Summary
Repo governance & CI
.coderabbit.yaml, .github/BRANCH_PROTECTION.md, .github/CODEOWNERS, .github/pull_request_template.md, .github/workflows/ci.yml
New CodeRabbit config, branch protection guidance, CODEOWNERS, PR template, and Bun-based CI workflow (build & test).
Docs & Quickstart
CHANGELOG.md, README.md, docs/* (configuration.md, examples.md, quickstart.md, troubleshooting.md, architecture.md, guide.md)
Large documentation additions and edits (quickstart, config reference, examples, troubleshooting, changelog, architecture/guide model note).
Core wakeup API & bus
src/core/bridge-server.ts, src/core/message-bus.ts, src/types/index.ts
Adds /v1/wakeup POST and /v1/wakeup/history GET; MessageBus wakeup events, listeners, dedupe, capped history, get/clear APIs; new WakeupPayload/Event types and worker:wakeup event.
Orchestrator injection & UX tools
src/index.ts, src/tools/index.ts, src/tools/tools-ux.ts, src/tools/tools-workers.ts
Subscribes to wakeups, conditional wakeup injection into orchestrator sessions, vision placeholders/analysis integration; new UX tools: orchestratorWakeups/orchestratorOutput/clearWakeups and passthrough controls; askWorker tools support from and session/requester tracking.
Vision analyzer & router
src/vision/analyzer.ts, src/ux/vision-router.ts, test/vision-routing.test.ts, test/unit/vision/analyzer.test.ts
New focused vision analyzer: hasImages, extractImages, analyzeImages, format/replace utilities; compatibility wrapper in ux/vision-router with optional worker spawn, diagnostics, and comprehensive tests.
Repo-context & passthrough
src/ux/repo-context.ts, src/core/passthrough.ts
New repo context utilities for bounded markdown summaries; new passthrough subsystem with session state, exit detection, and system prompt builder.
Worker lifecycle & pool
src/core/worker-pool.ts, src/core/registry.ts, src/workers/spawner.ts, src/core/device-registry.ts, src/core/runtime.ts
New WorkerPool centralizing spawn dedupe, persistence, ownership; registry now proxies pool; spawner refocuses on spawn with preflight tool checks, repo-context injection, enhanced prompts; device registry now captures hostPid; runtime adds structured shutdown and orphan pruning.
Bridge plugin & worker bridge
src/worker-bridge-plugin.mjs
Adds timeout-enabled fetch, includes jobId on messages, and exposes new wakeup_orchestrator tool that posts to /v1/wakeup.
Message/job surface & tools
src/core/jobs.ts, src/core/message-bus.ts, src/tools/tools-workers.ts
WorkerJob extended with sessionId/requestedBy; message-bus adds onMessage and message listener APIs; askWorkerAsync schedules fallback wakeups and propagates session/requester.
Memory & injection
schema/orchestrator.schema.json, src/config/orchestrator.ts, src/memory/* (auto.ts, graph.ts, inject.ts, text.ts)
Schema and config parsing extended with memory.summaries/trim/inject and ui.wakeupInjection/notifications.autoInject; memory modules add summaries, trimming, injection builder, graph helpers, and text utilities (redaction/truncation/rolling summaries).
Progress & logging
src/core/progress.ts, src/core/process-metrics.ts
New ProgressManager and vision progress helpers; process listing tightened to match exact "opencode serve --hostname" invocation.
Tools exports & public API
src/tools/index.ts, src/tools/tools-ux.ts
Re-exports new UX tools and passthrough controls; core/plugin tool maps extended with orchestrator_output, orchestrator_wakeups, set_passthrough, clear_passthrough, enable_docs_passthrough.
Vision tests & benchmarks
test/* (vision routing, memory-auto, memory-text bench, unit tests)
Adds multiple unit/integration tests and a performance benchmark for memory text utilities; memory tests now optionally run Neo4j in Docker.
Misc types & profiles
src/config/profiles.ts, src/models/hydrate.ts, prompts/orchestrator.ts, src/types/index.ts
Profile changes (vision model updates, docs.injectRepoContext), explicit provider collection in hydrate, new orchestratorPrompt constant, and added types (WakeupPayload, memory fields, injectRepoContext).
Small/formatting
various files (src/core/file-monitor.ts, package.json, test fixtures, etc.)
Minor formatting, test fixture model updates, new NPM bench script, and small behavioral tweaks across modules.
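To make the wakeup surface summarized above concrete, here is a minimal sketch of the payload contract and the bridge's required-field check. The field names (workerId, reason, jobId, summary, data) and the reason values are taken from the walkthrough and tool description in this review; the real definitions live in src/types/index.ts and src/core/bridge-server.ts and may differ.

```typescript
// Sketch only: field names follow this review's walkthrough, not verified
// against the actual source of src/types/index.ts.
type WakeupReason = "result_ready" | "needs_attention" | "error" | "progress";

interface WakeupPayload {
  workerId: string;
  reason: WakeupReason;
  jobId?: string;
  summary?: string;
  data?: Record<string, unknown>;
}

interface WakeupEvent extends WakeupPayload {
  id: string;        // assigned by the bus (e.g. via randomUUID())
  timestamp: number; // assigned when the bridge receives the POST
}

// Mirrors the /v1/wakeup validation described below: workerId and reason
// are the required fields.
function validateWakeup(body: unknown): body is WakeupPayload {
  if (typeof body !== "object" || body === null) return false;
  const b = body as Record<string, unknown>;
  return typeof b.workerId === "string" && typeof b.reason === "string";
}
```

A payload passing this check is what a worker would POST to /v1/wakeup; the bus then decorates it into a WakeupEvent.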

Sequence Diagram(s)

sequenceDiagram
    actor Worker
    participant Bridge as BridgeServer (/v1/wakeup)
    participant MB as MessageBus
    participant Orch as Orchestrator (Session)
    participant Tools as UI/Toasts

    Worker->>Bridge: POST /v1/wakeup { workerId, reason, jobId?, summary?, data? }
    Bridge->>MB: messageBus.wakeup(payload + timestamp)
    MB->>MB: dedupe/store capped WakeupEvent history
    MB->>Orch: notify onWakeup listeners (async)
    Orch->>Orch: check ui.wakeupInjection, cooldown, session context
    alt injection allowed
        Orch->>Orch: inject resume/wakeup prompt into active session (promptAsync)
        Orch->>Tools: show toast + notification
    else injection skipped
        Orch->>Tools: show toast only
    end

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Poem

🐇 I nudged the sleepy wakeup bell,
A bridge replied, the messages swell.
Vision peeks and repo notes compile,
Workers hum and cleanup smiles a while.
Hooray — the warren wakes in style 🌿

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 20.48% which is insufficient. The required threshold is 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title 'Release v0.2.1' directly and clearly summarizes the primary change—preparing and releasing version 0.2.1 of the project. It is concise, specific, and reflects the main purpose of the PR.

Comment @coderabbitai help to get the list of available commands and usage tips.

coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (3)
CHANGELOG.md (1)

8-19: Consider documenting the runtime shutdown fix.

The commit messages mention "fix issue with instance tracking causing sessions not to be terminated with orchestrator." This behavioral change (graceful shutdown orchestration with timeout handling) should be captured under the ### Fixed section before release.

🔎 Suggested entry
 ### Fixed
-- _Document bug fixes here_
+- Fixed instance tracking issue causing sessions not to terminate properly with orchestrator shutdown
.github/workflows/ci.yml (2)

15-17: Consider pinning the Bun version for reproducible CI.

Using bun-version: latest may cause unexpected CI failures if Bun releases a breaking change. Consider pinning to a specific version for stability.

🔎 Suggested change
       - uses: oven-sh/setup-bun@v2
         with:
-          bun-version: latest
+          bun-version: "1.1.38"  # or your current version

Also applies to: 33-35


9-41: Optional: Reduce duplication by running build and test in a single job.

Both jobs independently checkout and install dependencies. For a smaller project, combining them into one job reduces CI time (no duplicate install) while still providing clear step-level visibility.

This is purely optional—separate jobs provide better parallelism and clearer failure isolation.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4650e52 and d73b328.

📒 Files selected for processing (7)
  • .coderabbit.yaml
  • .github/BRANCH_PROTECTION.md
  • .github/CODEOWNERS
  • .github/pull_request_template.md
  • .github/workflows/ci.yml
  • CHANGELOG.md
  • src/core/runtime.ts
🧰 Additional context used
🧬 Code graph analysis (1)
src/core/runtime.ts (1)
src/core/registry.ts (1)
  • registry (214-214)
🪛 LanguageTool
.github/pull_request_template.md

[style] ~7-~7: Consider using a different verb for a more formal wording.
Context: ...- [ ] Bug fix (non-breaking change that fixes an issue) - [ ] New feature (non-breaki...

(FIX_RESOLVE)

🪛 markdownlint-cli2 (0.18.1)
.github/BRANCH_PROTECTION.md

79-79: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🔇 Additional comments (4)
.github/pull_request_template.md (1)

1-26: LGTM!

Well-structured PR template covering essential sections. The checklist appropriately references the project's tooling (bun test).

.github/BRANCH_PROTECTION.md (1)

1-111: LGTM overall!

Comprehensive branch protection documentation with clear setup instructions, CLI commands for automation, and alignment with the CI workflow's build and test job names.

src/core/runtime.ts (2)

18-47: LGTM - Well-designed graceful shutdown with timeout fallback.

The shutdown orchestration properly:

  • Uses promise deduplication to ensure idempotent shutdown
  • Races graceful shutdown against a 6-second timeout
  • Falls back to SIGKILL for unresponsive workers
  • Closes the bridge server

One minor note: after the timeout, SIGKILL is sent to all captured workers regardless of whether they already shut down gracefully. This is harmless (the error is caught) but slightly wasteful. Consider tracking which workers completed shutdown if this becomes a performance concern.
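The race-with-deadline pattern described above can be sketched as follows. This is a simplified illustration of the technique, not the project's actual runtime.ts; the 6-second window comes from the review text and all names here are illustrative.

```typescript
// Race a graceful-shutdown promise against a deadline; a `fallback` result
// signals "timed out, escalate".
async function withDeadline<T>(work: Promise<T>, ms: number, fallback: T): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<T>((resolve) => {
    timer = setTimeout(() => resolve(fallback), ms);
  });
  try {
    return await Promise.race([work, timeout]);
  } finally {
    clearTimeout(timer); // don't keep the event loop alive after the race
  }
}

// Usage shape: a `false` result means the 6s window elapsed, so force-kill.
async function shutdownWithFallback(
  graceful: () => Promise<void>,
  forceKill: () => void,
): Promise<boolean> {
  const finished = await withDeadline(graceful().then(() => true), 6_000, false);
  if (!finished) forceKill(); // e.g. SIGKILL the remaining workers
  return finished;
}
```

Clearing the timer in `finally` is what makes the idempotent-shutdown path clean: the fast branch does not leave a stray 6-second timeout pending.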


64-78: LGTM - Signal handling is correct.

The signal handlers properly:

  • Use correct exit codes (130 for SIGINT, 143 for SIGTERM per POSIX conventions)
  • Prevent double-handling via shutdownRequested flag
  • Ensure cleanup runs before exit via async IIFE
  • Handle beforeExit for normal process termination
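The exit codes cited above follow the shell convention of 128 plus the signal number: SIGINT is signal 2 (exit 130) and SIGTERM is signal 15 (exit 143). A small sketch of that mapping:

```typescript
// POSIX shell convention: a process killed by signal N reports exit 128 + N.
const SIGNAL_NUMBERS: Record<string, number> = {
  SIGINT: 2,   // Ctrl-C
  SIGTERM: 15, // polite termination request
};

function exitCodeForSignal(signal: string): number {
  const num = SIGNAL_NUMBERS[signal];
  if (num === undefined) throw new Error(`unknown signal: ${signal}`);
  return 128 + num;
}
```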

coderabbitai bot left a comment

Actionable comments posted: 6

🧹 Nitpick comments (8)
docs/configuration.md (1)

41-41: Minor: Hyphenate compound adjective.

The heading "Full Featured Config" should be "Full-Featured Config" when the compound adjective precedes the noun.

Suggested fix
-### Full Featured Config
+### Full-Featured Config
src/index.ts (1)

318-322: Consider session selection strategy for wakeup injection.

The current implementation always updates currentOrchestratorSessionId to the most recent orchestrator agent message, meaning wakeup injections will target the latest active orchestrator session. This could lead to injections into unintended sessions if a user has multiple orchestrator sessions or switches contexts quickly.

Consider if the current "last-wins" approach is intended, or if a more sophisticated session selection strategy (e.g., matching jobId to originating session) would be more appropriate.

src/core/runtime.ts (2)

19-50: Graceful shutdown with timeout looks solid.

The race pattern between shutdownAllWorkers() and the timeout is appropriate. A few observations:

  1. Line 23: The snapshot of workers is taken before shutdownAllWorkers(), but workers could theoretically be added during shutdown. This is likely fine since shutdown is typically terminal.

  2. Lines 30-40: The force-kill loop iterates over the original workers snapshot but checks worker.shutdown to identify workers that have a shutdown method. The condition worker.shutdown && typeof worker.pid === "number" seems inverted for the intent—workers with a shutdown method are being force-killed, but those are the ones that should have been gracefully shut down. Consider checking for workers that are still alive instead.

🔎 Consider checking if worker process is still alive before force-killing
       if (!finished) {
         for (const worker of workers) {
-          if (worker.shutdown && typeof worker.pid === "number") {
+          if (typeof worker.pid === "number") {
             try {
               // Try process-group kill first (covers grand-children if worker was spawned detached).
               if (process.platform !== "win32") process.kill(-worker.pid, "SIGKILL");
               else process.kill(worker.pid, "SIGKILL");
             } catch {
               // ignore
             }
           }
         }
       }

177-229: cleanupOrphanedServeProcesses performs system-wide process scanning—verify safety.

The ps aux parsing approach is fragile:

  1. Line 192: The pattern "opencode serve --hostname" might not match if arguments are reordered or have different formatting.

  2. Line 197: Assumes ps aux output format where PID is in column index 1. This may vary across Unix variants (e.g., BSD vs GNU).

  3. Lines 206-224: The logic skips processes with live parents, but a process could be adopted by init (PID 1) if its parent died—the current check at line 215 (ppid > 1) handles this correctly.

Consider using pgrep for more reliable process matching, or document the expected ps output format:

pgrep -f "opencode serve --hostname"
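The fragility being flagged is easy to see in a sketch of the parsing itself. Extracting the PID assumes it sits in whitespace-separated column 1 of each `ps aux` line, which holds for common Linux/BSD output but is not guaranteed across Unix variants; this helper is illustrative, not the project's code.

```typescript
// Pull the PID out of a `ps aux` line, assuming the conventional
// USER PID %CPU %MEM ... column layout (an assumption, per the review).
function extractPid(psLine: string): number | undefined {
  const columns = psLine.trim().split(/\s+/);
  const pid = Number(columns[1]);
  return Number.isInteger(pid) && pid > 0 ? pid : undefined;
}
```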
src/core/message-bus.ts (1)

99-112: Non-null assertion in getWakeupHistory is safe but could be cleaner.

Line 107: options.after! uses a non-null assertion inside a block guarded by if (options?.after). This is safe since the condition guarantees after is truthy, but TypeScript's narrowing doesn't propagate into the callback.

🔎 Extract to a const for cleaner code
     if (options?.after) {
-      events = events.filter((e) => e.timestamp > options.after!);
+      const afterTs = options.after;
+      events = events.filter((e) => e.timestamp > afterTs);
     }
src/workers/spawner.ts (2)

73-89: checkWorkerBridgeTools has fragile SDK error handling.

Lines 78-85: The error extraction chain (sdkError?.data?.message ?? sdkError?.message ?? ...) is defensive but verbose. The SDK's error shape should ideally be typed.

Line 77: The cast as any on the query parameter suggests a type mismatch with the SDK. This could hide legitimate type errors.

Consider creating a typed error extraction helper if this pattern is used elsewhere in the codebase.


793-803: Task text assembly injects source context and job instructions.

The <message-source> and <orchestrator-job> tags provide structured metadata for the worker. The from field defaults to "orchestrator" which is appropriate.

One consideration: Line 802 appends the job instructions after the message, which could be missed if the message is very long. Consider prepending critical instructions.

src/ux/repo-context.ts (1)

42-77: Git info gathering uses synchronous execSync which blocks the event loop.

For a utility called during worker spawning, blocking calls could impact responsiveness. However, since this is called with .catch(() => undefined) in the spawner, failures are handled gracefully.

Consider adding timeouts to prevent hanging on slow git operations:

🔎 Add timeout to execSync calls
     const branch = execSync("git rev-parse --abbrev-ref HEAD", {
       cwd: directory,
       encoding: "utf8",
       stdio: ["pipe", "pipe", "pipe"],
+      timeout: 5000,
     }).trim();
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d73b328 and cd66710.

📒 Files selected for processing (21)
  • README.md
  • docs/configuration.md
  • docs/examples.md
  • docs/quickstart.md
  • docs/troubleshooting.md
  • schema/orchestrator.schema.json
  • src/config/orchestrator.ts
  • src/config/profiles.ts
  • src/core/bridge-server.ts
  • src/core/device-registry.ts
  • src/core/message-bus.ts
  • src/core/process-metrics.ts
  • src/core/runtime.ts
  • src/index.ts
  • src/tools/index.ts
  • src/tools/tools-ux.ts
  • src/tools/tools-workers.ts
  • src/types/index.ts
  • src/ux/repo-context.ts
  • src/worker-bridge-plugin.mjs
  • src/workers/spawner.ts
🧰 Additional context used
🧬 Code graph analysis (7)
src/tools/tools-workers.ts (1)
src/workers/spawner.ts (1)
  • sendToWorker (762-882)
src/index.ts (1)
src/core/message-bus.ts (1)
  • messageBus (130-130)
src/tools/tools-ux.ts (4)
src/tools/index.ts (2)
  • orchestratorWakeups (57-57)
  • clearWakeups (46-46)
src/core/message-bus.ts (1)
  • messageBus (130-130)
src/tools/state.ts (1)
  • getDefaultListFormat (45-47)
src/tools/markdown.ts (1)
  • renderMarkdownTable (5-13)
src/tools/index.ts (1)
src/tools/tools-ux.ts (2)
  • orchestratorWakeups (488-545)
  • clearWakeups (547-558)
src/core/message-bus.ts (1)
src/types/index.ts (1)
  • WakeupPayload (264-277)
src/core/bridge-server.ts (3)
src/worker-bridge-plugin.mjs (6)
  • url (4-4)
  • res (13-17)
  • res (28-31)
  • res (84-84)
  • res (108-114)
  • workerId (6-6)
src/core/registry.ts (1)
  • registry (214-214)
src/core/message-bus.ts (1)
  • messageBus (130-130)
src/workers/spawner.ts (3)
src/helpers/format.ts (1)
  • getUserConfigDir (61-61)
src/core/device-registry.ts (1)
  • removeWorkerEntriesByPid (99-104)
src/ux/repo-context.ts (1)
  • getRepoContextForWorker (256-266)
🪛 LanguageTool
docs/configuration.md

[uncategorized] ~41-~41: If this is a compound adjective that modifies the following noun, use a hyphen.
Context: ..."workers": ["docs", "coder"] } ### Full Featured Config json { "$schema": "../node...

(EN_COMPOUND_ADJECTIVE_INTERNAL)

docs/troubleshooting.md

[style] ~398-~398: Consider using a more formal alternative.
Context: ... ``` --- ## Debug Mode When you need more information to diagnose an issue: ### Enable Debug...

(MORE_INFO)

🪛 markdownlint-cli2 (0.18.1)
docs/quickstart.md

77-77: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


83-83: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


101-101: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


107-107: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


120-120: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


141-141: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


168-168: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


195-195: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


226-226: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


235-235: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

docs/examples.md

320-320: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


338-338: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

docs/troubleshooting.md

68-68: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


73-73: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


78-78: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


138-138: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


161-161: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


166-166: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


172-172: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


181-181: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


216-216: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


234-234: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


239-239: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


244-244: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


262-262: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


267-267: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


279-279: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


351-351: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


356-356: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


390-390: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


402-402: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)


408-408: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)


420-420: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


432-432: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


454-454: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


459-459: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🔇 Additional comments (25)
README.md (1)

48-89: Excellent onboarding improvements!

The new "Why Multiple AI Workers?" and "Prerequisites" sections significantly improve the first-time user experience. The quick verification commands and provider configuration example are particularly helpful.

src/config/orchestrator.ts (1)

97-97: LGTM: Follows established pattern.

The wakeupInjection config parsing follows the same pattern as other boolean UI options (toasts, debug, logToConsole, etc.) and is type-safe.

src/config/profiles.ts (1)

39-39: LGTM: Repository context for docs worker.

Adding injectRepoContext: true for the docs worker makes sense, as having repository context helps the documentation librarian provide more relevant research and examples.

src/core/device-registry.ts (1)

11-12: LGTM: Well-documented field addition.

The hostPid field is clearly documented and supports the orphan cleanup workflow. The optional nature makes this a backward-compatible change.

schema/orchestrator.schema.json (1)

22-26: LGTM: Clear schema addition.

The wakeupInjection property is well-documented with a clear description of its purpose. The default value of true enables the feature by default, which seems appropriate for the async worker wakeup functionality.

src/worker-bridge-plugin.mjs (1)

89-118: LGTM: Well-designed wakeup tool.

The wakeup_orchestrator tool is well-implemented with:

  • Clear documentation of when to use it (result_ready, needs_attention, error, progress)
  • Strongly-typed reason enum preventing arbitrary values
  • Optional fields for flexibility (jobId, summary, data)
  • Consistent error handling and response format

The implementation follows the established pattern used by messageTool and inboxTool.
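The walkthrough also credits this plugin with a timeout-enabled fetch. A hedged sketch of that technique, using AbortController, might look like the following; the fetch function is injected so the timeout logic is testable without a network, and all names are illustrative rather than taken from worker-bridge-plugin.mjs.

```typescript
// Abort an in-flight request after `ms` milliseconds using AbortController.
type FetchLike = (url: string, init: { signal: AbortSignal }) => Promise<unknown>;

async function fetchWithTimeout(fetchFn: FetchLike, url: string, ms: number): Promise<unknown> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  try {
    return await fetchFn(url, { signal: controller.signal });
  } finally {
    clearTimeout(timer); // cancel the abort timer once the request settles
  }
}
```

Injecting `fetchFn` also keeps the sketch honest about what it shows: the timeout plumbing, not any particular HTTP client.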

src/core/bridge-server.ts (2)

103-134: LGTM: Well-structured wakeup endpoint.

The /v1/wakeup endpoint implementation is solid:

  • Validates required fields (workerId, reason)
  • Updates worker lastActivity timestamp
  • Emits event through messageBus for proper decoupling
  • Returns consistent response format matching other endpoints
  • Follows established patterns from /v1/message and /v1/report

136-148: LGTM: Wakeup history endpoint mirrors inbox pattern.

The /v1/wakeup/history endpoint correctly:

  • Accepts optional filtering parameters (workerId, after, limit)
  • Uses Number.isFinite checks for safe numeric parsing
  • Follows the same pattern as the /v1/inbox endpoint
  • Returns consistent JSON structure
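The Number.isFinite pattern praised here can be sketched as a small helper: fall back to a default unless the raw query string parses to a finite number. This is an illustration of the technique, not the endpoint's actual code; note the empty-string guard, since Number("") is 0.

```typescript
// Safe numeric query-param parsing: reject null, blank, and non-finite input.
function parseNumericParam(raw: string | null, fallback: number): number {
  if (raw === null || raw.trim() === "") return fallback;
  const value = Number(raw);
  return Number.isFinite(value) ? value : fallback;
}
```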
src/tools/tools-workers.ts (1)

109-109: LGTM: Worker-to-worker communication support added.

The optional from parameter has been properly added to both askWorker and askWorkerAsync tools and correctly propagated to sendToWorker. This enables source identification for worker-to-worker communication flows, which aligns with the wakeup orchestration feature introduced in this release.

Also applies to: 121-125, 153-153, 163-163

src/tools/tools-ux.ts (1)

488-558: LGTM: Well-designed wakeup management tools.

The new orchestratorWakeups and clearWakeups tools provide a clean interface for viewing and managing worker wakeup events. The implementation includes proper filtering, formatting options, emoji indicators for different wakeup reasons, and sensible defaults. The integration with messageBus is consistent with the related changes across the codebase.

src/types/index.ts (1)

34-35: LGTM: Well-defined type extensions for wakeup feature.

The new type definitions properly support the wakeup orchestration feature:

  • WakeupPayload interface provides a clear contract for wakeup events with appropriate required/optional fields
  • injectRepoContext flag enables selective repository context injection for workers
  • wakeupInjection UI configuration with sensible default (true) and helpful documentation
  • worker:wakeup event properly typed in the events union

The types are well-documented and consistent with the existing codebase patterns.

Also applies to: 165-171, 263-277, 286-286

src/tools/index.ts (1)

46-46: LGTM: Wakeup tools properly exported and integrated.

The new wakeup tools (orchestratorWakeups, clearWakeups) are correctly exported, imported, and integrated into both the core and plugin tool sets. The naming conventions and organization are consistent with existing patterns.

Also applies to: 57-57, 89-89, 100-100, 130-130, 168-170

src/index.ts (1)

93-96: LGTM: Wakeup subscription with proper cleanup.

The wakeup subscription is properly initialized with a cooldown mechanism to prevent spam, and the cleanup is correctly registered in the server disposal handler. The cooldown and conditional injection logic provide good UX by preventing notification fatigue while ensuring important wakeups reach the orchestrator.

Also applies to: 357-357
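The cooldown mechanism mentioned above can be sketched as a simple gate that allows at most one injection per window. The window length and names are illustrative; the real values live in src/index.ts. The clock is injected so the gate is testable.

```typescript
// Rate-limit gate: returns true at most once per `windowMs` of (injected) time.
function makeCooldownGate(windowMs: number, now: () => number = Date.now) {
  let lastFired = -Infinity;
  return function tryFire(): boolean {
    const t = now();
    if (t - lastFired < windowMs) return false; // still cooling down: skip injection
    lastFired = t;
    return true;
  };
}
```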

src/core/runtime.ts (2)

71-107: Signal handling uses prependListener appropriately to ensure cleanup runs first.

However, there's a potential issue with the async signal handler:

  • Lines 76-79: The handleSignal function spawns an async IIFE but doesn't prevent the process from continuing. If another signal arrives before runShutdown completes, shutdownRequested will be true and process.exit(code) will be called immediately at line 73, potentially before the first shutdown completes.

  • Lines 84-102: The synchronous exit handler attempts both SIGTERM and SIGKILL in sequence, but SIGKILL immediately after SIGTERM may not give processes time to handle SIGTERM gracefully.

Consider whether the double signal (SIGTERM then immediate SIGKILL) in the exit handler is intentional—typically you'd want a delay between them.


113-160: Orphaned worker cleanup logic is comprehensive.

The dual-path approach (hostPid-based for new entries, staleness-based for legacy entries) handles migration well.

One edge case: Line 141 uses Math.max(...group.map(...)) which returns -Infinity if all entries have undefined updatedAt. Since -Infinity would make now - newestUpdate a large positive number, those entries would be cleaned up, which is likely the intended behavior.
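That edge case is easy to demonstrate: Math.max over a spread with no finite values returns -Infinity, which makes the staleness delta infinite, so the group is treated as stale.

```typescript
// When every entry lacks updatedAt, the max is -Infinity, so
// now - newestUpdate(...) is Infinity and the group reads as stale.
interface Entry { updatedAt?: number }

function newestUpdate(group: Entry[]): number {
  return Math.max(...group.map((e) => e.updatedAt ?? -Infinity));
}

function isStale(group: Entry[], now: number, maxAgeMs: number): boolean {
  return now - newestUpdate(group) > maxAgeMs;
}
```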

src/core/message-bus.ts (1)

56-85: Wakeup event emission with fire-and-forget async listeners is well-designed.

The implementation correctly:

  • Generates unique IDs with randomUUID()
  • Bounds history with MAX_WAKEUP_HISTORY
  • Handles both sync and async listeners
  • Swallows errors to prevent one listener from affecting others
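The dispatch behavior described above can be sketched in miniature: unique ids, a bounded history, and listener errors swallowed so one bad listener cannot affect the others. This is an illustrative reduction, not the MessageBus implementation; the cap of 100 is an assumption standing in for MAX_WAKEUP_HISTORY.

```typescript
import { randomUUID } from "node:crypto";

const MAX_HISTORY = 100; // illustrative cap, stand-in for MAX_WAKEUP_HISTORY

type Listener<E> = (event: E) => void | Promise<void>;

class TinyBus<T extends object> {
  private history: (T & { id: string })[] = [];
  private listeners: Listener<T & { id: string }>[] = [];

  on(listener: Listener<T & { id: string }>): () => void {
    this.listeners.push(listener);
    return () => { this.listeners = this.listeners.filter((l) => l !== listener); };
  }

  emit(payload: T): void {
    const event = { ...payload, id: randomUUID() };
    this.history.push(event);
    if (this.history.length > MAX_HISTORY) this.history.shift(); // bound history
    for (const listener of this.listeners) {
      try {
        const result: unknown = listener(event);
        if (result instanceof Promise) result.catch(() => {}); // fire-and-forget
      } catch {
        // swallow sync listener errors so the other listeners still run
      }
    }
  }

  getHistory(): readonly (T & { id: string })[] { return this.history; }
}
```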
src/workers/spawner.ts (5)

43-69: Plugin resolution prioritizes user config directory appropriately.

The use of pathToFileURL for both user config and bundled paths ensures consistent handling. The fallback chain (user config → dist location → src location) is reasonable.


146-151: Tool check on reuse path correctly disposes workers with missing tools.

The cleanup sequence (dispose → remove registry entries → return undefined) ensures stale workers don't persist. Good defensive coding.


227-235: Detached process spawning on non-Windows enables process group management.

Setting detached: true allows the orchestrator to kill the entire process tree using negative PID. This is correctly paired with the enhanced close() function.


289-323: Enhanced process termination handles detached workers well.

The close function:

  1. Tries process-group SIGTERM first (lines 293-294)
  2. Falls back to direct kill on error (lines 299-300)
  3. Escalates to SIGKILL after 2s timeout (lines 306-312)

This is a solid graceful shutdown pattern.


562-565: Capabilities JSON is well-formed but consider escaping.

The JSON.stringify output is embedded directly in the prompt text. While boolean values are safe, if resolvedProfile gains string properties in the future, special characters could cause issues.

src/ux/repo-context.ts (4)

79-126: Directory structure enumeration is well-designed with priority sorting.

The priority list covers common project layouts. The exclusions (node_modules, dist, build, hidden files except .github) are appropriate.

One edge case: line 109 checks result.length >= maxItems but does not break at the check itself; the break follows on the next line, so the truncation behavior is still correct.


143-154: Package.json parsing handles errors gracefully.

The try-catch around JSON parsing prevents malformed package.json from breaking the context gathering. Silently ignoring errors is appropriate here since this is supplementary context.


207-219: Package.json summary truncates lists to 10 items.

The slicing with ellipsis suffix (scripts.length > 10 ? "..." : "") provides clear indication of truncation. This keeps the context manageable.
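The truncation pattern being praised here looks roughly like this (a hypothetical helper, not the actual repo-context code):

```typescript
// Truncate a list to `max` items, appending "..." only when items were dropped.
function summarizeList(items: string[], max = 10): string {
  const shown = items.slice(0, max);
  const suffix = items.length > max ? ", ..." : "";
  return shown.join(", ") + suffix;
}
```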


256-266: getRepoContextForWorker provides sensible defaults for character limits.

The 6000/12000 character limits are reasonable for prompt injection without overwhelming token budgets.

```json
{
"profiles": [
{
```

⚠️ Potential issue | 🟡 Minor

Add language identifiers to fenced code blocks.

Two fenced code blocks (lines 320, 338) are missing language identifiers. Add javascript to the code block at line 320 and line 338 for proper syntax highlighting.

🔎 Proposed fix
-```
+```javascript
 spawn_worker({ profileId: "ethers-expert" })


Also applies to: 338-338

🧰 Tools
🪛 markdownlint-cli2 (0.18.1)

320-320: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🤖 Prompt for AI Agents

In docs/examples.md around lines 320 and 338, two fenced code blocks are missing language identifiers; update each opening triple-backtick to include "javascript" (i.e. change ``` to ```javascript) so both code blocks (at line 320 and line 338) get proper JavaScript syntax highlighting.


**Solution:**

1. Check what models are available:
```
list_models
```

⚠️ Potential issue | 🟡 Minor

Address markdown linting issues.

Multiple markdown linting issues need to be addressed:

  1. Missing language identifiers: Many fenced code blocks lack language identifiers (lines 68, 73, 78, 138, 161, 166, 172, 181, 216, 234, 239, 244, 262, 267, 279, 351, 356, 390, 420, 432, 454, 459). Add appropriate language identifiers (javascript, json, bash, text, etc.) based on content.

  2. Emphasis instead of heading: Lines 402 and 408 use emphasis (bold) instead of proper markdown headings. Convert to heading syntax (e.g., #### Option 1: Environment Variable).

🔎 Example fixes

For tool call examples:

-```
+```javascript
 list_models

For emphasis-as-heading:
```diff
-**Option 1: Environment Variable**
+#### Option 1: Environment Variable
```

Also applies to: 73-73, 78-78, 138-138, 161-161, 166-166, 172-172, 181-181, 216-216, 234-234, 239-239, 244-244, 262-262, 267-267, 279-279, 351-351, 356-356, 390-390, 402-402, 408-408, 420-420, 432-432, 454-454, 459-459

🧰 Tools
🪛 markdownlint-cli2 (0.18.1)

68-68: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🤖 Prompt for AI Agents
In docs/troubleshooting.md around lines 68 (and also apply to lines 73, 78, 138,
161, 166, 172, 181, 216, 234, 239, 244, 262, 267, 279, 351, 356, 390, 402, 408,
420, 432, 454, 459), several fenced code blocks are missing language identifiers
and two headings are written as emphasized text; update each fenced block to
include the appropriate language tag (e.g., javascript, json, bash, text) that
matches the block content and convert the emphasized lines at 402 and 408 into
proper Markdown headings (e.g., #### Option 1: Environment Variable) so linting
passes. Ensure consistency across all listed line numbers and run the markdown
linter to confirm fixes.

Comment on lines +54 to +55
// Match "opencode serve" but not tsserver or other node processes that happen to have similar strings
if (!args.includes("opencode serve --hostname")) continue;

⚠️ Potential issue | 🟠 Major

Filter may be too restrictive and miss legitimate processes.

The exact substring match for "opencode serve --hostname" has several potential issues:

  1. Documentation mismatch: The comment says it matches "opencode serve" but the code actually requires the --hostname flag to be present immediately after serve.

  2. Argument order sensitivity: This will miss processes where arguments appear in different orders, such as:

    • opencode serve --port 8080 --hostname localhost
    • opencode serve --other-flag --hostname localhost
  3. Missing --hostname flag: Legitimate opencode serve processes without the --hostname flag won't be detected.

Consider using a more robust pattern that checks for the presence of both "opencode" and "serve" as separate tokens while still excluding false positives like tsserver.

🔎 Proposed fix with more flexible matching
-    // Match "opencode serve" but not tsserver or other node processes that happen to have similar strings
-    if (!args.includes("opencode serve --hostname")) continue;
+    // Match "opencode serve" but not tsserver or other node processes
+    // Check for "opencode" followed by "serve" as separate words
+    if (!/(^|\s)opencode\s+serve(\s|$)/.test(args)) continue;
+    // Exclude false positives like "tsserver"
+    if (/tsserver/.test(args)) continue;
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// Match "opencode serve" but not tsserver or other node processes that happen to have similar strings
if (!args.includes("opencode serve --hostname")) continue;
// Match "opencode serve" but not tsserver or other node processes
// Check for "opencode" followed by "serve" as separate words
if (!/(^|\s)opencode\s+serve(\s|$)/.test(args)) continue;
// Exclude false positives like "tsserver"
if (/tsserver/.test(args)) continue;
🤖 Prompt for AI Agents
In src/core/process-metrics.ts around lines 54-55, the current exact substring
check for "opencode serve --hostname" is too strict and order-dependent; change
it to parse the process args/tokens and detect "opencode" and "serve" as
separate tokens (allowing flags in any order and optional --hostname) while
still excluding known false positives like tsserver: i.e., split/join the
command into tokens and require tokens.includes("opencode") &&
tokens.includes("serve"), and also add a negative check to skip processes whose
name or tokens indicate "tsserver" (or other explicit false-positive names).

Comment on lines +243 to +246
async function removeSessionEntryByHostPid(_hostPid: number): Promise<void> {
const { pruneDeadEntries } = await import("./device-registry");
await pruneDeadEntries();
}

⚠️ Potential issue | 🟡 Minor

Unused parameter _hostPid in removeSessionEntryByHostPid.

The function accepts _hostPid but ignores it, delegating entirely to pruneDeadEntries(). This removes all dead entries, not just those for the specific hostPid.

🔎 Either use the parameter or rename the function

If the intent is to prune only sessions for a specific host:

 async function removeSessionEntryByHostPid(hostPid: number): Promise<void> {
-  const { pruneDeadEntries } = await import("./device-registry");
-  await pruneDeadEntries();
+  const { listDeviceRegistry, removeSessionEntry } = await import("./device-registry");
+  const entries = await listDeviceRegistry();
+  for (const e of entries) {
+    if (e.kind === "session" && e.hostPid === hostPid) {
+      await removeSessionEntry(e.id).catch(() => {});
+    }
+  }
 }

Or if pruneDeadEntries is the correct behavior, rename to clarify:

-async function removeSessionEntryByHostPid(_hostPid: number): Promise<void> {
+async function pruneDeadSessionEntries(): Promise<void> {
📝 Committable suggestion


Suggested change
async function removeSessionEntryByHostPid(_hostPid: number): Promise<void> {
const { pruneDeadEntries } = await import("./device-registry");
await pruneDeadEntries();
}
async function pruneDeadSessionEntries(): Promise<void> {
const { pruneDeadEntries } = await import("./device-registry");
await pruneDeadEntries();
}
🤖 Prompt for AI Agents
In src/core/runtime.ts around lines 243 to 246, removeSessionEntryByHostPid
currently accepts a _hostPid but ignores it and calls pruneDeadEntries(), which
prunes all dead entries; either (preferred) wire the hostPid through to a
device-registry API that prunes entries for that specific host (e.g., call
pruneDeadEntriesForHost(hostPid) or extend pruneDeadEntries to accept a hostPid)
and remove the leading underscore from the parameter, or (alternate) if the
function truly should prune all dead entries, rename it to reflect that behavior
(e.g., pruneDeadSessionEntries) and drop the unused parameter. Ensure the chosen
change updates imports/exports in device-registry and any callers to keep types
and names consistent.

Comment on lines 93 to 132
// Track the current orchestrator session for wakeup injection
let currentOrchestratorSessionId: string | undefined;
let lastWakeupInjection = 0;
const WAKEUP_INJECTION_COOLDOWN_MS = 2000; // Prevent rapid-fire injections

// Listen for wakeup events from workers and optionally inject prompt to resume orchestrator
const unsubscribeWakeup = messageBus.onWakeup(async (event) => {
const reasonLabels: Record<string, { message: string; variant: "success" | "info" | "warning" | "error" }> = {
result_ready: { message: `Worker "${event.workerId}" completed: ${event.summary ?? "results ready"}`, variant: "success" },
needs_attention: { message: `Worker "${event.workerId}" needs attention: ${event.summary ?? "check orchestrator_wakeups"}`, variant: "warning" },
error: { message: `Worker "${event.workerId}" error: ${event.summary ?? "check orchestrator_wakeups"}`, variant: "error" },
progress: { message: `Worker "${event.workerId}": ${event.summary ?? "progress update"}`, variant: "info" },
custom: { message: `Worker "${event.workerId}": ${event.summary ?? "notification"}`, variant: "info" },
};
const info = reasonLabels[event.reason] ?? { message: `Worker "${event.workerId}" wakeup`, variant: "info" as const };
void showToast(info.message, info.variant);

// Inject a prompt into the orchestrator session to wake it up
// Only do this for actionable wakeups (result_ready, needs_attention, error) and if we have a session
const shouldInject =
config.ui?.wakeupInjection !== false &&
currentOrchestratorSessionId &&
(event.reason === "result_ready" || event.reason === "needs_attention" || event.reason === "error") &&
Date.now() - lastWakeupInjection > WAKEUP_INJECTION_COOLDOWN_MS;

if (shouldInject) {
lastWakeupInjection = Date.now();
const wakeupMessage = `[WORKER WAKEUP] Worker "${event.workerId}" signaled: ${event.reason}${event.summary ? ` - ${event.summary}` : ""}${event.jobId ? ` (jobId: ${event.jobId})` : ""}. Check orchestrator_wakeups() or await_worker_job() for details.`;

try {
await (ctx.client.session as any).promptAsync({
sessionID: currentOrchestratorSessionId,
directory: ctx.directory,
parts: [{ type: "text", text: wakeupMessage }],
});
} catch {
// Silently ignore injection failures (session may have ended, etc.)
}
}
});

⚠️ Potential issue | 🔴 Critical

Use the correct SDK method for session prompting.

The code calls (ctx.client.session as any).promptAsync() with a type assertion, which masks a method naming issue. The OpenCode SDK session client provides a prompt() method, not promptAsync(). Replace the type assertion with the correct method call, or if promptAsync() is intentional, verify it exists in your SDK version and remove the as any cast to maintain type safety.

🤖 Prompt for AI Agents
In src/index.ts around lines 93 to 132, the wakeup injection uses a type
assertion and calls (ctx.client.session as any).promptAsync(...), which hides a
method-name/type mismatch; replace that call with the SDK's correct session
method (call ctx.client.session.prompt(...) without the `as any` cast) and
ensure you await the returned Promise if prompt is async, or adapt to the actual
SDK signature if it returns synchronously; if promptAsync was intentional,
remove the `as any` cast and import/upgrade the SDK so the method exists with
proper typing.

- Add vision-router.ts for automatic image detection and routing to vision worker
- Support clipboard, file URLs, data URLs, and base64 image formats
- Add global lock to prevent concurrent vision processing
- Add detailed toast notifications for debugging vision pipeline
- Allow explicit model references from api-source providers in hydration
- Pass orchestrator profiles to vision router for correct model selection
- Add comprehensive test suite for vision routing (28 tests)
- Fix biome config schema (use object format instead of boolean)
- Add language identifier to workflow diagram code block
- Fix CODEOWNERS to target file paths, not branches
- Add language identifiers to all fenced code blocks in docs
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
src/index.ts (1)

124-128: SDK method usage concern already flagged.

The (ctx.client.session as any).promptAsync() pattern with type assertion was identified in a previous review. The same pattern appears again at line 382-386.

🧹 Nitpick comments (7)
src/models/hydrate.ts (1)

40-48: Unused variable: explicitlyReferencedProviders is computed but never consumed.

The Set is built by iterating over all profiles and collecting provider IDs, but it's not used anywhere in the function. The explicit model resolution on line 103 already uses providersAll directly.

Either remove this dead code or complete the intended feature.

🔧 If this is dead code, remove it
-  // Collect provider IDs explicitly referenced in profile models (user intent = use them)
-  const explicitlyReferencedProviders = new Set<string>();
-  for (const profile of Object.values(input.profiles)) {
-    const model = profile.model.trim();
-    if (model.includes("/") && !model.startsWith("auto") && !model.startsWith("node")) {
-      const providerID = model.split("/")[0];
-      explicitlyReferencedProviders.add(providerID);
-    }
-  }
-
src/index.ts (1)

368-393: Fire-and-forget pattern with silent error handling.

The vision analysis runs as a detached promise. While errors trigger toasts (line 391), the catch block at line 387 silently swallows injection failures without any logging. Consider adding debug-level logging for troubleshooting:

             } catch {
-              // Ignore injection errors
+              // Injection can fail if session ended - log for debugging
+              // Could add: logger.debug("Vision injection failed", { sessionID: input.sessionID });
             }
test/vision-routing.test.ts (1)

131-137: Missing cleanup for temporary directory.

The tempDir is created in beforeAll but never cleaned up in afterAll. Consider adding cleanup to avoid accumulating temp files:

     beforeAll(async () => {
       tempDir = await mkdtemp(join(tmpdir(), "vision-test-"));
     });
+
+    afterAll(async () => {
+      const { rm } = await import("node:fs/promises");
+      await rm(tempDir, { recursive: true, force: true }).catch(() => {});
+    });
src/ux/vision-router.ts (4)

17-20: The dual-lock mechanism works due to JavaScript's single-threaded event model, but the design could be clearer.

The visionLockSync flag works because JavaScript is single-threaded—between any two await points, no other code can interleave. The current approach rejects concurrent requests rather than queuing them. This is acceptable behavior, but consider documenting this rejection-based behavior more explicitly, or using a simple mutex/semaphore pattern if queuing becomes desirable later.
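The rejection-based guard being described can be sketched as follows (hypothetical names; the real vision-router uses a second promise-based lock alongside the flag). The key property is that there is no `await` between the check and the set, so the pair is effectively atomic in JavaScript's single-threaded model:

```typescript
let busy = false; // stands in for visionLockSync

async function withVisionLock<T>(work: () => Promise<T>): Promise<T | null> {
  if (busy) return null; // reject concurrent requests instead of queuing them
  busy = true;           // no await between check and set: cannot interleave
  try {
    return await work();
  } finally {
    busy = false;
  }
}
```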


80-87: Data URL regex only handles base64-encoded images.

The regex /^data:(image\/[^;]+);base64,(.*)$/ requires the ;base64, portion, so non-base64 data URLs (e.g., URL-encoded data) would be silently skipped. This is likely fine for typical image use cases, but worth noting if broader data URL support is expected later.
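Using the quoted regex, a small parse helper makes the behavior concrete (the helper name is illustrative, not part of the module):

```typescript
// Only base64-encoded image data URLs match; URL-encoded payloads are skipped.
const DATA_URL_RE = /^data:(image\/[^;]+);base64,(.*)$/;

function parseImageDataUrl(url: string): { mime: string; data: Buffer } | null {
  const m = DATA_URL_RE.exec(url);
  if (!m) return null;
  return { mime: m[1] ?? "", data: Buffer.from(m[2] ?? "", "base64") };
}
```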


250-275: Lock management in finally block works correctly but could be clearer.

The lock release sequence (nullify visionLock, reset visionLockSync, then call releaseLock()) is functionally correct. Any promise consumers awaiting the old visionLock value will still resolve properly since releaseLock() is called. For clarity, the steps could be reordered: call releaseLock() before nullifying visionLock.

🔎 Suggested reordering for clarity
   } finally {
     // Release lock
+    releaseLock!();
     visionLock = null;
     visionLockSync = false;
-    releaseLock!();
   }

169-182: Consider documenting the options parameter types more explicitly.

The client and profiles parameters use any types. While this provides flexibility, documenting the expected shapes or using more specific types would improve maintainability and catch misuse at compile time.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cd66710 and 07ce9ed.

📒 Files selected for processing (8)
  • docs/architecture.md
  • docs/guide.md
  • src/config/profiles.ts
  • src/index.ts
  • src/models/hydrate.ts
  • src/types/index.ts
  • src/ux/vision-router.ts
  • test/vision-routing.test.ts
🚧 Files skipped from review as they are similar to previous changes (2)
  • src/config/profiles.ts
  • src/types/index.ts
🧰 Additional context used
🧬 Code graph analysis (4)
src/ux/vision-router.ts (3)
src/core/registry.ts (1)
  • registry (214-214)
src/config/profiles.ts (1)
  • builtInProfiles (10-136)
src/workers/spawner.ts (1)
  • sendToWorker (762-882)
test/vision-routing.test.ts (6)
src/ux/vision-router.ts (2)
  • hasImages (25-35)
  • analyzeImages (169-276)
src/workers/prompt.ts (1)
  • buildPromptParts (25-55)
src/models/hydrate.ts (1)
  • hydrateProfileModelsFromOpencode (20-141)
src/core/registry.ts (1)
  • registry (214-214)
src/workers/spawner.ts (2)
  • stopWorker (729-757)
  • sendToWorker (762-882)
src/types/index.ts (1)
  • WorkerProfile (7-36)
src/models/hydrate.ts (1)
src/models/catalog.ts (1)
  • flattenProviders (29-55)
src/index.ts (2)
src/core/message-bus.ts (1)
  • messageBus (130-130)
src/ux/vision-router.ts (2)
  • hasImages (25-35)
  • analyzeImages (169-276)
🔇 Additional comments (18)
src/models/hydrate.ts (1)

35-37: LGTM!

The comments clearly document the design intent: auto-selection tags (node:, auto:) use configured providers only, while explicit model references trust the user's choice and resolve against all providers. The implementation correctly reflects this.

Also applies to: 101-110

docs/guide.md (1)

87-87: LGTM!

The Vision Analyst model example is updated to match the new default vision model in the profiles configuration.

docs/architecture.md (1)

454-454: LGTM!

Architecture documentation updated to reflect the new default vision model, consistent with the profiles and guide documentation.

src/index.ts (3)

100-109: LGTM!

The wakeup event handler cleanly maps reason types to appropriate toast messages with sensible fallback handling.


334-366: LGTM!

The image filtering logic correctly removes image parts for non-vision agents and appends a helpful hint. This prevents providers from erroring on unexpected image inputs while keeping users informed.


429-429: LGTM!

Proper cleanup of the wakeup listener subscription when the server is disposed.

test/vision-routing.test.ts (7)

25-28: LGTM!

Minimal 1x1 PNG test fixture is a good choice for testing image handling without bloating test files.


30-129: LGTM!

Comprehensive unit tests for hasImages covering all detection paths including positive cases, negative cases, and edge cases with malformed input.


240-381: LGTM!

Model resolution tests cover important edge cases: explicit API provider models, vision capability validation, and auto-selection provider filtering. The mock client setup is clean.


415-439: Concurrency test assertion is permissive but pragmatic.

The test correctly handles timing uncertainty. The permissive assertion at lines 436-438 acknowledges that both calls might complete before the lock check if timing aligns. This is acceptable for testing the lock mechanism exists without flaky failures.


443-468: LGTM!

E2E test setup with proper environment restoration and worker cleanup in afterEach. The empty catch blocks in cleanup code are acceptable since we don't want cleanup failures to fail tests.


474-521: LGTM!

E2E test properly manages resources with try/finally for server cleanup and uses an appropriate 120-second timeout for vision model operations.


564-609: LGTM!

Config integration tests verify that custom profiles are passed through to analyzeImages and that built-in profiles serve as fallback. The error expectations correctly validate the intended behavior paths.

src/ux/vision-router.ts (5)

1-15: Imports are well-organized and appropriate for the module's functionality.

The imports cover all necessary Node.js modules for file I/O, clipboard access, and internal dependencies for worker management.


25-35: LGTM!

The function correctly detects various image formats including explicit image types, file attachments with image MIME types, data URLs, and clipboard placeholders. The defensive null/type checks are appropriate.


161-164: Windows clipboard reading is not implemented.

The function returns undefined on Windows. If Windows support is needed, consider using PowerShell or a native module to read clipboard images.

Is Windows support for clipboard image reading a requirement for this feature? If so, this would need to be addressed before release.


221-243: Worker spawning logic handles edge cases well.

The fallback from user-configured profiles to built-in profiles, combined with proper error handling and toast notifications, provides good observability for debugging vision worker issues.


258-261: No issue identified. The visionWorker.profile.id parameter is the correct worker identifier for sendToWorker(). Workers are registered in the registry using their profile.id, and sendToWorker() retrieves workers via registry.getWorker(workerId) where the workerId matches this identifier. This pattern is used consistently throughout the codebase.

Comment on lines 114 to 122
const script = [
`set outPath to "${outPath.replace(/"/g, '\\"')}"`,
`set outFile to POSIX file outPath`,
`set f to open for access outFile with write permission`,
`set eof f to 0`,
`write (the clipboard as «class PNGf») to f`,
`close access f`,
`return outPath`,
].join("\n");

⚠️ Potential issue | 🟡 Minor

Potential AppleScript injection if tmpdir() contains special characters.

The string escaping only handles double quotes. If tmpdir() returns a path with other AppleScript-sensitive characters (e.g., backslashes on edge-case systems), the script could fail or behave unexpectedly. This is low-risk since tmpdir() typically returns safe paths, but consider using a more robust escaping approach or validating the path.

🔎 Suggested fix with additional escaping
     const script = [
-      `set outPath to "${outPath.replace(/"/g, '\\"')}"`,
+      `set outPath to "${outPath.replace(/\\/g, '\\\\').replace(/"/g, '\\"')}"`,
       `set outFile to POSIX file outPath`,
📝 Committable suggestion


Suggested change
const script = [
`set outPath to "${outPath.replace(/"/g, '\\"')}"`,
`set outFile to POSIX file outPath`,
`set f to open for access outFile with write permission`,
`set eof f to 0`,
`write (the clipboard as «class PNGf») to f`,
`close access f`,
`return outPath`,
].join("\n");
const script = [
`set outPath to "${outPath.replace(/\\/g, '\\\\').replace(/"/g, '\\"')}"`,
`set outFile to POSIX file outPath`,
`set f to open for access outFile with write permission`,
`set eof f to 0`,
`write (the clipboard as «class PNGf») to f`,
`close access f`,
`return outPath`,
].join("\n");
🤖 Prompt for AI Agents
In src/ux/vision-router.ts around lines 114-122, the AppleScript literal
currently only escapes double quotes which can still break when tmpdir()
contains other special characters; change the script to use AppleScript's quoted
form for the POSIX path and also escape backslashes/newlines before embedding:
build outPathEscaped by replacing backslashes with \\ and newlines with \\n (and
keep replacing " with \\"), then use `set outFile to POSIX file (quoted form of
"<outPathEscaped>")` instead of `set outPath to "..."` so the path is passed
safely into AppleScript.

0xSero and others added 4 commits December 22, 2025 12:15
- Add environment variables for ANTHROPIC, OPENAI, OPENROUTER API keys
- Run unit tests always
- Run E2E tests only when API keys are available
- E2E tests continue-on-error to not block PR merges
- No API keys required
- Uses opencode/gpt-5-nano which is free and always available
- Removed conditional E2E test execution
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (1)
src/index.ts (1)

155-163: SDK method usage flagged in previous review.

This code uses (ctx.client.session as any).promptAsync() with a type assertion. As noted in the past review, verify that promptAsync exists in your SDK version or use the correct method name (potentially prompt()).

🧹 Nitpick comments (14)
src/core/passthrough.ts (1)

52-64: Sync exit phrases between detection and prompt.

The prompt documents exit phrases "exit passthrough", "exit docs mode", or "back", but isPassthroughExitMessage accepts additional phrases like "stop docs", "exit docs", "stop passthrough", "disable passthrough", and "exit passthrough mode". Consider either:

  1. Updating the prompt to mention all valid exit phrases, or
  2. Removing uncommonly-used phrases from the Set to match the prompt

This prevents user confusion if they say "stop passthrough", which works but isn't documented in the system prompt.
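One way to keep the prompt and the detection logic in sync is to derive both from a single source of truth (a hypothetical sketch; names and phrase list are illustrative):

```typescript
// Single list drives both detection and the documented exit phrases.
const EXIT_PHRASES = ["exit passthrough", "exit docs mode", "back"] as const;

const exitPhraseSet = new Set<string>(EXIT_PHRASES);

function isPassthroughExitMessage(text: string): boolean {
  return exitPhraseSet.has(text.trim().toLowerCase());
}

// Used when building the system prompt, so the docs can never drift.
function exitPhraseDocs(): string {
  return EXIT_PHRASES.map((p) => `"${p}"`).join(", ");
}
```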

src/core/bridge-server.ts (1)

82-96: Consider gating wakeup emission on actual content presence.

The wakeup is emitted whenever body.jobId is truthy, regardless of whether body.report or body.final contain meaningful content. This could generate unnecessary wakeup events with a generic "report received" summary when a worker sends an empty or progress-only report.

🔎 Proposed refinement
-      if (body.jobId) {
+      if (body.jobId && (body.report || body.final)) {
         const job = workerJobs.get(body.jobId);
         messageBus.wakeup({
src/core/jobs.ts (1)

34-48: LGTM! The session and requester tracking fields are a good addition.

The optional sessionId and requestedBy fields enable proper correlation with the wakeup/injection flows introduced elsewhere in this PR. The backward-compatible approach is appropriate.

Minor style nit: the conditional spread pattern could be simplified, but the current approach is explicit and works correctly.

🔎 Optional simplification
     const job: WorkerJob = {
       id,
       workerId: input.workerId,
       message: input.message,
-      ...(input.sessionId ? { sessionId: input.sessionId } : {}),
-      ...(input.requestedBy ? { requestedBy: input.requestedBy } : {}),
+      sessionId: input.sessionId,
+      requestedBy: input.requestedBy,
       status: "running",
       startedAt: Date.now(),
     };

This includes the keys with undefined values, which is typically fine for internal objects and slightly more concise. However, if you prefer to omit keys entirely when not provided, the current approach is valid.

src/memory/inject.ts (1)

91-99: Consider documenting the filtering behavior.

The gather function filters out both message-like nodes (unless explicitly included) and auto-scaffold nodes (always). While the isAutoScaffold function's logic is clear from the code, a brief inline comment explaining why these are excluded could help future maintainers understand the rationale (e.g., "Exclude auto-generated metadata nodes to focus on user-relevant content").

src/memory/auto.ts (2)

125-169: Rolling summaries perform multiple DB round-trips per message.

For each message with summaries enabled:

  1. getMemoryByKey for project summary
  2. upsertMemory for project summary
  3. getMemoryByKey for session summary
  4. upsertMemory for session summary
  5. getMemoryByKey for global summary
  6. upsertMemory for global summary

This is 6 database operations per message in the project scope case. Consider batching or deferring summary updates if message volume is high.


171-213: Trimming on every message write could impact performance at scale.

The trimming operations run after every recordMessageMemory call. For high-volume scenarios, this could create performance bottlenecks. Consider:

  1. Batching trim operations (e.g., every N messages)
  2. Using a background job/worker for trimming
  3. Adding a skip option to bypass trimming for bulk operations

For the current use case, this is likely acceptable.
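Option 1 above (trimming every N messages) can be sketched like this, with `trimMemory` standing in for the real trimming routine (hypothetical helper, not existing code):

```typescript
// Wrap the per-message hook so the expensive trim only runs every `everyN` calls.
function makeBatchedTrimmer(trimMemory: () => void, everyN = 25): () => void {
  let counter = 0;
  return () => {
    counter++;
    if (counter >= everyN) {
      counter = 0;
      trimMemory();
    }
  };
}
```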

src/ux/vision-router.ts (1)

67-81: Redundant dynamic import could be simplified.

hasImages is already re-exported at line 23, but line 68 uses a dynamic import. This works but is inconsistent. Consider using the static import for both hasImages and extractImages if extractImages is also exported from the analyzer.

-  const { hasImages: checkHasImages, extractImages } = await import("../vision/analyzer");
-  if (!checkHasImages(parts)) {
+  const { extractImages } = await import("../vision/analyzer");
+  // hasImages is already imported via re-export
+  const { hasImages: staticHasImages } = await import("../vision/analyzer");
+  if (!staticHasImages(parts)) {

Alternatively, add extractImages to the static imports at the top.

src/index.ts (2)

135-139: Consider adding session existence validation before injection.

The wakeup injection checks the cooldown but doesn't verify the target session still exists before attempting to inject. If the session was deleted between resolving the ID and injecting, the silent catch on line 161 handles it, but logging the skip reason would aid debugging.


205-234: Vision message transform exits early without processing all messages.

The transform iterates messages in reverse but uses break statements that exit the entire loop after processing (or skipping) the first matching message. If the intent is to process only the most recent user message with images, this is correct. However, consider adding a comment clarifying this is intentional single-message processing.

src/vision/analyzer.ts (2)

159-218: Consider adding Windows clipboard support or documenting the limitation.

The readClipboardImage function supports macOS (AppleScript) and Linux (Wayland/X11) but silently returns null on Windows. Consider either adding Windows support via PowerShell or documenting this limitation.
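If Windows support is desired, one possible approach is shelling out to PowerShell's WinForms clipboard API. This is a hedged sketch, not the plugin's actual code: it assumes `powershell.exe` is on PATH, and the function name is hypothetical.

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";
import { readFile, unlink } from "node:fs/promises";
import { tmpdir } from "node:os";
import { join } from "node:path";

const execFileAsync = promisify(execFile);

// Hypothetical Windows branch for readClipboardImage: saves the clipboard
// image to a temp PNG via PowerShell, then reads it back.
async function readWindowsClipboardImage(): Promise<Buffer | null> {
  const outPath = join(tmpdir(), `clip-${Date.now()}.png`);
  const script = [
    "Add-Type -AssemblyName System.Windows.Forms",
    "Add-Type -AssemblyName System.Drawing",
    "$img = [System.Windows.Forms.Clipboard]::GetImage()",
    `if ($img -ne $null) { $img.Save('${outPath}', [System.Drawing.Imaging.ImageFormat]::Png) }`,
  ].join("; ");
  try {
    // -STA is required: Clipboard access needs a single-threaded apartment.
    await execFileAsync("powershell.exe", ["-NoProfile", "-STA", "-Command", script]);
    const buf = await readFile(outPath);
    await unlink(outPath).catch(() => {});
    return buf;
  } catch {
    return null; // no image on clipboard, or PowerShell unavailable
  }
}
```

If adding this is out of scope, a comment documenting the Windows gap in readClipboardImage would at least make the behavior explicit.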


331-338: Mutation of filtered array element - verify caller expectations.

The function modifies p.text of an element in withoutImages, which is a filtered copy of the input. Since filter creates a shallow copy, this mutates the original part object in parts. If callers expect parts to remain unchanged, this could cause issues.

Consider cloning the text part before modification:

🔎 Proposed fix
   for (let i = withoutImages.length - 1; i >= 0; i--) {
     const p = withoutImages[i];
     if (p?.type === "text" && typeof p.text === "string") {
-      p.text += `\n\n${analysisText}\n`;
+      withoutImages[i] = { ...p, text: p.text + `\n\n${analysisText}\n` };
       return withoutImages;
     }
   }
src/core/worker-pool.ts (3)

241-262: Performance concern: unbounded concurrent session detail fetches.

The code fetches details for up to 50 sessions at once via Promise.all over the mapped array. The requests run concurrently, but firing 50 simultaneous fetches can overwhelm the server or trip rate limits when many sessions exist.

Consider limiting the search or using a more efficient approach:

🔎 Suggested optimization
-        const fullSessions = await Promise.all(
-          sessions.slice(0, 50).map(async (s) => {
+        // Limit concurrent fetches to avoid overwhelming the server
+        const BATCH_SIZE = 10;
+        const fullSessions: Array<{ id: string; title?: string }> = [];
+        for (let i = 0; i < Math.min(sessions.length, 50); i += BATCH_SIZE) {
+          const batch = sessions.slice(i, i + BATCH_SIZE);
+          const results = await Promise.all(
+            batch.map(async (s) => {
             try {
               const detail = await client.session.get({ path: { id: s.id }, query: { directory: options.directory } });
               return detail.data as { id: string; title?: string } | undefined;
             } catch {
               return undefined;
             }
-          })
-        ).then((results) => results.filter((s): s is { id: string; title?: string } => !!s));
+            })
+          );
+          fullSessions.push(...results.filter((s): s is { id: string; title?: string } => !!s));
+          // Early exit if we found the worker session
+          if (fullSessions.some((s) => s.title === workerSessionTitle)) break;
+        }

471-479: Silently ignoring listener errors may hide bugs.

The emit function catches and ignores all listener errors. Consider at least logging these in debug mode to aid troubleshooting:

🔎 Proposed improvement
   private emit(event: WorkerPoolEvent, instance: WorkerInstance): void {
     this.listeners.get(event)?.forEach((cb) => {
       try {
         cb(instance);
-      } catch {
-        // Ignore listener errors
+      } catch (err) {
+        // Log in debug mode to aid troubleshooting
+        if (process.env.DEBUG) {
+          console.error(`[WorkerPool] Listener error for ${event}:`, err);
+        }
       }
     });
   }

402-441: Device registry read-modify-write lacks atomicity with concurrent orchestrators.

Multiple orchestrator instances can race when updating the same entry. Although writeJsonAtomic ensures atomic file replacement, the full cycle (read file → find/update entry → write file) is unprotected. If two orchestrators update concurrently, the second write overwrites the first, losing its update. The code acknowledges this as "best-effort," but file locking (as used in profile-lock.ts) could prevent data loss if stronger consistency is needed.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9459532 and 29b4d6f.

📒 Files selected for processing (39)
  • package.json
  • prompts/orchestrator.ts
  • schema/orchestrator.schema.json
  • src/config/orchestrator.ts
  • src/config/profiles.ts
  • src/core/bridge-server.ts
  • src/core/device-registry.ts
  • src/core/file-monitor.ts
  • src/core/jobs.ts
  • src/core/message-bus.ts
  • src/core/passthrough.ts
  • src/core/progress.ts
  • src/core/registry.ts
  • src/core/worker-pool.ts
  • src/index.ts
  • src/memory/auto.ts
  • src/memory/graph.ts
  • src/memory/inject.ts
  • src/memory/text.ts
  • src/tools/index.ts
  • src/tools/tools-ux.ts
  • src/tools/tools-workers.ts
  • src/types/index.ts
  • src/ux/vision-router.ts
  • src/vision/analyzer.ts
  • src/worker-bridge-plugin.mjs
  • src/workers/spawner.ts
  • test/fixtures/orchestrator-configs/high-concurrency.json
  • test/fixtures/profiles/slow-worker.json
  • test/fixtures/profiles/test-worker.json
  • test/helpers/fixtures.ts
  • test/memory-auto.test.ts
  • test/performance/memory-text.bench.ts
  • test/unit/core/bridge-server.test.ts
  • test/unit/core/message-bus.test.ts
  • test/unit/core/passthrough.test.ts
  • test/unit/memory/text.test.ts
  • test/unit/vision/analyzer.test.ts
  • test/vision-routing.test.ts
✅ Files skipped from review due to trivial changes (1)
  • src/core/file-monitor.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/core/device-registry.ts
🧰 Additional context used
🧬 Code graph analysis (19)
src/core/bridge-server.ts (3)
src/core/jobs.ts (1)
  • workerJobs (151-151)
src/core/message-bus.ts (1)
  • messageBus (196-196)
src/core/registry.ts (1)
  • registry (10-10)
test/unit/vision/analyzer.test.ts (2)
src/ux/vision-router.ts (4)
  • hasImages (23-23)
  • analyzeImages (55-148)
  • formatVisionAnalysis (24-24)
  • replaceImagesWithAnalysis (25-25)
src/vision/analyzer.ts (5)
  • hasImages (78-81)
  • extractImages (90-99)
  • analyzeImages (229-297)
  • formatVisionAnalysis (306-316)
  • replaceImagesWithAnalysis (321-351)
src/config/orchestrator.ts (1)
src/helpers/format.ts (1)
  • isPlainObject (61-61)
src/core/passthrough.ts (1)
src/worker-bridge-plugin.mjs (3)
  • workerId (6-6)
  • text (34-34)
  • text (57-57)
src/memory/inject.ts (3)
src/memory/text.ts (1)
  • shortenWithMarker (29-37)
src/memory/graph.ts (4)
  • MemoryNode (7-15)
  • MemoryScope (5-5)
  • getMemoryByKey (114-143)
  • recentMemory (181-210)
src/memory/neo4j.ts (2)
  • Neo4jConfig (3-8)
  • loadNeo4jConfigFromEnv (17-25)
test/memory-auto.test.ts (3)
src/memory/neo4j.ts (3)
  • Neo4jConfig (3-8)
  • loadNeo4jConfigFromEnv (17-25)
  • withNeo4jSession (47-58)
src/memory/auto.ts (1)
  • recordMessageMemory (33-214)
src/memory/inject.ts (1)
  • buildMemoryInjection (31-122)
src/index.ts (6)
src/core/jobs.ts (1)
  • workerJobs (151-151)
src/core/message-bus.ts (1)
  • messageBus (196-196)
src/vision/analyzer.ts (4)
  • hasImages (78-81)
  • replaceImagesWithAnalysis (321-351)
  • analyzeImages (229-297)
  • formatVisionAnalysis (306-316)
prompts/orchestrator.ts (1)
  • orchestratorPrompt (8-37)
src/core/passthrough.ts (3)
  • getPassthrough (9-12)
  • buildPassthroughSystemPrompt (52-64)
  • isPassthroughExitMessage (33-50)
src/memory/inject.ts (1)
  • buildMemoryInjection (31-122)
test/unit/core/message-bus.test.ts (2)
src/core/message-bus.ts (1)
  • MessageBus (25-194)
src/worker-bridge-plugin.mjs (1)
  • after (106-106)
src/memory/auto.ts (3)
src/memory/neo4j.ts (2)
  • Neo4jConfig (3-8)
  • loadNeo4jConfigFromEnv (17-25)
src/memory/graph.ts (5)
  • upsertMemory (42-78)
  • linkMemory (80-112)
  • getMemoryByKey (114-143)
  • trimMemoryByKeyPrefix (212-274)
  • trimGlobalMessageProjects (276-314)
src/memory/text.ts (2)
  • normalizeForMemory (24-27)
  • appendRollingSummary (39-42)
src/tools/tools-workers.ts (4)
src/workers/spawner.ts (1)
  • sendToWorker (609-743)
src/tools/state.ts (1)
  • ToolContext (5-9)
src/core/jobs.ts (1)
  • workerJobs (151-151)
src/core/message-bus.ts (1)
  • messageBus (196-196)
src/core/registry.ts (1)
src/core/worker-pool.ts (2)
  • registry (598-619)
  • workerPool (595-595)
test/vision-routing.test.ts (2)
src/vision/analyzer.ts (3)
  • formatVisionAnalysis (306-316)
  • replaceImagesWithAnalysis (321-351)
  • hasImages (78-81)
src/models/hydrate.ts (1)
  • hydrateProfileModelsFromOpencode (20-141)
src/core/progress.ts (3)
src/core/logger.ts (1)
  • logger (49-54)
test/helpers/metrics.ts (1)
  • formatDuration (525-531)
src/vision/analyzer.ts (1)
  • createVisionProgress (357-357)
src/vision/analyzer.ts (2)
src/core/progress.ts (3)
  • ProgressHandle (30-39)
  • ToastFn (42-42)
  • createVisionProgress (224-267)
src/core/logger.ts (1)
  • logger (49-54)
test/unit/core/passthrough.test.ts (1)
src/core/passthrough.ts (6)
  • clearAllPassthrough (29-31)
  • getPassthrough (9-12)
  • setPassthrough (14-23)
  • clearPassthrough (25-27)
  • isPassthroughExitMessage (33-50)
  • buildPassthroughSystemPrompt (52-64)
src/memory/graph.ts (1)
src/memory/neo4j.ts (2)
  • withNeo4jSession (47-58)
  • Neo4jConfig (3-8)
src/core/message-bus.ts (1)
src/types/index.ts (1)
  • WakeupPayload (290-303)
test/unit/core/bridge-server.test.ts (2)
src/core/bridge-server.ts (1)
  • startBridgeServer (36-201)
src/core/message-bus.ts (1)
  • messageBus (196-196)
src/ux/vision-router.ts (4)
src/vision/analyzer.ts (3)
  • VisionResult (29-35)
  • analyzeImages (229-297)
  • createVisionProgress (357-357)
src/core/progress.ts (2)
  • ToastFn (42-42)
  • createVisionProgress (224-267)
src/core/registry.ts (1)
  • registry (10-10)
src/workers/spawner.ts (1)
  • spawnWorker (229-513)
🔇 Additional comments (98)
package.json (1)

26-27: LGTM!

The new bench script follows a clean pattern for running performance benchmarks. Using bun test with glob patterns for *.bench.ts files is a sensible approach that separates benchmarks from regular unit tests.

src/config/orchestrator.ts (3)

97-97: LGTM!

Parsing for wakeupInjection follows the established pattern for boolean UI options.


193-224: LGTM!

The expanded memory configuration parsing correctly:

  • Validates the parent object with isPlainObject before accessing nested properties
  • Type-checks each property before assignment
  • Follows the established pattern used elsewhere in this file (e.g., notifications.idle, workflows.roocodeBoomerang)

338-360: LGTM!

The new defaults for memory configuration are well-structured and align with the schema definitions. The values appear reasonable:

  • Summaries enabled by default with 2000 char limits
  • Trim limits provide sensible bounds for message retention
  • Inject settings favor summaries over raw messages (good for context efficiency)
src/core/passthrough.ts (3)

1-12: LGTM!

Clean session state management with proper null/undefined handling. The getPassthrough function correctly returns undefined for falsy session IDs.


14-23: setPassthrough preserves enabledAt on re-entry—verify this is intentional.

When updating an existing passthrough session with a different workerId, the code preserves the original enabledAt timestamp. This means switching workers mid-session won't reset the "enabled since" time. If you want fresh timestamps when changing workers, consider resetting enabledAt when workerId changes:

 const next: PassthroughState = {
   workerId,
-  enabledAt: existing?.enabledAt ?? Date.now(),
+  enabledAt: existing?.workerId === workerId ? (existing.enabledAt ?? Date.now()) : Date.now(),
   updatedAt: Date.now(),
 };

33-50: LGTM!

The strict phrase matching via Set is a good design choice—O(1) lookups and explicit control over exit triggers prevent accidental exits from conversational text containing words like "back".
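The Set-based matching described above can be sketched as follows; the phrase list here is hypothetical, the real set lives in src/core/passthrough.ts:

```typescript
// Exact-phrase exit detection: O(1) lookup, and conversational text that
// merely contains a trigger word (e.g. "back") never matches.
const EXIT_PHRASES = new Set(["exit passthrough", "back to orchestrator"]);

function isExitMessage(text: string): boolean {
  return EXIT_PHRASES.has(text.trim().toLowerCase());
}
```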

schema/orchestrator.schema.json (2)

21-26: LGTM!

The wakeupInjection property is well-documented with a clear description of its purpose. Default true aligns with the config parser defaults.


132-166: LGTM!

The expanded memory schema properly defines all nested configuration blocks (summaries, trim, inject) with:

  • Consistent use of additionalProperties: false for strict validation
  • Sensible defaults matching the TypeScript implementation
  • Clear property names following existing conventions
src/types/index.ts (3)

34-35: LGTM!

The injectRepoContext property is a clear opt-in flag for docs workers that need repository context on launch.


116-148: LGTM!

The MemoryConfig type expansion is comprehensive and well-documented with inline comments explaining each section's purpose. The structure mirrors the schema and config defaults.


289-312: LGTM!

The WakeupPayload interface provides a well-structured contract for worker-to-orchestrator wakeup events:

  • Clear discriminated union for reason enables type-safe handling
  • Optional data field allows extensibility without breaking changes
  • The worker:wakeup event integration follows the existing pattern in OrchestratorEvents
src/core/progress.ts (2)

201-214: LGTM!

The formatDuration helper provides human-readable output. The singleton export is appropriate for centralized progress management.


224-266: LGTM!

The createVisionProgress helper provides a clean semantic API for vision-specific progress flows. The decision to create a dedicated ProgressManager when a custom toast is provided ensures isolation and prevents interference with the global singleton.

test/fixtures/orchestrator-configs/high-concurrency.json (1)

38-70: LGTM!

Consistent model update across all stress worker profiles. The opencode/gpt-5-nano naming follows the provider/model convention used elsewhere in the codebase.

test/fixtures/profiles/slow-worker.json (1)

4-4: LGTM!

Model standardization aligns with other test fixture updates. The slow behavior is correctly defined in systemPrompt rather than relying on the model name.

test/unit/core/bridge-server.test.ts (1)

1-61: LGTM! Well-structured tests for bridge server messaging.

The tests effectively verify:

  • Message routing to orchestrator with jobId preservation
  • Wakeup event generation for both messages and reports
  • Integration between bridge endpoints and message bus

The setup/teardown is clean, and assertions cover both primary behavior and side effects.

test/helpers/fixtures.ts (1)

78-78: Model update aligns with test fixture standardization.

The model change to "opencode/gpt-5-nano" is consistent with the broader test fixture updates across the codebase. Verification of model availability was already requested for the related fixture files.

test/unit/memory/text.test.ts (1)

1-31: LGTM! Comprehensive unit tests for memory text utilities.

The tests cover the essential behaviors:

  • Code block stripping and secret redaction in normalizeForMemory
  • Hard length limits with trim indicators
  • Rolling summary boundary enforcement

Test assertions verify both presence/absence of expected content and length constraints.

test/unit/core/passthrough.test.ts (1)

1-37: LGTM! Clean tests for passthrough session management.

The tests effectively validate:

  • Session CRUD operations with proper state isolation via beforeEach
  • Exit command detection with case-insensitive matching
  • System prompt generation includes required markers for internal bypass logic

Test structure is clear and focused.

src/config/profiles.ts (3)

39-39: New public API field added to docs profile.

The injectRepoContext: true flag has been added to the docs worker profile. This is a new public API field that enables automatic repository context injection when the docs worker is spawned.

Ensure this behavior is documented and that downstream code properly handles this field.


20-27: System prompt improvements enhance clarity.

The expanded bullet-point formatting in system prompts across multiple profiles improves readability and makes worker responsibilities more explicit. These changes maintain the original intent while improving structure.

Also applies to: 44-51, 62-69, 85-92, 108-114, 127-135


15-15: The model identifier "zhipuai-coding-plan/glm-4.6v" is valid and vision-capable.

GLM-4.6V is a multimodal vision-language model with native support for images, video, and documents, with a 128K token context window. Ensure your deployment environment has access to this Zhipu AI model.

test/performance/memory-text.bench.ts (1)

1-38: LGTM! Baseline performance benchmarks established.

The benchmarks provide performance tracking for memory text utilities:

  • normalizeForMemory: 500 iterations with realistic mixed content (code blocks, secrets, repeated text)
  • appendRollingSummary: 500 iterations with large string inputs

The warmup iterations and basic validation (opsPerSecond > 0) are appropriate for establishing baseline metrics.
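The warmup-then-measure pattern described above can be sketched generically; the helper names are illustrative, not the test file's actual API:

```typescript
// Micro-benchmark skeleton: warm up the JIT, then time N iterations and
// derive ops/second from the elapsed wall-clock time.
function bench(fn: () => void, iterations = 500, warmup = 50) {
  for (let i = 0; i < warmup; i++) fn();
  const start = performance.now();
  for (let i = 0; i < iterations; i++) fn();
  const elapsedMs = performance.now() - start;
  return { elapsedMs, opsPerSecond: iterations / (elapsedMs / 1000) };
}
```

Asserting only `opsPerSecond > 0` (as the test file does) establishes a baseline without making the suite flaky on slow CI runners.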

test/unit/core/message-bus.test.ts (4)

5-19: LGTM! Message handling and listener notification verified.

The test confirms:

  • jobId is preserved on messages
  • Listeners receive message notifications
  • Messages are stored and retrievable by recipient
  • Message ID is properly generated and propagated

21-34: LGTM! Terminal wakeup deduplication works correctly.

The test validates that duplicate terminal wakeups (result_ready, error) within the deduplication window:

  • Return the same event ID
  • Result in only one history entry
  • Trigger the listener only once

This prevents redundant orchestrator wakeups for the same job completion.


36-41: LGTM! Progress wakeups bypass deduplication.

The test confirms that progress updates are NOT deduplicated, allowing multiple progress notifications for the same job. This is correct behavior for streaming/incremental updates.


43-51: LGTM! Wakeup filtering logic validated.

The test verifies hasWakeup correctly filters by:

  • jobId (exact match)
  • reason (exact match)
  • after timestamp (events after threshold)
  • Negative cases (non-matching filters return false)
prompts/orchestrator.ts (1)

1-37: LGTM!

The orchestrator prompt is well-structured with clear sections for tool usage, delegation strategies, and the vision protocol. The explicit instruction to not use MCP tools directly and the clear vision handling protocol (pending vs completed analyses) aligns well with the wakeup/delegation architecture introduced in this PR.

src/core/bridge-server.ts (3)

101-124: LGTM!

The message handling correctly propagates jobId through the message bus and appropriately wakes the orchestrator when messages are addressed to it. The wakeup payload includes proper correlation data (messageId, sessionId) for traceability.


136-167: LGTM!

The wakeup endpoint is well-implemented with proper validation for required fields, worker activity tracking, and returns useful correlation data (id, timestamp) in the response.


169-181: LGTM!

The wakeup history endpoint correctly handles query parameter parsing with appropriate defaults and validation. The integration with messageBus.getWakeupHistory() is clean.

test/unit/vision/analyzer.test.ts (1)

1-339: LGTM! Comprehensive test coverage for the vision analyzer module.

The test suite provides excellent coverage across all exported functions with well-structured test cases covering:

  • Edge cases (null/undefined inputs, empty arrays)
  • Various image input formats (base64, data URLs, file parts, clipboard)
  • Happy paths and error handling
  • End-to-end integration workflows

The mock sendToVisionWorker pattern cleanly isolates the analyzer logic from the actual worker implementation.

src/worker-bridge-plugin.mjs (3)

10-14: LGTM!

The timeout configuration helper is well-implemented with sensible defaults and proper validation.


16-61: LGTM!

The timeout implementation using AbortController is correct. The finally block ensures the timer is always cleaned up, preventing potential resource leaks.


113-142: LGTM!

The wakeup_orchestrator tool is well-designed with a descriptive docstring and appropriate arguments. The enum for reason matches the types expected by the /v1/wakeup endpoint.

src/memory/text.ts (2)

29-37: LGTM! Note on edge case behavior.

The shorten algorithm is well-designed. For extremely small maxChars values (smaller than the marker itself), the output would be just the marker with no actual content. This is likely fine since callers typically use clamp() to enforce minimum bounds (e.g., 200 chars in buildMemoryInjection).


1-42: LGTM! Well-designed text processing utilities.

The module provides a cohesive set of utilities for memory text handling:

  • Secret redaction covers common patterns (API keys, tokens, PEM keys)
  • Code block stripping keeps summaries concise
  • The shorten algorithm intelligently preserves head/tail context
  • Rolling summary appends new entries while respecting length limits

The composition in normalizeForMemory follows a logical order (redact → strip → normalize → truncate).
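The head/tail truncation behavior described above can be sketched as below; the marker text is an assumption, see shortenWithMarker in src/memory/text.ts for the real implementation:

```typescript
// Keep the start and end of the text, dropping the middle, so both the
// opening context and the latest content survive truncation.
function shortenSketch(text: string, maxChars: number): string {
  const marker = "\n…[trimmed]…\n";
  if (text.length <= maxChars) return text;
  // If maxChars is smaller than the marker itself, the output degenerates
  // to just the marker — the edge case noted in the review above.
  const budget = Math.max(0, maxChars - marker.length);
  const head = Math.ceil(budget / 2);
  const tail = budget - head;
  return text.slice(0, head) + marker + (tail > 0 ? text.slice(-tail) : "");
}
```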

test/memory-auto.test.ts (2)

35-110: LGTM! Robust Docker-based test infrastructure.

The Neo4j container setup is well-implemented:

  • Unique container naming prevents collisions across parallel test runs
  • Ephemeral port mapping enables test isolation
  • Comprehensive error handling with proper cleanup on all failure paths
  • Informative error messages include container logs for debugging

129-265: LGTM! Comprehensive integration test coverage.

The test effectively validates the full memory recording flow including:

  • Message recording and retrieval
  • Project/user node creation
  • Session and project trimming enforcement
  • Summary generation and bounds checking
  • Memory injection output bounds

The use of parameterized Cypher queries in assertions is good practice.

src/memory/inject.ts (1)

31-122: LGTM! Well-structured memory injection builder.

The function cleanly assembles memory context with:

  • Proper early exits for disabled/unconfigured states
  • Sensible defaults and bounds for all configuration options
  • Graceful error handling (swallows failures, returns best-effort output)
  • Clear markdown structure with appropriate section allocation

The allocation strategy (50% project summary, 35% session summary, remainder for notes) is reasonable for prioritizing high-level context.
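The split described above can be sketched as a simple budget allocator; the percentages come from the review text, and the actual rounding in buildMemoryInjection may differ:

```typescript
// Divide a character budget across the three injection sections:
// 50% project summary, 35% session summary, remainder for notes.
function allocateBudget(total: number) {
  const project = Math.round(total * 0.5);
  const session = Math.round(total * 0.35);
  const notes = total - project - session; // remainder guarantees the parts sum to total
  return { project, session, notes };
}
```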

src/tools/tools-workers.ts (3)

110-126: LGTM! Worker-to-worker communication support added.

The from parameter enables tracking message sources in inter-worker communication. The implementation correctly propagates this through to sendToWorker.


156-162: LGTM! Session context now tracked in async jobs.

Adding sessionId and requestedBy to job creation enables proper session-scoped handling and ownership tracking for the wakeup flow.


173-188: Verify the 250ms delay assumption for fallback wakeups.

The fallback wakeup logic correctly handles cases where workers don't call wakeup_orchestrator. However, the 250ms delay assumes worker-initiated wakeups will arrive within that window. If network latency or worker processing causes delays beyond 250ms, duplicate wakeups could occur (though hasWakeup check should mitigate this).

Consider whether 250ms is sufficient for all expected scenarios, or if this value should be configurable.
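If configurability is wanted, the delay could be read from the environment with a validated fallback, mirroring how the PR parses OPENCODE_VISION_TIMEOUT_MS elsewhere. The env var name here is an assumption, not an existing setting:

```typescript
// Hypothetical: allow tuning the fallback-wakeup delay via environment,
// defaulting to 250 ms when unset or invalid.
const FALLBACK_WAKEUP_DELAY_MS = (() => {
  const raw = Number(process.env.OPENCODE_WAKEUP_FALLBACK_MS);
  return Number.isFinite(raw) && raw > 0 ? raw : 250;
})();
```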

src/memory/graph.ts (5)

17-21: LGTM! Centralized projectId validation.

The requireProjectId helper provides consistent enforcement of the project scope contract - throwing early if projectId is missing when required.


54-68: LGTM! Dynamic merge pattern for scope-aware upserts.

The conditional pattern building correctly handles both global and project scopes, and the spread operator ensures projectId is only included in params when relevant.


114-143: LGTM! Key-based memory retrieval added.

The getMemoryByKey function follows the established pattern for scope-aware queries and correctly returns undefined when no record is found.


212-274: LGTM! Prefix-based trimming with keepLatest support.

The implementation correctly handles two cases:

  1. keepLatest=0: Deletes all matching nodes
  2. keepLatest>0: Orders by updatedAt DESC and deletes older entries

The use of DETACH DELETE ensures relationships are properly cleaned up.


276-314: No issue here—message keys follow a consistent, deterministic format.

The keys with "message:" prefix are always constructed in src/memory/auto.ts via the recordMessage function, which generates them as either message:{projectId}:{session}:{keyBase} (for global scope) or message:{session}:{keyBase} (for project scope). Since trimGlobalMessageProjects operates on global scope, the Cypher expression split(n.key, ':')[1] reliably extracts the projectId (or "unknown") as expected. The format is enforced at the point of creation, so no validation is needed.

Likely an incorrect or invalid review comment.

src/memory/auto.ts (3)

29-31: LGTM! Simple clamp utility.

Standard numeric clamping function for bounding input values.


48-51: Key format differs between scopes - ensure consistency with trimming logic.

For global scope: message:${projectId}:${session}:${keyBase}
For project scope: message:${session}:${keyBase}

This aligns with the trimGlobalMessageProjects function which uses split(key, ':')[1] to extract projectId from global keys. The format is consistent.
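The two key formats quoted above can be sketched as a single constructor; recordMessage in src/memory/auto.ts is the real source, this is illustrative only:

```typescript
// Global-scope keys embed the projectId so cross-project trimming can
// recover it later via key.split(":")[1]; project-scope keys omit it.
function messageKey(
  scope: "global" | "project",
  projectId: string,
  session: string,
  keyBase: string
): string {
  return scope === "global"
    ? `message:${projectId}:${session}:${keyBase}`
    : `message:${session}:${keyBase}`;
}
```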


77-103: LGTM! Global index mirroring for cross-project retrieval.

Maintaining lightweight global indexes for users and projects enables cross-project queries. The duplicate upserts (scope-specific and global) ensure data is accessible from both contexts.

test/vision-routing.test.ts (5)

1-28: LGTM! Well-structured test setup.

The test file includes appropriate imports, a minimal valid PNG fixture for testing, and clear organization into describe blocks.


77-176: Excellent coverage for hasImages detection.

The tests comprehensively cover:

  • Different image part types (type='image', type='file' with mime)
  • Various MIME types (png, jpeg, webp)
  • Data URLs and clipboard URLs
  • Negative cases (text-only, non-image files, empty arrays)
  • Malformed inputs (null, undefined, non-array)

232-254: Good MIME type inference tests.

Testing that file extensions (.jpg, .webp) correctly infer MIME types ensures the prompt building logic handles real-world file attachments properly.


462-499: Concurrent call tests verify serialization behavior.

The tests correctly verify that concurrent analyzeImages calls are handled gracefully without transient "busy" errors, and that deduplication works as expected with matching requestKey values.


517-527: LGTM! Proper test cleanup.

The afterEach hook ensures spawned workers are cleaned up between tests, preventing resource leaks and test interference.

src/core/registry.ts (2)

1-10: LGTM! Clean backwards-compatibility layer.

The module correctly documents its deprecated status and provides a compatibility shim by re-exporting from the new worker-pool module.


13-30: WorkerRegistry class correctly delegates to workerPool.

Using .bind(workerPool) ensures proper this context for all delegated methods. This maintains API compatibility while routing to the new implementation.

src/tools/tools-ux.ts (4)

43-50: LGTM! Auto-spawn logic for passthrough mode.

The code correctly checks if the worker exists and is not stopped before attempting to spawn. Silently catching spawn errors with .catch(() => {}) is appropriate here since passthrough can still work if the worker already exists.


63-97: LGTM! Generic passthrough mode tool.

setPassthroughMode generalizes enableDocsPassthrough to work with any worker, with proper validation and error messaging when the profile doesn't exist.


287-356: Comprehensive orchestrator output tool.

The orchestratorOutput tool provides a unified view of:

  • Wakeup events
  • Inter-agent messages
  • Job history
  • Internal logs

This is valuable for debugging and monitoring orchestrator activity.


614-671: LGTM! Wakeup history and clear tools.

The orchestratorWakeups tool provides good filtering options (workerId, limit, after, clear) and clear documentation of what wakeup reasons mean. The emoji-based formatting improves readability.

src/core/message-bus.ts (5)

14-23: LGTM! Well-defined wakeup types and constants.

The types clearly extend WakeupPayload with an id, and the constants provide reasonable defaults for history size (100) and dedupe window (1.5s).


41-53: LGTM! Message listener notification added.

The send method now notifies registered message listeners. Error handling correctly swallows listener errors to prevent one bad listener from blocking others.


87-133: Wakeup deduplication logic is sound.

The implementation:

  1. Only deduplicates terminal events (result_ready, error) with a jobId
  2. Uses a time-based window (1.5s) to coalesce rapid duplicates
  3. Opportunistically prunes expired entries to prevent memory growth
  4. Returns the cached event for duplicates, ensuring consistent IDs

This prevents redundant wakeups while allowing progress updates.
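The windowed coalescing described above can be sketched as follows; the map layout and key format are assumptions, not the exact MessageBus internals:

```typescript
// Coalesce duplicate terminal wakeups (same job + reason) arriving within
// the dedupe window, returning the cached event ID for duplicates.
const DEDUPE_WINDOW_MS = 1500;
const recent = new Map<string, { id: string; at: number }>();

function dedupeWakeup(
  jobId: string,
  reason: "result_ready" | "error",
  newId: string
): string {
  const key = `${jobId}:${reason}`;
  const hit = recent.get(key);
  const now = Date.now();
  if (hit && now - hit.at < DEDUPE_WINDOW_MS) return hit.id; // duplicate: reuse ID
  recent.set(key, { id: newId, at: now });
  return newId;
}
```

Progress events never enter the map, so they are always delivered — matching the intent stated above.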


144-153: hasWakeup filter logic is correct.

The method correctly applies optional filters (workerId, jobId, reason, after) with AND semantics, returning true if any matching event exists.


188-193: Dedupe key excludes progress events intentionally.

Only result_ready and error reasons generate dedupe keys, allowing multiple progress updates for the same job. This is the correct behavior for progress tracking.

src/ux/vision-router.ts (5)

1-26: LGTM! Clear deprecation and backwards-compat re-exports.

The module clearly documents its deprecated status and provides re-exports from the new analyzer module for existing consumers.


83-110: Vision worker selection logic is sound.

The code correctly:

  1. Prefers a ready vision worker
  2. Falls back to any vision worker
  3. Spawns a new one if needed and allowed
  4. Returns a clear error if none available

112-123: LGTM! Worker readiness wait with timeout.

The code waits for the worker to become ready with a capped timeout (5 minutes max), providing appropriate error feedback if it times out.


154-176: LGTM! Clean registry-based readiness observer.

The waitForWorkerReady helper correctly:

  1. Checks existing state first
  2. Sets up an event listener for updates
  3. Cleans up the listener on completion or timeout
  4. Uses a done flag to prevent race conditions

182-198: Diagnostics returns placeholder values for removed features.

The queueDepth and inFlightRequests return 0 with comments noting they're no longer used. This maintains API compatibility while indicating the values are no longer meaningful.

src/tools/index.ts (3)

46-61: LGTM! New tool exports are properly added.

The new exports (clearWakeups, clearPassthroughMode, orchestratorOutput, setPassthroughMode, orchestratorWakeups) are correctly re-exported from ./tools-ux and align with the expanded orchestrator capabilities for wakeup and passthrough control.


134-144: LGTM! Core orchestrator tools surface expanded appropriately.

The additions to coreOrchestratorTools properly expose the new observability (orchestrator_output, orchestrator_wakeups) and passthrough control tools (set_passthrough, clear_passthrough, enable_docs_passthrough). The grouping with comments is clear.


179-183: LGTM! Plugin tools include wakeup capabilities.

The wakeup tools (orchestrator_wakeups, clear_wakeups) are correctly added to pluginTools for plugin-level access.

src/index.ts (6)

72-76: LGTM! Safe environment variable parsing with fallback.

The IIFE correctly parses OPENCODE_VISION_TIMEOUT_MS, validates it's a positive finite number, and falls back to 300,000ms (5 minutes) if invalid.


104-117: LGTM! Wakeup session tracking is well-structured.

The session ID resolution logic (resolveWakeupTargetSessionId) correctly prioritizes the event's session ID, then falls back to job-based lookup, then to the current orchestrator session.


456-509: LGTM! Async vision analysis with proper job tracking and error handling.

The vision analysis flow correctly:

  1. Creates a job for tracking
  2. Injects a placeholder immediately (non-blocking UX)
  3. Runs analysis asynchronously
  4. Updates job state on success/error
  5. Emits wakeup events for orchestrator resumption
  6. Has comprehensive error handling with toast notifications

536-540: LGTM! Memory recording configuration extended.

The new summaries and trim configuration options are correctly passed through to recordMessageMemory.


549-549: LGTM! Proper cleanup of wakeup listener on disposal.

The unsubscribeWakeup() call ensures the message bus listener is cleaned up when the server instance is disposed.


31-31: The import path "../prompts/orchestrator" is correct. The prompts/orchestrator.ts file is located at the repository root, and from src/index.ts, using ../prompts/orchestrator correctly navigates up from the src/ directory to access it. Changing it to ./prompts/orchestrator would be incorrect, as there is no src/prompts/ directory.

Likely an incorrect or invalid review comment.

src/vision/analyzer.ts (5)

1-11: LGTM! Clear module documentation explaining simplifications.

The header comment clearly documents the design decisions: removing internal queue, deduplication, and logging in favor of delegating to worker-pool and progress API. This is a good architectural simplification.


66-81: LGTM! Comprehensive image part detection.

The isImagePart function correctly identifies images from multiple formats: direct type: "image", file parts with image MIME types, data URLs, and clipboard references. The hasImages wrapper safely handles non-array inputs.


90-99: LGTM! Parallel image extraction for performance.

Using Promise.all for parallel extraction is appropriate here since image reads are I/O-bound and independent.
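The pattern amounts to mapping independent reads onto promises and awaiting them together; readImage here is a stand-in for the real extraction call.

```typescript
// Minimal sketch: fan out independent I/O-bound reads with Promise.all.
async function extractImages(
  urls: string[],
  readImage: (url: string) => Promise<Uint8Array>,
): Promise<Uint8Array[]> {
  return Promise.all(urls.map((url) => readImage(url)));
}
```

Note that Promise.all fails fast: one rejected read rejects the whole batch, which is acceptable when a partial image set is not useful.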


229-297: LGTM! Well-structured analysis workflow with progress tracking.

The analyzeImages function has:

  • Proper timeout configuration with sensible default (5 minutes)
  • Progress updates at logical stages
  • Clean success/error result formatting
  • Duration tracking for observability

306-316: LGTM! Clean result formatting with trimming.

The formatter correctly handles success cases with trimming and error cases, returning undefined for empty/invalid results.

src/workers/spawner.ts (7)

46-72: LGTM! Plugin path resolution with proper file URL handling.

The resolver correctly:

  1. Checks user config directory first
  2. Falls back to relative paths from the module
  3. Uses pathToFileURL to avoid package specifier misinterpretation by Bun

74-92: LGTM! Worker bridge tool verification with clear error messages.

The checkWorkerBridgeTools function properly verifies required tools (message_tool, worker_inbox, wakeup_orchestrator) are available, with descriptive error extraction from the SDK response.


186-199: LGTM! Robust process group termination for cleanup.

The close function correctly:

  1. Kills the process group on POSIX (-pid) to terminate children
  2. Falls back to individual process kill on Windows or if group kill fails
  3. Has a 2-second grace period before SIGKILL
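The shape of that termination flow can be sketched as follows; signal and platform details here are assumptions based on the review, not the spawner's exact code.

```typescript
// Hedged sketch: terminate a worker's process group on POSIX, fall back to a
// single-process kill, then escalate to SIGKILL after a grace period.
function killWorker(pid: number, graceMs = 2000): void {
  try {
    if (process.platform !== "win32") {
      process.kill(-pid, "SIGTERM"); // negative pid targets the whole group
    } else {
      process.kill(pid, "SIGTERM"); // Windows has no POSIX process groups
    }
  } catch {
    // Group kill failed (e.g. no group): try the individual process.
    try {
      process.kill(pid, "SIGTERM");
    } catch {
      /* already gone */
    }
  }
  // Escalate after the grace period; unref so this timer never keeps the host alive.
  setTimeout(() => {
    try {
      process.kill(pid, "SIGKILL");
    } catch {
      /* already exited */
    }
  }, graceMs).unref?.();
}
```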

324-334: LGTM! Preflight tool verification with actionable error messages.

The error message when bridge tools are missing includes:

  • Which tools are missing
  • Which tools were loaded
  • The plugin path being used

This provides excellent debuggability.

411-463: LGTM! Clear worker prompt with communication protocol documentation.

The injected prompt provides workers with:

  1. Available communication tools and their purposes
  2. Required behavior for job reporting
  3. Fallback guidance if tools fail

This is well-structured and actionable for the LLM.

640-664: LGTM! Source context and job instructions in worker messages.

The message construction properly:

  1. Identifies the source (orchestrator or worker) in a parseable XML block
  2. Adds job-specific instructions when a jobId is present
  3. Uses sync-mode instructions for non-job requests to prevent unnecessary tool calls

97-104: No action needed. The review comment is based on an incorrect premise: worker-pool.ts does not exist in the repository. Additionally, the functions are not duplicates—the version in src/helpers/process.ts has an extended signature with an options?: IsProcessAliveOptions parameter and includes special handling for EPERM errors, while the version in spawner.ts is a simpler utility used locally within that module.

Likely an incorrect or invalid review comment.

src/core/worker-pool.ts (4)

1-14: LGTM! Clear module documentation explaining consolidation.

The header effectively communicates that this module consolidates three previous modules (registry, device-registry, profile-lock) and lists the key improvements: single spawn gate, warm worker cache, automatic cleanup, and event-based updates.


157-192: LGTM! Spawn deduplication prevents concurrent duplicate spawns.

The getOrSpawn method properly:

  1. Checks in-memory registry first
  2. Returns in-flight spawn promise if exists (deduplication)
  3. Tries device registry reuse
  4. Creates new spawn with promise tracking
  5. Cleans up tracking in finally block
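The deduplication core of this flow can be sketched with an in-flight promise map; the class below is illustrative and omits the device-registry reuse step.

```typescript
// Simplified spawn gate: concurrent getOrSpawn calls for the same id share
// one spawn promise instead of spawning duplicates.
class SpawnGate<T> {
  private ready = new Map<string, T>();
  private inFlight = new Map<string, Promise<T>>();

  async getOrSpawn(id: string, spawn: () => Promise<T>): Promise<T> {
    const existing = this.ready.get(id);
    if (existing !== undefined) return existing; // 1. in-memory registry hit
    const pending = this.inFlight.get(id);
    if (pending) return pending; // 2. dedupe concurrent spawns
    const p = spawn().then((worker) => {
      this.ready.set(id, worker);
      return worker;
    });
    this.inFlight.set(id, p);
    try {
      return await p;
    } finally {
      this.inFlight.delete(id); // 3. clean up tracking
    }
  }
}
```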

595-619: LGTM! Backwards compatibility layer for gradual migration.

The registry export provides a compatibility shim that delegates to workerPool, allowing existing code to continue working while consumers migrate to the new API.


554-571: LGTM! Cleanup uses Promise.allSettled for resilience.

The stopAll method correctly uses Promise.allSettled so that stopping is attempted for every worker even if some stops fail. State is properly cleared after cleanup.
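The allSettled-based shutdown can be sketched as below; the worker shape and the idea of returning failed ids are illustrative, not the pool's exact API.

```typescript
// Hedged sketch: attempt to stop every worker; collect failures instead of
// letting the first rejection abort the loop.
async function stopAll(
  workers: Array<{ id: string; stop: () => Promise<void> }>,
): Promise<string[]> {
  const results = await Promise.allSettled(workers.map((w) => w.stop()));
  return results
    .map((r, i) => (r.status === "rejected" ? workers[i].id : null))
    .filter((id): id is string => id !== null); // ids that could not be stopped
}
```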

Comment on lines +99 to +112
const handle: ProgressHandle = {
update: (status: string, percent?: number) => {
this.updateProgress(id, status, percent);
},
complete: (message?: string) => {
this.completeProgress(id, message);
},
fail: (error: string) => {
this.failProgress(id, error);
},
get: () => ({ ...progress }),
};

return handle;

⚠️ Potential issue | 🟡 Minor

get() returns stale snapshot—update to fetch current state.

The get() method captures progress from the closure at creation time and returns a spread of that initial object. However, update, complete, and fail mutate the object stored in this.active, not the closure's progress reference. This causes get() to return outdated data.

🔎 Proposed fix
     const handle: ProgressHandle = {
       update: (status: string, percent?: number) => {
         this.updateProgress(id, status, percent);
       },
       complete: (message?: string) => {
         this.completeProgress(id, message);
       },
       fail: (error: string) => {
         this.failProgress(id, error);
       },
-      get: () => ({ ...progress }),
+      get: () => {
+        const current = this.active.get(id);
+        return current ? { ...current } : { ...progress };
+      },
     };
🤖 Prompt for AI Agents
In src/core/progress.ts around lines 99 to 112, the returned handle's get()
currently closes over the initial `progress` snapshot and returns that stale
copy; change get() to read the up-to-date progress from this.active[id] (or
return a safe empty/default if missing) and return a shallow copy of that live
object so updates via update/complete/fail are reflected immediately.

"id": "test-worker",
"name": "Test Worker",
"model": "test-model",
"model": "opencode/gpt-5-nano",

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

opencode/gpt-5-nano model identifier OpenRouter API

💡 Result:

Use the OpenRouter model ID "openai/gpt-5-nano". See OpenRouter docs/examples showing model parameter as "openai/gpt-5-nano". [1][2][3]

Sources:

  • OpenRouter SDK callModel example (model parameter example "openai/gpt-5-nano"). [1]
  • OpenRouter text-generation example using model: 'openai/gpt-5-nano'. [2]
  • OpenRouter models/quickstart pages referencing the OpenAI GPT‑5 Nano listing. [3]

Update model identifier from "opencode/gpt-5-nano" to "openai/gpt-5-nano".

The model identifier is incorrect. OpenRouter documentation confirms the valid model ID is "openai/gpt-5-nano" with the "openai/" prefix, not "opencode/". Using the current identifier will cause API calls to fail with an invalid model error.

🤖 Prompt for AI Agents
In test/fixtures/profiles/test-worker.json around line 4, the model identifier
is incorrect ("opencode/gpt-5-nano"); update that value to "openai/gpt-5-nano"
so API calls use the valid OpenRouter model ID; ensure the JSON string is
properly quoted and the file remains valid JSON after the change.

@0xSero
Copy link
Owner Author

0xSero commented Dec 23, 2025

Superseded by PR #5 which merged v0.2.3 directly to main. The release branch is no longer needed.

@0xSero 0xSero closed this Dec 23, 2025