
Commit 3bce84d

Authored by wenshao, claude, qwencoder, and yiliang114
feat(cli, webui): add follow-up suggestions feature (#2525)
* feat(cli, webui): add follow-up suggestions feature

  Implement context-aware follow-up suggestions that appear after task completion, suggesting relevant next actions like "commit this", "run tests", etc.

  - Add `followup/` module with types, generator, and rule-based provider
  - Export follow-up types and functions from core index
  - 8 default suggestion rules covering common workflows
  - Add `useFollowupSuggestionsCLI` hook for Ink/React
  - Integrate suggestion generation in AppContainer when streaming completes
  - Add Tab key to accept, arrow keys to cycle through suggestions
  - Display suggestions as ghost text in input prompt
  - Add `useFollowupSuggestions` hook for React
  - Update InputForm to display suggestions as placeholder
  - Add CSS styling for suggestion appearance with counter
  - Add keyboard handlers (Tab, arrow keys)

  Behavior:
  - After streaming completes with tool calls, suggestions appear
  - Tab accepts the current suggestion
  - Left/Right arrows cycle through multiple suggestions
  - Typing or pasting dismisses the suggestion

  Known gaps:
  - Shell command rules (tests, git, npm install) don't work yet due to history not storing tool arguments
  - VSCode extension integration pending
  - Web UI needs parent app integration for suggestion generation

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: resolve merge conflicts and build errors

  - Rebased on upstream main (5d02260)
  - Fixed JSX structure in InputPrompt.tsx
  - Changed `return;` to `return true;` in follow-up handlers
  - Added @agentclientprotocol/sdk to core package dependencies
  - Restored correct BaseTextInput usage (self-closing, no children)
  - Follow-up suggestions now shown via placeholder prop only

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: remove @agentclientprotocol/sdk from core package.json

  The types are imported in fileSystemService.ts, but the package should not be a runtime dependency of core. It's provided by the CLI package, which depends on core. This was causing package-lock.json sync issues on Node.js 24.x CI.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: restore alphabetical order of dependencies in core/package.json

* fix: restore package-lock.json from upstream to fix Node 24.x CI

* fix: resolve acpConnection test failure and ESLint warning

  Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* style: apply prettier formatting after merge

  Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(followup): address review issues in follow-up suggestions

  - Export followupState.ts from core index (was dead code)
  - Refactor CLI and WebUI hooks to use shared followupReducers (eliminate duplication)
  - Move side effects out of setState updaters via queueMicrotask
  - Fix AppContainer useEffect dependency on unstable historyManager.history reference
  - Reorder matchesRule to check pattern before condition (cheaper first)
  - Make RuleBasedProvider collect from all matching rules with dedup and limit
  - Add missing resetGenerator export for testing
  - Add explicit `implements SuggestionProvider` to RuleBasedProvider
  - Fix unstable followup object in useEffect dependency arrays
  - Merge duplicate imports to fix eslint import/no-duplicates warnings
  - Standardize copyright year to 2025
  - Add test files for followupState, ruleBasedProvider, suggestionGenerator

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): address review feedback from PR #2525

  - Fix acceptingRef race: set lock synchronously before queueMicrotask
  - Derive hasError/wasCancelled from actual tool call statuses
  - Incorporate rule priority into suggestion priority calculation
  - Clear suggestions immediately when setSuggestions([]) is called
  - Add !completion.showSuggestions guard to Tab handler
  - Fix onAcceptFollowup type from (string) => void to () => void
  - Fix ToolCallInfo.name doc examples to match display names
  - Scope CSS counter ::after to data-has-suggestion + empty conditions
  - Reset regex lastIndex before test() for g/y flag safety
  - Stabilize hook return with useMemo + onAcceptRef pattern
  - Add @qwen-code/qwen-code-core as webui external + peerDependency

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): address second round of review feedback

  - Scope CSS max-width to match counter condition (not count=1)
  - Only dismiss followup on printable character input, not navigation keys
  - Restrict tool_group scan to most recent contiguous block (current turn)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): clear suggestions on new turn, add search guards

  - Clear followupSuggestions when streaming starts (Idle → Responding) to prevent stale suggestions from previous turns
  - Add !reverseSearchActive && !commandSearchActive guards to the Tab handler to avoid keybinding conflicts with search modes

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): address third round of review feedback

  - Fix string pattern asymmetry: only match tool names when matchMessage=false
  - Collect tool_groups from last user message boundary, not contiguous tail
  - Flatten to individual tool calls before slicing to cap at 10 actual calls

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): fix arrow cycling guard and align rule conditions with patterns

  - Remove unreliable textContent check for arrow cycling in WebUI InputForm; rely on inputText state, which already accounts for zero-width spaces
  - Add 'error' to fix/bug rule condition to match its regex pattern
  - Add 'clean up' to refactor rule condition to match its regex pattern

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): reset acceptingRef in clear() to prevent deadlock

  If clear() is called during the accept debounce window, acceptingRef could remain stuck true permanently. Now reset in clear().

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): cancel pending timeout in dismiss() and accept()

  Prevents a stale suggestion timeout from re-showing suggestions after the user dismisses or accepts during the 300ms delay window.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): reset lastIndex in removeRules() for g/y flag safety

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(vscode-ide-companion): mark @qwen-code/qwen-code-core as external in webview esbuild

  The webui package now declares @qwen-code/qwen-code-core as external in its vite build config. Without this change, the vscode-ide-companion webview esbuild (platform: 'browser') would try to bundle core's Node.js-only dependencies (undici, @grpc/grpc-js, fs, stream, etc.), causing 562 build errors during `npm ci`.

* fix: restore node_modules/@google/gemini-cli-test-utils workspace link in lockfile

  The top-level workspace symlink entry was accidentally removed by a local npm install in commit 004baae, which replaced it with a nested packages/cli/node_modules/ entry. npm ci requires the top-level link entry to be present in the lockfile; otherwise it fails with: "Missing: @google/gemini-cli-test-utils@0.13.0 from lock file"

  Also syncs the @qwen-code/qwen-code-core peerDependency into the lockfile to match the updated packages/webui/package.json.
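Several of the fixes above (reset lastIndex before test(), reset in removeRules()) guard the same JavaScript footgun: a regex created with the `g` or `y` flag is stateful, so a reused rule pattern can spuriously fail on the next call. A minimal sketch of the reset (the rule pattern here is illustrative, not one of the actual suggestion rules):

```typescript
// Sticky/global regexes remember where the previous match ended via lastIndex.
const pattern = /run tests/g;

function matchesRule(text: string): boolean {
  // Without this reset, a second test() on the same pattern resumes scanning
  // from lastIndex instead of index 0 and can miss an earlier match.
  pattern.lastIndex = 0;
  return pattern.test(text);
}
```

Repeated calls against the same input now return consistent results instead of alternating true/false.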
* refactor(followup): extract controller and improve rule matching

  - Extract createFollowupController for unified state management across CLI and WebUI
  - Refactor rule-based provider to match via assistant message keywords instead of tool arguments
  - Add enableFollowupSuggestions user setting in UI category
  - Decouple WebUI from @qwen-code/qwen-code-core by copying browser-safe state logic
  - Add followupHistory.ts for extracting suggestion context from CLI history
  - Add comprehensive tests for controller and rule matching scenarios
  - Use --app-primary CSS variable for consistency

  Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* refactor(webui): import followup state from core package

  - Remove followupState.ts from webui (moved to core)
  - Import FollowupSuggestion, FollowupState types from core
  - Add @qwen-code/qwen-code-core as peerDependency
  - Add core to vite external list
  - Update test to include id field in HistoryItem

  Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* refactor(followup): simplify generator, revert unrelated changes

  - Collapse FollowupSuggestionsGenerator class into a single generateFollowupSuggestions() function (152 → 26 lines)
  - Inline extractSuggestionContext into followupHistory.ts
  - Remove unused RuleBasedProvider.addRule/removeRules methods
  - Revert unrelated acpConnection.test.ts refactor
  - Fix followupHistory.test.ts HistoryItem missing id field
  - Reduce test verbosity (162 → 36 lines for generator tests)

* fix(followup): fix accept() deadlock and restore UMD globals mapping

  - Wrap the queueMicrotask callback in try/catch/finally to prevent the accepting lock from being permanently held when onAccept throws
  - Restore '@qwen-code/qwen-code-core': 'QwenCodeCore' in webui vite.config.ts globals (regression from d0f38a5)
  - Add test case verifying accept() recovers after a callback exception

* fix(followup): log accept callback errors instead of swallowing them

  Replace the empty catch {} with console.error to ensure onAccept errors remain visible for debugging while still preventing deadlock via finally. Update the test to verify the error is logged.

* refactor(webui): move followup hook to separate subpath entry

  Move useFollowupSuggestions from the root entry to a dedicated '@qwen-code/webui/followup' subpath so that consumers who only need UI components are not forced to install @qwen-code/qwen-code-core.

  - Add src/followup.ts as separate Vite lib entry
  - Remove followup exports from src/index.ts
  - Add ./followup exports map in package.json
  - Mark @qwen-code/qwen-code-core as optional peerDependency
  - Switch build from single-entry UMD to multi-entry ESM/CJS

* fix(webui): restore UMD build and isolate core from root type boundary

  - Restore UMD output for the root entry (used by CDN demos, export-html, etc.)
  - Build the followup subpath via a separate vite.config.followup.ts to avoid Vite's multi-entry + UMD limitation
  - Replace the FollowupState import in InputForm.tsx with a local structural type (InputFormFollowupState) so the root .d.ts no longer references @qwen-code/qwen-code-core
  - The root entry (JS + UMD + .d.ts) is now fully free of the core dependency; core is only required by the '@qwen-code/webui/followup' subpath

* refactor(followup): replace rule-based suggestions with LLM-based prompt suggestion

  Replace the hardcoded rule-based follow-up suggestion engine with an LLM-based prompt suggestion system, aligned with Claude Code's NES (Next-step Suggestion) architecture.

  Core changes:
  - Replace ruleBasedProvider with generatePromptSuggestion using BaseLlmClient.generateJson()
  - Port Claude Code's SUGGESTION_PROMPT and 14 filter rules (shouldFilterSuggestion)
  - Simplify state from multi-suggestion array to single string (FollowupState)
  - Add framework-agnostic controller with Object.freeze'd initial state

  Guard conditions (9 checks):
  - Settings toggle, non-interactive/SDK mode, plan mode
  - Permission/confirmation/loop-detection dialogs, elicitation requests
  - API error response detection, conversation history limit (slice -40)

  UI interaction (CLI + WebUI):
  - Tab: fill suggestion into input
  - Enter: accept and submit
  - Right Arrow: fill without submitting
  - Typing/paste: dismiss suggestion
  - Autocomplete conflict prevention

  Telemetry (PromptSuggestionEvent):
  - outcome (accepted/ignored/suppressed), accept_method (tab/enter/right)
  - time_to_accept_ms, time_to_ignore_ms, time_to_first_keystroke_ms
  - suggestion_length, similarity, was_focused_when_shown, prompt_id
  - Per-rule suppression logging with reason strings

  Deleted files:
  - ruleBasedProvider.ts/test, followupHistory.ts/test, types.ts (dead FollowupSuggestion type)

  13 rounds of adversarial audit, 17 issues found and fixed.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): address qwen3.6-plus-preview review findings

  - P0: Fix API error detection — check pendingGeminiHistoryItems for error items (API errors go to pending items, not historyManager.history).
  - P1: Don't log abort as 'error' in telemetry — aborts are normal user behavior (the user started typing), not errors.
  - P3: Early return in dismiss() when state is already cleared, avoiding a redundant applyState call after accept().

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(settings): update suggestion feature description to match current behavior

  Remove outdated "arrow keys to cycle" text — the feature now uses Tab/Right Arrow to accept and Enter to accept+submit (no cycling).

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): fix WebUI Enter submitting empty text + defend onOutcome

  - P0/P1: The WebUI Enter handler now passes the suggestion text explicitly via onSubmit(e, followupSuggestion) instead of relying on React setState (which is async and would leave inputText as "" in the closure).
  - P3: Wrap onOutcome callbacks in try/catch in both accept() and dismiss() so telemetry errors cannot block state transitions.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): allow setSuggestion(null) when disabled + fix dts clobber

  - setSuggestion(null) now always clears state/timers, even when disabled, preventing stale suggestions from lingering after a feature toggle.
  - Set insertTypesEntry: false in the followup vite config to prevent overwriting the main build's index.d.ts.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(webui): thread explicitText through submit chain for Enter accept

  handleSubmit and handleSubmitWithScroll now accept an optional explicitText parameter. When provided (e.g., from a prompt suggestion Enter accept), it is used instead of the closure-captured inputText, fixing the React setState race where onSubmit reads stale empty text.
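The accept() locking fixes above — take the lock synchronously before the microtask, release it in finally, and log callback errors instead of swallowing them — can be sketched as a simplified standalone version (not the actual controller code):

```typescript
// Hypothetical sketch of the accept() debounce lock described above.
let accepting = false;

function accept(onAccept: () => void): boolean {
  if (accepting) return false; // debounce: block double-accept
  accepting = true;            // take the lock synchronously, before queueMicrotask
  queueMicrotask(() => {
    try {
      onAccept();
    } catch (err) {
      // Log instead of an empty catch {}, so callback bugs stay visible.
      console.error("accept callback failed:", err);
    } finally {
      accepting = false;       // never leave the lock permanently held
    }
  });
  return true;
}
```

Because the release lives in finally, a throwing onAccept cannot deadlock later accepts, which is exactly the failure mode the test case above verifies.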
  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): address Copilot review — 4 fixes

  - Enter accept: use buffer.text.length === 0 instead of !trim() to prevent whitespace-only input from triggering suggestion accept
  - Move ref tracking from the render body to useEffect to avoid render-time side effects in StrictMode/concurrent rendering
  - Align PromptSuggestionEvent event.name to 'qwen-code.prompt_suggestion', matching the EVENT_PROMPT_SUGGESTION constant used by the logger
  - Fix onOutcome JSDoc: remove mention of 'suppressed' (handled separately)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): address Copilot review — curated history, type compat, peer version

  - Use curated history (getChat().getHistory(true)) to avoid invalid entries causing API 400 errors in suggestion generation
  - Use a method signature for onSubmit in InputFormProps to maintain bivariant compatibility with existing consumers under strictFunctionTypes
  - Tighten the @qwen-code/qwen-code-core peer dependency to >=0.13.1

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(followup): add prompt cache sharing + speculation engine

  Phase 1 — Forked Query (cache sharing):
  - CacheSafeParams: snapshot of generationConfig (systemInstruction + tools) + curated history + model + version, saved after each successful main turn
  - createForkedChat: isolated GeminiChat sharing the same cache prefix for a DashScope cache_control hit
  - runForkedQuery: single-turn request via the forked chat with JSON schema support
  - suggestionGenerator: uses the forked query when CacheSafeParams are available, falls back to BaseLlmClient.generateJson otherwise
  - GeminiChat.getGenerationConfig(): new getter for cache param snapshots
  - Feature flag: enableCacheSharing (default: false)

  Phase 2 — Speculation (predictive execution):
  - OverlayFs: copy-on-write filesystem for speculation file isolation (/tmp/qwen-speculation/{pid}/{id}/), handles new files + existing files
  - speculationToolGate: tool boundary enforcement using the AST-based shell checker (not the deprecated regex); write tools gated by ApprovalMode (only auto-edit/yolo allow overlay writes)
  - speculation.ts: startSpeculation (on suggestion display), acceptSpeculation (on Tab/Enter — copies the overlay to the real FS, injects history via addHistory), abortSpeculation (on user input/new turn — cleans up the overlay)
  - Custom execution loop: toolRegistry.getTool → tool.build → invocation.execute (bypasses CoreToolScheduler — permission handled by toolGate)
  - ensureToolResultPairing: strips unpaired functionCalls at boundary
  - Boundary-aware tool result preservation: keeps executed tool results even when the boundary truncates remaining calls
  - Feature flag: enableSpeculation (default: false)

  Telemetry:
  - SpeculationEvent: outcome, turns_used, files_written, tool_use_count, duration_ms, boundary_type, had_pipelined_suggestion
  - logSpeculation logger function

  Security:
  - Write tools only allowed in auto-edit/yolo mode during speculation
  - Shell commands gated by isShellCommandReadOnlyAST (AST parser)
  - Unknown/MCP tools always hit the boundary (safe default)
  - structuredClone throughout for cache param isolation

  4 rounds of adversarial audit, 20+ issues found and fixed.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): address Copilot review — curated history, type compat, peer version

  - Move web_fetch/web_search from SAFE_READ_ONLY to BOUNDARY tools (they require user confirmation for network requests)
  - Add overlay read path resolution for read tools (resolveReadPaths) so speculative reads see overlay-written files
  - Wire the enableCacheSharing setting into generatePromptSuggestion
  - Fix the esbuild comment to not hardcode the webui version

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(speculation): use index-based tracking for boundary tool pairing

  Track executed function calls by order (the first N matching functionResponses.length) instead of by name. Fixes incorrect pairing when the model emits multiple calls with the same tool name.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(speculation): handle undefined functionCall.name + wrap rewritePathArgs

  - Skip functionCall parts with a missing name instead of using a non-null assertion
  - Wrap rewritePathArgs in try/catch — treat a path rewrite failure as a boundary

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(followup): pipelined suggestion, UI rendering, dismiss abort

  - Pipelined suggestion: after speculation completes, generate the next suggestion using augmented context. Promoted on accept.
  - UI rendering: completed speculation results are rendered via historyManager.
  - Dismiss abort: typing/pasting calls dismissPromptSuggestion → clears promptSuggestion → useEffect aborts the running speculation immediately.
  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): clear cache on reset, truncate history, fix test + comment

  - Clear CacheSafeParams on startChat/resetChat to prevent cross-session leakage
  - Truncate history to 40 entries before the deep clone in saveCacheSafeParams to reduce CPU/memory overhead on long sessions
  - Update a stale comment about the speculation dismiss lifecycle
  - Add an onAccept assertion to the accept test with a proper microtask flush

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs(design): add prompt suggestion design documentation

  - prompt-suggestion-design.md: architecture, generation, filtering, state management, keyboard interaction, telemetry, feature flags
  - speculation-design.md: copy-on-write overlay, tool gate security, boundary handling, pipelined suggestion, forked query cache sharing
  - prompt-suggestion-implementation.md: implementation status, test coverage, audit history, Claude Code alignment tracking

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(overlay): align catch comment with silent behavior

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): wire augmented context into pipelined suggestion + guard Tab/Right

  - The pipelined suggestion now includes the accepted suggestion text and the speculated model response as context for the next prediction
  - Tab/ArrowRight handlers only preventDefault when onAcceptFollowup is provided, preventing key interception without a wired callback

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(speculation): filter thought parts + add filePath to path keys

  - Skip thought/reasoning parts from model responses to prevent leaking internal reasoning into speculated history
  - Add 'filePath' to the path rewrite key list for LSP and other tools that use camelCase argument names

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(overlay): resolve relative paths against realCwd, not process.cwd

  Relative tool paths are now resolved against the overlay's realCwd before computing the relative path, preventing incorrect outside-cwd detection when process.cwd() differs from config.getCwd().

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs(design): fix 4 doc-code inconsistencies

  - Guard conditions: clarify 13 code checks vs 11 table categories, separate feature flags from the guard block, add the streaming transition
  - Filter rules: 14 → 12 (actual count in code and table)
  - BOUNDARY_TOOLS: add todo_write + exit_plan_mode to the doc table
  - SpeculationEvent: 8 → 7 fields (matching code)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): turns_used metric + reuse SUGGESTION_PROMPT + reduce clones

  - turns_used: count only model messages (not all Content entries) to accurately reflect LLM round-trips instead of an inflated 3x count
  - Pipelined suggestion: reuse the exported SUGGESTION_PROMPT from suggestionGenerator instead of a degraded local copy, ensuring consistent quality (EXAMPLES, NEVER SUGGEST rules included)
  - createForkedChat: replace a redundant structuredClone with shallow copies, since the params are already deep-cloned snapshots

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(followup): speculation UI tool rendering + speculationModel setting

  - Speculation UI: render tool calls as tool_group HistoryItems with structured name/description/result instead of plain text only
  - speculationModel setting: allows using a cheaper/faster model for speculation and the pipelined suggestion. Leave empty to use the main model. Passed through startSpeculation → runSpeculativeLoop → pipelined.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs(design): sync docs with latest code changes

  - Add the speculationModel setting to the feature flags table
  - Document tool_group UI rendering in the speculation accept flow
  - Fix createForkedChat: deep clone → shallow copy (already cloned snapshots)
  - Document pipelined suggestion SUGGESTION_PROMPT reuse
  - Add Model Override and UI Rendering sections to speculation-design
  - Update line counts to match actual file sizes

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test(followup): add unit tests for overlayFs, toolGate, forkedQuery

  - overlayFs (15 tests): COW write, read resolution, apply, cleanup, path traversal
  - speculationToolGate (24 tests): tool categories, approval mode gating, shell AST, path rewrite
  - forkedQuery (6 tests): cache params save/get/clear, deep clone, version detection

  Total: 27 → 173 tests

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test(followup): P0-P2 test coverage for speculation + controller + toolGate

  - speculation.test.ts (7 tests) — ensureToolResultPairing: empty, no calls, paired, unpaired text+call, unpaired call-only, user-ending, empty parts
  - followupState.test.ts (+8 tests = 15 total) — onOutcome: accepted/tab, ignored/dismiss, error caught, no-op when cleared; clear() resets the accepting lock, allowing re-accept; double accept blocked by debounce; setSuggestion replaces the pending timer
  - speculationToolGate.test.ts (+3 tests = 27 total) — resolveReadPaths: overlay path after write, unchanged when not written; rewritePathArgs: path key coverage

  Total: 173 → 190 tests

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test(followup): smoke tests + P0-P2 coverage gaps

  - smoke.test.ts (21 tests): E2E verification across modules — filter against realistic LLM outputs (9 good + 7 bad + reason check); OverlayFs full round-trip (write → read → apply → verify); ToolGate → OverlayFs integration (write redirect → read resolve); CacheSafeParams lifecycle (save → mutate → isolation → clear); ensureToolResultPairing orphaned functionCalls
  - followupState.test.ts (+8 tests) — onOutcome: accepted/tab, ignored/dismiss, error caught, no-op when cleared; clear() resets the accepting lock; double accept debounce; setSuggestion replaces the pending timer
  - speculationToolGate.test.ts (+3 tests) — resolveReadPaths through the overlay after a write; path key coverage for rewritePathArgs

  Export ensureToolResultPairing for testing.

  Total: 190 → 211 tests

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): dismiss aborts suggestion, boundary skip inject, parentSignal check

  - dismissPromptSuggestion now also aborts suggestionAbortRef to prevent a race between dismiss and an in-flight startSpeculation
  - Boundary speculation: skip acceptSpeculation (which injects history) and fall through to the normal addMessage to avoid duplicate user turns
  - startSpeculation: check parentSignal.aborted upfront before starting
  - Speculation rendering: use an index-based loop instead of indexOf O(n²)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs(design): fix speculation accept diagram — boundary skips inject

  The architecture diagram now shows the branching logic: completed speculations go through acceptSpeculation (inject + render), while boundary speculations are discarded and the query is submitted fresh via addMessage.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(followup): enable cache sharing by default

  enableCacheSharing now defaults to true. This is a pure cost optimization with no behavioral change — suggestion generation uses the forked query path (sharing the main conversation's prompt cache prefix) when CacheSafeParams are available.
  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): aborted parent skips loop, acceptSpeculation try/finally, doc sync

  - startSpeculation: return the aborted state immediately when parentSignal is already aborted, without creating the overlay or starting the loop
  - acceptSpeculation: wrap in try/finally to guarantee overlay cleanup even if applyToReal or addHistory throws
  - Doc: enableCacheSharing default false → true (matches code)
  - Doc: update the test count table (7 → 15 followupState, add 6 new files)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): remove debug logs, add function calling fallback for non-FC models

  - Remove all followup-debug process.stderr.write logs
  - Add a direct text fallback in generateViaBaseLlm when generateJson returns {} (the model doesn't support function calling, e.g., glm-5.1)
  - Add CJK text support in the filter: skip the whitespace-based word count for Chinese/Japanese/Korean text and use character count instead

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(followup): add suggestionModel setting for faster suggestion generation

  The new setting `suggestionModel` allows using a smaller/faster model (e.g., qwen-turbo) for prompt suggestion generation instead of the main conversation model. Reduces suggestion latency significantly.

  Passed through: settings → AppContainer → generatePromptSuggestion → generateViaForkedQuery / generateViaBaseLlm (both paths).

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(followup): suggestionModel setting, /stats tracking, /about display

  - suggestionModel: new setting to use a faster model for suggestion generation (e.g., qwen3.5-flash instead of the main model glm-5.1)
  - /stats: suggestion API calls now report usage to UiTelemetryService, so token consumption appears in the /stats model breakdown
  - /about: shows a Suggestion Model field (configured or main model)

  Also:
  - Function calling fallback for non-FC models (direct text generation)
  - CJK text support in the word count filter (character-based for Chinese)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* i18n: add Suggestion Model translations for /about display

  en: Suggestion Model | zh: 建议模型 | ja: 提案モデル | de: Vorschlagsmodell | pt: Modelo de Sugestão | ru: Модель предложений

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): always use generateContent for suggestion (not generateJson)

  generateJson doesn't expose usageMetadata, so /stats can't track suggestion model tokens. Switch to direct generateContent, which always returns usage data. This also simplifies the code by removing the function-calling + fallback dual path.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): fix /stats tracking — use ApiResponseEvent constructor

  Use the ApiResponseEvent class constructor with a proper response_id and override event.name to match the UiEvent type for the UiTelemetryService switch statement. This ensures suggestion model token usage appears in the /stats model output.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* i18n: fix Chinese translation for Suggestion Model

  "建议模型" → "提示建议模型" to avoid ambiguity.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(followup): merge suggestionModel + speculationModel into fastModel

  A single unified setting for all background tasks: suggestion generation, speculation, pipelined suggestions, and future background tasks. Users only need to understand one concept: the main model for conversation, the fast model for background tasks.

  - Remove: suggestionModel, speculationModel
  - Add: fastModel (ui.fastModel in settings.json)
  - Update /about display: "Fast Model" with i18n translations
  - Update all 6 locale files (en/zh/ja/de/pt/ru)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(settings): move fastModel to top-level (parallel to model)

  fastModel is an independent model concept, not a property of the main model. Move it from model.fastModel to the top-level settings.fastModel.

  Config: { "fastModel": "qwen3.5-flash", "model": { "name": "glm-5.1" } }

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): report usage in both forkedQuery and baseLlm paths

  The forkedQuery path (used when enableCacheSharing=true) was not reporting token usage to UiTelemetryService, so /stats model didn't show the fast model. Now both paths report usage.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(cli): add /model --fast command to set fast model

  Usage:
  - /model --fast qwen3.5-flash — set the fast model
  - /model --fast — show the current fast model
  - /model — open the model selection dialog (unchanged)

  Saves to user settings (SettingScope.User).
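The fastModel fallback described above reduces to a one-line resolution rule; a sketch against a simplified settings shape (field names follow the config example above, the function name is hypothetical):

```typescript
interface Settings {
  fastModel?: string;          // top-level, parallel to model
  model?: { name?: string };   // main conversation model
}

// Background tasks (suggestion generation, speculation, pipelined
// suggestions) prefer fastModel; an empty setting falls back to the
// main conversation model.
function resolveBackgroundModel(settings: Settings, defaultModel: string): string {
  return settings.fastModel ?? settings.model?.name ?? defaultModel;
}
```

For example, with `{ "fastModel": "qwen3.5-flash", "model": { "name": "glm-5.1" } }`, background tasks run on qwen3.5-flash while the conversation stays on glm-5.1.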
  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs(design): update to fastModel (replace suggestionModel/speculationModel)

  - prompt-suggestion-design.md: speculationModel → fastModel (top-level)
  - speculation-design.md: Model Override → Fast Model, update description
  - prompt-suggestion-implementation.md: update settings description

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(cli): /model --fast opens model selection dialog for fast model

  When called without a model name, /model --fast now opens the same model selection dialog used by /model, but selecting a model saves it as fastModel instead of switching the main model.

  - useModelCommand: add isFastModelMode state
  - ModelDialog: intercept selection in fast model mode, save to fastModel
  - DialogManager: pass the isFastModelMode prop to ModelDialog
  - types.ts: add 'fast-model' dialog type

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): pass resolved model (not undefined) to runForkedQuery

  model: modelOverride → model: model (which has the fallback applied)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(cli): /model --fast defaults to current fast model in dialog

  When opening the model selection dialog via /model --fast, the currently configured fastModel is pre-selected instead of the main model.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(cli): add --fast tab completion for /model command

  /model <Tab> now shows --fast as a completion option with a description.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(schema): regenerate settings.schema.json with new followup settings

  Adds enableCacheSharing, enableSpeculation, and fastModel to the generated JSON schema so CI validation passes.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(test): update tests for new Fast Model field in system info

  Add "Fast Model" to the expected labels in the systemInfoFields and bugCommand tests to match the new field added to /about and bug report output.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* ci: trigger PR synchronize event

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address Copilot review comments (batch 4)

  - modelCommand: use getPersistScopeForModelSelection for fastModel, return a meaningful info message instead of empty content
  - ModelDialog: handle the $runtime|authType|modelId format in fast-model mode
  - forkedQuery: return structuredClone from getCacheSafeParams
  - client: fix a stale comment about history truncation order
  - speculation: detect abort in the .then() handler, set 'aborted' status, and clean up the overlay to prevent leaks
  - docs: update the test count table

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs(users): add followup suggestions user manual

  - New feature page: followup-suggestions.md covering usage, keybindings, fast model configuration, settings, and quality filters
  - commands.md: add a /model --fast command reference
  - settings.md: add enableFollowupSuggestions, enableCacheSharing, enableSpeculation, and fastModel settings documentation
  - _meta.ts: register the new page in navigation

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs(users): audit fixes for followup suggestions documentation

  - followup-suggestions.md: add the 300ms delay, WebUI support, plan mode guard, non-interactive guard, slash commands as single-word, meta/error filters, character limit
  - settings.md: move fastModel next to the model section, add a /model --fast cross-reference and a link to the feature page
  - overview.md: add followup suggestions to the feature list
  - i18n: add missing translations for 'Set fast model for background tasks' and 'Fast model updated.' in all 6 locales

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address Copilot review comments (batch 5)

  - modelCommand: remove the duplicate info message (keep addItem only)
  - followup-suggestions.md: clarify that WebUI requires host app wiring
  - speculation-design.md: fix the abort telemetry description
  - i18n: add missing translations for fast model strings

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(cli): remove duplicate message in /model --fast command

  Use the return message instead of addItem + an empty return to avoid a blank INFO line in history. Also handle a missing settings service.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(i18n): remove unused 'Fast model updated.' translations

  The /model --fast command now returns the model name directly instead of using this string. Remove the dead translations.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): disable thinking mode for suggestion and speculation

  Forked queries inherit the main conversation's generationConfig, which may have thinkingConfig enabled. This wastes tokens and adds latency for background tasks that don't need reasoning.
Explicitly set thinkingConfig.includeThoughts=false in both paths: - createForkedChat (covers forked query + speculation) - generateViaBaseLlm (non-cache-sharing fallback) Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * docs: document thinking mode auto-disable for background tasks - User docs: note that thinking is auto-disabled for suggestions/speculation - Design docs: detail thinkingConfig override in both forked query and BaseLlm paths, explain why cache hits are unaffected Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> --------- Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com> Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com> Co-authored-by: jinjing.zzj <jinjing.zzj@alibaba-inc.com> Co-authored-by: yiliang114 <1204183885@qq.com>
1 parent e855229 commit 3bce84d

64 files changed: +4951 −41 lines
Lines changed: 211 additions & 0 deletions
@@ -0,0 +1,211 @@
# Prompt Suggestion (NES) Design

> Predicts what the user would naturally type next after the AI completes a response, showing it as ghost text in the input prompt.
>
> Implementation status: `prompt-suggestion-implementation.md`. Speculation engine: `speculation-design.md`.

## Overview

A **prompt suggestion** (Next-step Suggestion / NES) is a short prediction (2-12 words) of the user's next input, generated by an LLM call after each AI response. It appears as ghost text in the input prompt. The user can accept it with Tab/Enter/Right Arrow or dismiss it by typing.
## Architecture

```
┌─────────────────────────────────────────────────────────────┐
│ AppContainer (CLI)                                          │
│                                                             │
│ Responding → Idle transition                                │
│       │                                                     │
│       ▼                                                     │
│ ┌─────────────────────────────────────────────────────┐     │
│ │ Guard Conditions (11 categories)                    │     │
│ │ settings, interactive, sdk, plan mode, dialogs,     │     │
│ │ elicitation, API error                              │     │
│ └────────────────────┬────────────────────────────────┘     │
│                      │                                      │
│                      ▼                                      │
│ ┌─────────────────────────────────────────────────────┐     │
│ │ generatePromptSuggestion()                          │     │
│ │                                                     │     │
│ │   ┌─── CacheSafeParams available? ───┐              │     │
│ │   │                                  │              │     │
│ │   ▼ YES                           NO ▼              │     │
│ │ runForkedQuery()    BaseLlmClient.generateJson()    │     │
│ │ (cache-aware)       (standalone fallback)           │     │
│ │                                                     │     │
│ │ ──── SUGGESTION_PROMPT ────                         │     │
│ │ ──── 12 filter rules ──────                         │     │
│ │ ──── getFilterReason() ────                         │     │
│ └────────────────────┬────────────────────────────────┘     │
│                      │                                      │
│                      ▼                                      │
│ ┌─────────────────────────────────────────────────────┐     │
│ │ FollowupController (framework-agnostic)             │     │
│ │ 300ms delay → show as ghost text                    │     │
│ │                                                     │     │
│ │ Tab   → accept (fill input)                         │     │
│ │ Enter → accept + submit                             │     │
│ │ Right → accept (fill input)                         │     │
│ │ Type  → dismiss + abort speculation                 │     │
│ └─────────────────────────────────────────────────────┘     │
│                                                             │
│ ┌─────────────────────────────────────────────────────┐     │
│ │ Telemetry (PromptSuggestionEvent)                   │     │
│ │ outcome, accept_method, timing, similarity,         │     │
│ │ keystroke, focus, suppression reason, prompt_id     │     │
│ └─────────────────────────────────────────────────────┘     │
└─────────────────────────────────────────────────────────────┘
```
## Suggestion Generation

### LLM Prompt

```
[SUGGESTION MODE: Suggest what the user might naturally type next.]

Your job is to predict what THEY would type - not what you think they should do.
THE TEST: Would they think "I was just about to type that"?

EXAMPLES:
User asked "fix the bug and run tests", bug is fixed → "run the tests"
After code written → "try it out"
Task complete, obvious follow-up → "commit this" or "push it"

Format: 2-12 words, match the user's style. Or nothing.
Reply with ONLY the suggestion, no quotes or explanation.
```
### Filter Rules (12)

| Rule               | Example blocked                                  |
| ------------------ | ------------------------------------------------ |
| done               | "done"                                           |
| meta_text          | "nothing found", "no suggestion", "silence"      |
| meta_wrapped       | "(silence)", "[no suggestion]"                   |
| error_message      | "api error: 500"                                 |
| prefixed_label     | "Suggestion: commit"                             |
| too_few_words      | "hmm" (but allows "yes", "commit", "push" etc.)  |
| too_many_words     | > 12 words                                       |
| too_long           | >= 100 chars                                     |
| multiple_sentences | "Run tests. Then commit."                        |
| has_formatting     | newlines, markdown bold                          |
| evaluative         | "looks good", "thanks" (with \b word boundaries) |
| ai_voice           | "Let me...", "I'll...", "Here's..."              |
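The table above can be sketched as a single pure function that returns the name of the first violated rule, or `null` if the candidate passes. This is a minimal sketch, not the shipped `getFilterReason()` in `suggestionGenerator.ts`: the word lists, allowlist, and regexes here are illustrative stand-ins.

```typescript
// Sketch of the 12-rule filter: return the first violated rule's name,
// or null if the candidate passes. Word lists/thresholds are illustrative.
function getFilterReasonSketch(text: string): string | null {
  const t = text.trim();
  const words = t.split(/\s+/).filter(Boolean);
  const singleWordAllowlist = new Set(['yes', 'commit', 'push']); // illustrative
  if (/^done[.!]?$/i.test(t)) return 'done';
  if (/^(nothing found|no suggestion|silence)$/i.test(t)) return 'meta_text';
  if (/^[([\s]*(silence|no suggestion)[)\]\s]*$/i.test(t)) return 'meta_wrapped';
  if (/^api error/i.test(t)) return 'error_message';
  if (/^suggestion:/i.test(t)) return 'prefixed_label';
  if (words.length === 1 && !singleWordAllowlist.has(words[0].toLowerCase()))
    return 'too_few_words';
  if (words.length > 12) return 'too_many_words';
  if (t.length >= 100) return 'too_long';
  if (/[.!?]\s+\S/.test(t)) return 'multiple_sentences'; // punctuation mid-text
  if (/\n|\*\*/.test(t)) return 'has_formatting';
  if (/\b(looks good|thanks)\b/i.test(t)) return 'evaluative';
  if (/^(let me|i'll|here's)\b/i.test(t)) return 'ai_voice';
  return null; // passed all rules
}
```

Note the ordering: cheap whole-string checks run first, and the returned rule name doubles as the `reason` field for `suppressed` telemetry.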
### Guard Conditions

**AppContainer useEffect (13 checks in code):**

| Guard                | Check                                               |
| -------------------- | --------------------------------------------------- |
| Settings toggle      | `enableFollowupSuggestions`                         |
| Non-interactive      | `config.isInteractive()`                            |
| SDK mode             | `!config.getSdkMode()`                              |
| Streaming transition | `Responding → Idle` (2 checks)                      |
| API error (history)  | `historyManager.history[last]?.type !== 'error'`    |
| API error (pending)  | `!pendingGeminiHistoryItems.some(type === 'error')` |
| Confirmation dialogs | shell + general + loop detection (3 checks)         |
| Permission dialog    | `isPermissionsDialogOpen`                           |
| Elicitation          | `settingInputRequests.length === 0`                 |
| Plan mode            | `ApprovalMode.PLAN`                                 |

**Inside generatePromptSuggestion():**

| Guard              | Check            |
| ------------------ | ---------------- |
| Early conversation | `modelTurns < 2` |

**Separate feature flags (not in guard block):**

| Flag                 | Controls                                                |
| -------------------- | ------------------------------------------------------- |
| `enableCacheSharing` | Whether to use forked query or fallback to generateJson |
| `enableSpeculation`  | Whether to start speculation on suggestion display      |
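Conceptually, the useEffect guard chain reduces to one predicate. The sketch below flattens the relevant CLI state into a hypothetical `GuardInput` shape purely for illustration; the real checks read live hooks and context, not a single object.

```typescript
// Sketch of the guard chain as one predicate. GuardInput is a hypothetical
// flattened view of the state the real AppContainer useEffect reads.
interface GuardInput {
  enableFollowupSuggestions: boolean;
  isInteractive: boolean;
  sdkMode: boolean;
  wasResponding: boolean; // previous streaming state
  isIdle: boolean;        // current streaming state
  lastHistoryType: string | null;
  hasPendingError: boolean;
  dialogOpen: boolean;    // any confirmation/permission dialog
  pendingElicitations: number;
  approvalMode: 'default' | 'plan';
}

function shouldGenerateSuggestion(s: GuardInput): boolean {
  return (
    s.enableFollowupSuggestions &&
    s.isInteractive &&
    !s.sdkMode &&
    s.wasResponding && s.isIdle &&   // the Responding → Idle transition
    s.lastHistoryType !== 'error' && // no API error in history
    !s.hasPendingError &&            // ...or pending items
    !s.dialogOpen &&
    s.pendingElicitations === 0 &&
    s.approvalMode !== 'plan'
  );
}
```

Every guard is a cheap synchronous check, so the expensive LLM call is only reached when all of them pass.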
## State Management

### FollowupState

```typescript
interface FollowupState {
  suggestion: string | null;
  isVisible: boolean;
  shownAt: number; // timestamp for telemetry
}
```

### FollowupController

Framework-agnostic controller shared by CLI (Ink) and WebUI (React):

- `setSuggestion(text)` — 300ms delayed show, null clears immediately
- `accept(method)` — clears state, fires `onAccept` via microtask, 100ms debounce lock
- `dismiss()` — clears state, logs `ignored` telemetry
- `clear()` — hard reset all state + timers
- `Object.freeze(INITIAL_FOLLOWUP_STATE)` prevents accidental mutation
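A minimal sketch of that controller's state machine follows. It is illustrative only: the shipped `followupState.ts` also implements the 100ms accept debounce lock and telemetry, and the zero-delay path here exists only to keep the example synchronous.

```typescript
// Minimal FollowupController sketch: delayed show, accept, dismiss.
// The real controller also debounces accept() and logs telemetry.
type AcceptMethod = 'tab' | 'enter' | 'right';

interface FollowupStateSketch {
  suggestion: string | null;
  isVisible: boolean;
  shownAt: number; // timestamp for telemetry
}

const EMPTY: FollowupStateSketch = Object.freeze({
  suggestion: null, isVisible: false, shownAt: 0,
});

class FollowupControllerSketch {
  private state: FollowupStateSketch = EMPTY;
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private onAccept: (text: string, method: AcceptMethod) => void,
    private delayMs = 300,
  ) {}

  getState(): FollowupStateSketch { return this.state; }

  // null clears immediately; text shows after delayMs (300ms in the CLI)
  setSuggestion(text: string | null): void {
    this.clearTimer();
    if (text === null) { this.state = EMPTY; return; }
    const show = () => {
      this.state = { suggestion: text, isVisible: true, shownAt: Date.now() };
    };
    if (this.delayMs <= 0) { show(); return; } // synchronous path for the test
    this.timer = setTimeout(show, this.delayMs);
  }

  // Accept the visible suggestion; onAccept fires on a microtask so the
  // caller's state update settles before input is filled.
  accept(method: AcceptMethod): boolean {
    const text = this.state.suggestion;
    if (!text || !this.state.isVisible) return false;
    this.state = EMPTY;
    queueMicrotask(() => this.onAccept(text, method));
    return true;
  }

  dismiss(): void { this.clearTimer(); this.state = EMPTY; }

  private clearTimer(): void {
    if (this.timer) { clearTimeout(this.timer); this.timer = null; }
  }
}
```

Because the controller holds no framework types, the same instance can back both the Ink `useFollowupSuggestions` hook and the WebUI React hook.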
## Keyboard Interaction

| Key         | CLI                         | WebUI                                |
| ----------- | --------------------------- | ------------------------------------ |
| Tab         | Fill input (no submit)      | Fill input (no submit)               |
| Enter       | Fill + submit               | Fill + submit (`explicitText` param) |
| Right Arrow | Fill input (no submit)      | Fill input (no submit)               |
| Typing      | Dismiss + abort speculation | Dismiss                              |
| Paste       | Dismiss + abort speculation | Dismiss                              |

### Key Binding Note

The Tab handler checks `key.name === 'tab'` explicitly (not the `ACCEPT_SUGGESTION` matcher) because `ACCEPT_SUGGESTION` also matches Enter, which must fall through to the SUBMIT handler.
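The precedence above can be shown as a small pure dispatcher. This is a sketch only: the real handler lives in `InputPrompt.tsx` and operates on Ink key objects, and the action names here are invented for the example.

```typescript
// Sketch of keyboard precedence while a suggestion is visible.
// 'fill' puts the suggestion into the input; 'fill_submit' also submits;
// 'dismiss' hides it; 'passthrough' lets normal input handling run.
type KeyAction = 'fill' | 'fill_submit' | 'dismiss' | 'passthrough';

function dispatchKey(keyName: string, suggestionVisible: boolean): KeyAction {
  if (!suggestionVisible) return 'passthrough';
  switch (keyName) {
    case 'tab':
    case 'right':
      return 'fill';        // accept without submitting
    case 'return':
      return 'fill_submit'; // accept and submit in one keypress
    default:
      return 'dismiss';     // any typing dismisses the ghost text
  }
}
```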
## Telemetry

### PromptSuggestionEvent

| Field                      | Type                        | Description                         |
| -------------------------- | --------------------------- | ----------------------------------- |
| outcome                    | accepted/ignored/suppressed | Final outcome                       |
| prompt_id                  | string                      | Default: 'user_intent'              |
| accept_method              | tab/enter/right             | How user accepted                   |
| time_to_accept_ms          | number                      | Time from shown to accept           |
| time_to_ignore_ms          | number                      | Time from shown to dismiss          |
| time_to_first_keystroke_ms | number                      | Time to first keystroke while shown |
| suggestion_length          | number                      | Character count                     |
| similarity                 | number                      | 1.0 for accept, 0.0 for ignore      |
| was_focused_when_shown     | boolean                     | Terminal had focus                  |
| reason                     | string                      | For suppressed: filter rule name    |

### SpeculationEvent

| Field                    | Type                    | Description               |
| ------------------------ | ----------------------- | ------------------------- |
| outcome                  | accepted/aborted/failed | Speculation result        |
| turns_used               | number                  | API round-trips           |
| files_written            | number                  | Files in overlay          |
| tool_use_count           | number                  | Tools executed            |
| duration_ms              | number                  | Wall-clock time           |
| boundary_type            | string                  | What stopped speculation  |
| had_pipelined_suggestion | boolean                 | Next suggestion generated |
## Feature Flags and Settings

| Setting                     | Type    | Default | Description                                                                      |
| --------------------------- | ------- | ------- | -------------------------------------------------------------------------------- |
| `enableFollowupSuggestions` | boolean | true    | Master toggle for prompt suggestions                                             |
| `enableCacheSharing`        | boolean | true    | Use cache-aware forked queries                                                   |
| `enableSpeculation`         | boolean | false   | Predictive execution engine                                                      |
| `fastModel` (top-level)     | string  | ""      | Model for all background tasks (empty = use main model). Set via `/model --fast` |
200+
### Thinking Mode
201+
202+
Thinking/reasoning is explicitly disabled (`thinkingConfig: { includeThoughts: false }`) for all background task paths:
203+
204+
- **Forked query path** (`createForkedChat`) — overrides `thinkingConfig` in the cloned `generationConfig`, covering both suggestion generation and speculation
205+
- **BaseLlm fallback path** (`generateViaBaseLlm`) — per-request config overrides base content generator's thinking settings
206+
207+
This is safe because:
208+
209+
- Cache prefix is determined by systemInstruction + tools + history, not `thinkingConfig` — cache hits are unaffected
210+
- All backends (Gemini, OpenAI-compatible, Anthropic) handle `includeThoughts: false` by omitting the thinking field — no API errors on models without thinking support
211+
- Suggestion generation and speculation don't benefit from reasoning tokens
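In both paths the override amounts to cloning the inherited config and pinning the thinking flag. A sketch, with a simplified `GenerationConfigLike` shape standing in for the real generation config type:

```typescript
// Sketch: clone the main conversation's generationConfig and disable
// thinking for the background (suggestion/speculation) request.
interface GenerationConfigLike {
  temperature?: number;
  thinkingConfig?: { includeThoughts: boolean };
}

function withThinkingDisabled(base: GenerationConfigLike): GenerationConfigLike {
  return {
    ...structuredClone(base),                   // never mutate the live config
    thinkingConfig: { includeThoughts: false }, // background tasks skip reasoning
  };
}
```

Cloning before overriding matters: the main conversation keeps its own thinking settings untouched while the fork diverges.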
Lines changed: 85 additions & 0 deletions
@@ -0,0 +1,85 @@
# Prompt Suggestion Implementation Status

> Tracks the implementation status of the prompt suggestion (NES) feature across all packages.

## Core Module (`packages/core/src/followup/`)

| Component                | Status  | Lines | Description                                                   |
| ------------------------ | ------- | ----- | ------------------------------------------------------------- |
| `followupState.ts`       | ✅ Done | ~230  | Framework-agnostic controller with timer/debounce             |
| `suggestionGenerator.ts` | ✅ Done | ~260  | LLM generation + 12 filter rules + forked query support       |
| `forkedQuery.ts`         | ✅ Done | ~240  | CacheSafeParams + createForkedChat + runForkedQuery           |
| `overlayFs.ts`           | ✅ Done | ~140  | Copy-on-write overlay filesystem                              |
| `speculationToolGate.ts` | ✅ Done | ~150  | Tool boundary enforcement with AST shell parser               |
| `speculation.ts`         | ✅ Done | ~540  | Speculation engine with pipelined suggestion + model override |

## CLI Integration (`packages/cli/`)

| Component                    | Status  | Description                                                |
| ---------------------------- | ------- | ---------------------------------------------------------- |
| `AppContainer.tsx`           | ✅ Done | Suggestion generation, speculation lifecycle, UI rendering |
| `InputPrompt.tsx`            | ✅ Done | Tab/Enter/Right Arrow acceptance, dismiss + abort          |
| `Composer.tsx`               | ✅ Done | Props threading                                            |
| `UIStateContext.tsx`         | ✅ Done | promptSuggestion + dismissPromptSuggestion                 |
| `useFollowupSuggestions.tsx` | ✅ Done | React hook with telemetry + keystroke tracking             |
| `settingsSchema.ts`          | ✅ Done | 3 feature flags + fastModel setting                        |
| `settings.schema.json`       | ✅ Done | VSCode settings schema                                     |

## WebUI Integration (`packages/webui/`)

| Component                   | Status  | Description                                 |
| --------------------------- | ------- | ------------------------------------------- |
| `InputForm.tsx`             | ✅ Done | Tab/Enter/Right Arrow + explicitText submit |
| `useFollowupSuggestions.ts` | ✅ Done | React hook with onOutcome support           |
| `followup.ts`               | ✅ Done | Subpath entry                               |
| `components.css`            | ✅ Done | Ghost text styling                          |
| `vite.config.followup.ts`   | ✅ Done | Separate build config                       |

## Telemetry (`packages/core/src/telemetry/`)

| Component               | Status  | Description          |
| ----------------------- | ------- | -------------------- |
| `PromptSuggestionEvent` | ✅ Done | 10 fields            |
| `SpeculationEvent`      | ✅ Done | 7 fields             |
| `logPromptSuggestion()` | ✅ Done | OpenTelemetry logger |
| `logSpeculation()`      | ✅ Done | OpenTelemetry logger |
## Test Coverage

| Test File                     | Tests | Description                                                     |
| ----------------------------- | ----- | --------------------------------------------------------------- |
| `followupState.test.ts`       | 14    | Controller timer, debounce, accept callback, onOutcome, clear   |
| `suggestionGenerator.test.ts` | 16    | All 12 filter rules + edge cases + false positives              |
| `overlayFs.test.ts`           | 15    | COW write, read resolution, apply, cleanup, path traversal      |
| `speculationToolGate.test.ts` | 27    | Tool categories, approval mode, shell AST, path rewrite         |
| `forkedQuery.test.ts`         | 6     | Cache params save/get/clear, deep clone, version detection      |
| `speculation.test.ts`         | 7     | ensureToolResultPairing edge cases                              |
| `smoke.test.ts`               | 21    | Cross-module E2E: filter + overlay + toolGate + cache + pairing |
| `InputPrompt.test.tsx`        | 4     | Tab, Enter+submit, Right Arrow, completion guard                |

## Audit History

| Round           | Issues Found | Issues Fixed                                             |
| --------------- | ------------ | -------------------------------------------------------- |
| R1-R4           | 10           | 10 (rule engine → LLM, state simplification)             |
| R5-R6           | 2            | 2 (Enter keybinding conflict, Right Arrow telemetry)     |
| R7-R8           | 3            | 3 (WebUI telemetry, dead type, test coverage)            |
| R9              | 0            | — (convergence)                                          |
| R10-R11         | 1            | 1 (historyManager dep)                                   |
| R12-R13         | 1            | 1 (evaluative regex word boundaries)                     |
| Phase 1+2 R1-R4 | 20+          | 20+ (permission bypass, overlay safety, race conditions) |
| **Total**       | **37+**      | **37+**                                                  |

## Claude Code Alignment

| Feature                          | Alignment | Notes                                 |
| -------------------------------- | --------- | ------------------------------------- |
| Prompt text                      | 100%      | Identical (brand name only)           |
| 12 filter rules                  | 100%+     | \b word boundaries improvement        |
| UI interaction (Tab/Enter/Right) | 100%      |                                       |
| Guard conditions                 | 100%      | 13 checks                             |
| Telemetry                        | 100%      | 10+7 fields                           |
| Cache sharing                    |           | DashScope cache_control               |
| Speculation                      |           | COW overlay + tool gating             |
| Pipelined suggestion             |           | Generated after speculation completes |
| State management                 | 100%+     | Controller pattern, Object.freeze     |
