diff --git a/.claude-plugin/marketplace.json b/.claude-plugin/marketplace.json
index 1c1c0d6..9affa96 100644
--- a/.claude-plugin/marketplace.json
+++ b/.claude-plugin/marketplace.json
@@ -6,7 +6,7 @@ },
   "metadata": {
     "description": "Professional AI coding configurations, agents, skills, and context for Claude Code and Cursor",
-    "version": "9.2.0",
+    "version": "9.3.0",
     "license": "MIT",
     "repository": "https://github.com/TechNickAI/ai-coding-config"
   },
@@ -15,7 +15,7 @@
       "name": "ai-coding-config",
       "source": "./plugins/core",
       "description": "Commands, agents, skills, and context for AI-assisted development workflows",
-      "version": "8.6.0",
+      "version": "8.7.0",
       "tags": ["commands", "agents", "skills", "workflows", "essential"]
     }
   ]
diff --git a/plugins/core/commands/address-pr-comments.md b/plugins/core/commands/address-pr-comments.md
index 9efb808..4c83b68 100644
--- a/plugins/core/commands/address-pr-comments.md
+++ b/plugins/core/commands/address-pr-comments.md
@@ -2,7 +2,7 @@
 description: Triage and address PR comments from code review bots intelligently
 argument-hint: [pr-number]
 model: sonnet
-version: 1.5.0
+version: 1.6.0
 ---
 
 # Address PR Comments
@@ -17,8 +17,7 @@
 articulate why the bot is mistaken given context it lacks.
 
 Read @rules/code-review-standards.mdc for patterns where bot suggestions typically
 don't apply. Use these to identify incorrect suggestions - explain why the bot is wrong in
-this specific case, not just that the issue is "minor" or "not blocking."
-
+this specific case, not just that the issue is "minor" or "not blocking."
 
 /address-pr-comments - Auto-detect PR from current branch
@@ -29,6 +28,26 @@
 Use provided PR number, or detect from current branch. Exit if no PR exists.
 
+
+Match thoroughness to the PR's actual complexity, not line count. A 500-line generated
+migration is trivial. A 20-line auth change needs careful attention.
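The PR detection step above ("use provided PR number, or detect from current branch") can be sketched as a small helper. This is an illustrative sketch, not part of the command spec: `resolve_pr` is a hypothetical name, and it assumes an authenticated `gh` CLI run inside the repository.

```shell
#!/usr/bin/env bash
# Sketch: resolve the PR number from an explicit argument, falling back to the
# PR associated with the currently checked-out branch. Exits non-zero (via gh)
# if no PR exists for the branch.
set -euo pipefail

resolve_pr() {
  local arg="${1:-}"
  if [ -n "$arg" ]; then
    echo "$arg" # explicit PR number wins
  else
    # gh resolves the PR for the checked-out branch when no number is given
    gh pr view --json number --jq '.number'
  fi
}

pr="$(resolve_pr 42)" # with no argument, this would query gh for the branch's PR
echo "Working on PR #$pr"
```

Passing the number explicitly skips the network round-trip entirely, which is why the argument takes priority.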
+
+Assess complexity by:
+
+- Conceptual scope: Single focused change vs. multiple interrelated concerns
+- Risk/blast radius: Does it touch auth, payments, data migrations, core abstractions?
+- Novelty: Well-trodden patterns vs. new architectural territory
+- Cross-cutting impact: Isolated change vs. affects multiple systems
+
+Simple changes (rename, config tweak, obvious bug fix): Process comments quickly, skip
+productive-waiting, get to completion fast.
+
+Complex changes (new patterns, security-sensitive, architectural): Take time to
+understand context, explore documentation impacts, consider creating follow-up issues.
+
+The goal is always getting the PR merged. Don't let thoroughness become an excuse for
+delay on straightforward changes.
+
 Check if the PR has merge conflicts with its base branch before processing comments.
 Conflicts block merging and should be resolved first.
 
@@ -37,13 +56,13 @@
 Detect conflicts via `gh pr view {number} --json mergeable,mergeStateStatus`. If
 mergeable is false, fetch the base branch and rebase or merge to resolve conflicts.
 
 When resolving conflicts:
+
 - Preserve the intent of both sets of changes
 - Keep bot comments in context - some may become obsolete after conflict resolution
 - Push the resolved changes before continuing with comment review
 
-If conflicts involve complex decisions (architectural changes, competing features),
-flag for user attention rather than auto-resolving.
-
+If conflicts involve complex decisions (architectural changes, competing features), flag
+for user attention rather than auto-resolving.
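The conflict check and resolution flow above can be sketched as a pair of helpers. A minimal sketch, assuming an authenticated `gh` and a clean working tree; the function names are illustrative, and `mergeable` values come from gh's documented JSON output (`MERGEABLE`, `CONFLICTING`, `UNKNOWN`).

```shell
#!/usr/bin/env bash
# Sketch: detect merge conflicts before processing comments, and resolve
# against the base branch only when the PR is actually CONFLICTING.
set -euo pipefail

merge_state() {
  # Returns MERGEABLE, CONFLICTING, or UNKNOWN for the given PR
  gh pr view "$1" --json mergeable --jq '.mergeable'
}

resolve_if_conflicting() {
  local pr="$1" state="$2"
  if [ "$state" = "CONFLICTING" ]; then
    local base
    base="$(gh pr view "$pr" --json baseRefName --jq '.baseRefName')"
    git fetch origin "$base"
    git rebase "origin/$base"       # or: git merge "origin/$base"
    git push --force-with-lease     # push resolved changes before continuing
  fi
}
```

`--force-with-lease` keeps the push safe if the remote branch moved while you were resolving; complex conflicts still get flagged to the user instead of running this helper.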
 
 If the branch name starts with `hotfix/`, switch to expedited review mode:
@@ -51,51 +70,96 @@ If the branch name starts with `hotfix/`, switch to expedited review mode:
 
 - Focus on security vulnerabilities and bugs that could break production
 - Speed and correctness take priority over polish
 - One pass through comments, then push fixes immediately
-- Style and refactoring suggestions get declined with "hotfix - addressing critical issues only"
+- Style and refactoring suggestions get declined with "hotfix - addressing critical
+  issues only"
 
 Announce hotfix mode at start, explaining that this is an expedited review focusing on
-security and correctness.
-
+security and correctness.
 
 Code review bots comment at different API levels. Fetch from both endpoints:
 
-PR-level comments (issues endpoint):
-`gh api repos/{owner}/{repo}/issues/{pr}/comments`
+PR-level comments (issues endpoint): `gh api repos/{owner}/{repo}/issues/{pr}/comments`
 Claude Code Review posts here. Username is `claude[bot]`. Only address the most recent
 Claude review - older ones reflect outdated code state.
 
 Line-level review comments (pulls endpoint):
-`gh api repos/{owner}/{repo}/pulls/{pr}/comments`
-Cursor Bugbot posts here as inline comments on specific code lines. Username is
-`cursor[bot]`. Address all Cursor comments - each flags a distinct location.
+`gh api repos/{owner}/{repo}/pulls/{pr}/comments` Multiple bots post inline comments on
+specific code lines. Address all line-level bot comments - each flags a distinct
+location.
+
+Supported bots and their usernames:
+
+- `claude[bot]` - Claude Code Review (PR-level)
+- `cursor[bot]` - Cursor Bugbot (line-level)
+- `chatgpt-codex-connector[bot]` - OpenAI Codex (line-level)
+- `greptile[bot]` - Greptile (line-level or PR-level)
+
+New bots may appear - any username containing `[bot]` that posts code review comments
+should be processed. Check the comment body structure to determine if it's a code
+review.
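Fetching both comment streams and filtering to bot authors can be sketched as below. The helper names are illustrative; `gh api` expands the `{owner}`/`{repo}` placeholders from the current repository, and the `[bot]` substring match is the same rule the doc uses for identifying bots.

```shell
#!/usr/bin/env bash
# Sketch: pull PR-level and line-level comments, keeping only bot authors.
set -euo pipefail

is_bot() {
  # Any username containing "[bot]" is treated as a review bot
  case "$1" in
    *"[bot]"*) return 0 ;;
    *) return 1 ;;
  esac
}

fetch_bot_comments() {
  local pr="$1"
  # PR-level comments (issues endpoint) - Claude Code Review posts here
  gh api "repos/{owner}/{repo}/issues/$pr/comments" \
    --jq '.[] | select(.user.login | contains("[bot]")) | {id, user: .user.login, body}'
  # Line-level comments (pulls endpoint) - Cursor, Codex, Greptile post here
  gh api "repos/{owner}/{repo}/pulls/$pr/comments" \
    --jq '.[] | select(.user.login | contains("[bot]")) | {id, user: .user.login, path, body}'
}
```

Anything that fails `is_bot` is a human comment and gets flagged for user attention rather than auto-addressed.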
 
 You can also use:
+
 - `gh pr view {number} --comments` for PR-level comments
 - `gh api repos/{owner}/{repo}/pulls/{pr}/reviews` for review status
 
-Identify bot comments by author username. Human comments require different handling -
-flag them for user attention rather than auto-addressing.
+Identify bot comments by author username containing `[bot]`. Human comments require
+different handling - flag them for user attention rather than auto-addressing.
+
 Process bot feedback incrementally as each bot completes. When one bot finishes, address
 its comments immediately while others are still running. Claude Code Review typically
-completes in 1-2 minutes. Cursor Bugbot takes 3-10 minutes. Process fast bots first
-rather than waiting for slow ones.
+completes in 1-2 minutes. Cursor Bugbot and Codex take 3-10 minutes. Greptile can take
+up to 15 minutes. Process fast bots first rather than waiting for slow ones.
 
 Poll check status with `gh pr checks --json name,state,bucket`. Review bots include
-claude-review, Cursor Bugbot, and greptile.
+claude-review, Cursor Bugbot, chatgpt-codex-connector, and greptile.
+
+Maximize async throughput: while waiting for slow bots, work on other productive tasks
+in parallel. Only block when you need bot results to continue. See productive-waiting
+for ideas.
 
-When bots are still pending, sleep adaptively based on which bots remain. If only Claude
-is pending, sleep 30-60 seconds. If Cursor Bugbot is pending, sleep 2-3 minutes. Check
-status after each sleep and process any newly-completed bots before sleeping again.
+Poll bot status every 60-90 seconds while waiting. Check between productive-waiting
+activities rather than sleeping idle. If all remaining bots are slow (Greptile, Codex),
+extend to 2-3 minute intervals to reduce API calls.
 
 After pushing fixes, re-check for merge conflicts (the base branch may have advanced
 while you were working) and return to polling since bots will re-analyze.
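The polling cadence above can be sketched as a small interval chooser plus a loop. A sketch under stated assumptions: `poll_interval` is a hypothetical helper, the bot-name substrings mirror the check names listed above, and the loop itself is shown commented because it requires a live PR.

```shell
#!/usr/bin/env bash
# Sketch: pick a poll interval from which review bots are still pending.
# Fast bots pending -> short interval; only slow bots left -> longer interval.
set -euo pipefail

poll_interval() {
  local pending_bots="$1" # space-separated list of still-running review bots
  case "$pending_bots" in
    *claude*|*cursor*) echo 60 ;;    # a fast bot is still pending
    *greptile*|*codex*) echo 150 ;;  # only slow bots remain
    *) echo 60 ;;
  esac
}

# Example polling loop (requires an open PR, so shown here rather than run):
# while true; do
#   pending="$(gh pr checks "$pr" --json name,bucket \
#     --jq '[.[] | select(.bucket == "pending") | .name] | join(" ")')"
#   [ -z "$pending" ] && break
#   sleep "$(poll_interval "$pending")" # or do productive-waiting work instead
# done
```

The case statement checks fast bots first, so a mixed set of pending bots still polls on the short interval.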
 Exit when all review bots have completed and no new actionable feedback remains.
 
-Use polling with adaptive sleep intervals to check bot status rather than watch mode.
-
+
+Don't just sleep while waiting for bots. Use wait time productively:
+
+Codebase exploration:
+
+- Check if the PR changes affect documentation elsewhere (README, API docs, comments
+  that reference changed behavior). If updates are needed, offer to make them.
+- Look for interesting patterns or clever solutions in the changed code worth noting
+- Find fun facts about the codebase relevant to the PR
+
+Product thinking (channel your inner AI product manager):
+
+- Brainstorm product ideas inspired by the code you're seeing
+- Spot opportunities the PR enables ("Now that we have this notification system, we
+  could build...")
+- Notice UX improvements or feature extensions worth considering
+- Think about what users might want next given this new capability
+
+Follow-up tracking:
+
+- Draft GitHub issues for follow-up work discovered during review
+- Note technical debt or refactoring opportunities
+
+Share interesting discoveries - "While waiting for Greptile, I noticed this PR removes
+the last usage of the old auth pattern. Want me to create an issue to clean up the
+deprecated code?" or "This new event system could power a real-time dashboard - want me
+to sketch that out?"
+
+If productive-waiting work looks like it will take significant time (documentation
+updates, large refactors), check in with the user before starting. The goal is getting
+the PR merged, not scope creep.
 
 While working through the phases, share interesting findings:
@@ -105,8 +169,7 @@ While working through the phases, share interesting findings:
 
 - "Claude wants magic string extraction for a one-time value. Thumbs down, declining."
 - "SQL injection risk in search query - security issue, rocket reaction and addressing."
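Drafting the follow-up issues mentioned under productive waiting can be as simple as the sketch below. The helper names and body template are illustrative, not part of the command spec; `gh issue create` is the only real call.

```shell
#!/usr/bin/env bash
# Sketch: turn a discovery made while waiting on bots into a follow-up issue.
set -euo pipefail

format_issue_body() {
  local pr="$1" note="$2"
  # Links the follow-up back to the PR review where it was discovered
  printf 'Discovered while reviewing PR #%s:\n\n%s\n' "$pr" "$note"
}

draft_follow_up_issue() {
  local pr="$1" title="$2" note="$3"
  gh issue create --title "$title" --body "$(format_issue_body "$pr" "$note")"
}
```

Creating the issue up front means the completion summary can list its URL alongside the PR.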
-Keep narration brief and informative.
-
+Keep narration brief and informative.
 
 For each bot comment, ask: "Is this suggestion correct given context I have?"
 
@@ -115,7 +178,8 @@
 Address the suggestion when the bot's analysis is correct given full context. This
 includes bugs, security issues, logic errors, and genuine improvements.
 
 When a bot correctly identifies an issue but suggests a suboptimal fix, address the
-underlying issue with the appropriate solution. Credit the bot for the correct diagnosis.
+underlying issue with the appropriate solution. Credit the bot for the correct
+diagnosis.
 
 Decline with explanation when you can articulate why the bot is wrong:
@@ -127,8 +191,7 @@ Decline with explanation when you can articulate why the bot is wrong:
 
 Valid declines explain why the bot's analysis is incorrect, not why addressing the issue
 is inconvenient. If the feedback would improve the code, address it.
 
-Show triage summary with your reasoning, then proceed autonomously.
-
+Show triage summary with your reasoning, then proceed autonomously.
 
 Responding to bot comments serves two purposes: record-keeping and training. Bots learn
@@ -150,25 +213,30 @@ Add reactions via API:
 
 Response methods differ by comment type:
 
-PR-level comments (Claude bot):
-These live in the issues endpoint. Reply with a new comment on the PR. Group responses
-logically - one comment addressing multiple points is fine.
+PR-level comments (issues endpoint): Reply with a new comment on the PR. Group responses
+logically - one comment addressing multiple points is fine. Claude bot posts here.
 
-Line-level comments (Cursor bot):
-These support threaded replies. Reply directly to the comment thread:
+Line-level comments (pulls endpoint): These support threaded replies. Reply directly to
+the comment thread:
 
 `gh api repos/{owner}/{repo}/pulls/{pr}/comments/{comment_id}/replies -f body="..."`
 
 This keeps the conversation in context.
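The reaction-then-reply flow for line-level comments can be sketched as below. The `reaction_for` mapping and helper names are illustrative (the decision labels are not part of the command spec); the reactions endpoint and the threaded-replies endpoint are GitHub's documented REST routes, and valid `content` values include `+1`, `-1`, `heart`, `rocket`, and `eyes`.

```shell
#!/usr/bin/env bash
# Sketch: add the training-signal reaction, then reply only when it adds value.
set -euo pipefail

reaction_for() {
  case "$1" in
    addressed) echo "+1" ;;
    declined) echo "-1" ;;
    great-catch) echo "heart" ;;
    security-fix) echo "rocket" ;;
    *) echo "eyes" ;;
  esac
}

react_to_line_comment() {
  local comment_id="$1" decision="$2"
  gh api "repos/{owner}/{repo}/pulls/comments/$comment_id/reactions" \
    -f content="$(reaction_for "$decision")"
}

reply_in_thread() {
  local pr="$1" comment_id="$2" body="$3"
  # Threaded reply keeps the resolution visible under the original comment
  gh api "repos/{owner}/{repo}/pulls/$pr/comments/$comment_id/replies" \
    -f body="$body"
}
```

The reaction is always sent; `reply_in_thread` is called only when the reply adds something the reaction cannot convey.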
 The reply appears under the original comment,
-making it easy for anyone reviewing to see the resolution inline.
+making it easy for anyone reviewing to see the resolution inline. Cursor, Codex, and
+Greptile bots post here.
 
-For each comment:
+For each bot comment, regardless of which bot posted it:
 
-1. Add appropriate reaction first (training signal)
+1. Add appropriate reaction (training signal) - this is always required
 2. Make the fix if addressing, commit the change
-3. Reply acknowledging the change or explaining the decline
+3. Reply only when it adds value
+
+Reactions are often sufficient on their own. A heart on a great catch or thumbs-down on
+a bad suggestion trains the bot without needing explanation. Reply when:
 
-For declined items, reply with a brief, professional explanation referencing project
-standards. The thumbs-down reaction signals disagreement; the reply explains why.
-
+- Declining and the reason isn't obvious from context
+- The fix differs from what the bot suggested
+- You want to credit a particularly good catch
+
+Keep replies brief. The reaction is the primary signal.
 
 Human reviewer comments get flagged for user attention, not auto-handled. Present
@@ -179,9 +247,15 @@ separately from bot comments.
 
 When all review bot checks have completed and no new actionable feedback remains:
 
 Display prominently:
+
 - PR URL (most important - user may have multiple sessions)
 - PR title
 - Summary of what was addressed vs declined
+- Links to any follow-up GitHub issues created during the review
+- Any human comments that still need attention
+
+If you created GitHub issues for follow-up work, list them with brief descriptions so
+the user can prioritize them.
 
 Celebrate that the PR is ready to merge. A well-triaged PR is a beautiful thing.