# ✨ Improve bot detection in address-pr-comments #30
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Conversation
## Code Review

I've reviewed PR #30 and the changes look excellent! This is a well-crafted improvement that follows the repository's prompt engineering standards.

### ✅ Strengths

1. Pattern Reinforcement (prompt-engineering.mdc compliance)
2. Goals Over Process
3. Positive Framing
4. Terminology Consistency
5. Addresses Real Issue

### 📋 Code Quality

### 🎯 Alignment with Project Standards

The changes perfectly follow the repository's documented prompt engineering rules.

### 🔒 Security & Performance

### 🧪 Test Plan

The test plan in the PR description is comprehensive and appropriate. The suggested verification steps are practical and would effectively validate the changes.

### 💡 Minor Suggestion (Optional)

Consider adding a comment in the bash examples explaining why we use `endswith`:

```bash
# Use endswith to match the bot username format (e.g., "claude[bot]", "cursor[bot]")
# This avoids false positives from usernames containing "[bot]" as a substring
gh api repos/{owner}/{repo}/issues/{pr}/comments --jq '.[] | select(.user.login | endswith("[bot]"))'
```

However, this is truly optional - the current version is already clear.

### ✨ Verdict

Approved! This is a high-quality improvement that demonstrates deep understanding of both the technical requirements and the prompt engineering principles that govern this codebase. The changes are minimal, focused, and directly address the identified issue.

The PR is ready to merge as-is. 🚀
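To illustrate the review's point about `endswith`, here is a minimal runnable sketch. It uses a hypothetical sample payload (mimicking the shape of the GitHub issues-comments response) instead of a live `gh api` call, which would require authentication:

```shell
# Hypothetical payload with the same shape as the GitHub issues-comments response.
comments='[
  {"user": {"login": "claude[bot]"}, "body": "review A"},
  {"user": {"login": "chatgpt-codex-connector[bot]"}, "body": "review B"},
  {"user": {"login": "human-user"}, "body": "thanks!"}
]'

# The pattern match catches every bot account, including ones never named explicitly.
echo "$comments" | jq -r '.[] | select(.user.login | endswith("[bot]")) | .user.login'
```

This prints both bot logins and skips the human commenter, which is exactly the behavior the exact-name filter failed to provide.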
- Add explicit jq patterns for filtering bot comments (endswith("[bot]"))
- Clarify that some bots only post comments without GitHub checks
- Remove over-prescriptive polling intervals
- Align terminology: "ending with" to match jq pattern
- Positive framing for productive-waiting section
Prevents Claude from filtering by specific bot names instead of using
the pattern match, which caused missed comments from bots like
chatgpt-codex-connector.
Co-Authored-By: Claude Opus 4.5 <[email protected]>
Force-pushed from e9be2eb to 862613a.
## Code Review - PR #30: Improve bot detection in address-pr-comments

### Summary

This PR makes excellent improvements to the `address-pr-comments` command documentation.

### Strengths

1. Executable patterns over prose (lines 92-100)

```bash
gh api repos/{owner}/{repo}/issues/{pr}/comments --jq '.[] | select(.user.login | endswith("[bot]"))'
```

This follows the "Pattern Reinforcement Through Examples" principle - giving Claude exact code to copy instead of requiring interpretation. The root cause analysis in the PR description confirms this was causing Claude to write incorrect filters.

2. Alignment with prompt-engineering.mdc principles

3. Improved bot coverage (lines 128-131)

### Code Quality

Version bumps are correct: all follow semantic versioning appropriately (patch-level changes for bug fixes).

### Potential Concerns

1. Documentation completeness

Suggestion: Add a brief comment in the code block:

```bash
# PR-level (issues endpoint) - Claude Code Review posts here
gh api repos/{owner}/{repo}/issues/{pr}/comments --jq '.[] | select(.user.login | endswith("[bot]"))'

# Line-level (pulls endpoint) - Cursor, Codex, Greptile post here
gh api repos/{owner}/{repo}/pulls/{pr}/comments --jq '.[] | select(.user.login | endswith("[bot]"))'
```

2. Pattern consistency in examples

3. Removed polling guidance (line 137)

> Poll periodically between productive-waiting activities rather than sleeping idle. Balance responsiveness with avoiding excessive API calls.

### Security & Performance

✅ No security concerns - read-only API operations

### Test Coverage

The PR description includes a test plan, but I don't see automated tests. These could live in a test suite or as documented manual test cases.

### Alignment with Project Standards

From CLAUDE.md:

From git-interaction.mdc:

### Recommendations

Minor improvements:

Follow-up work:

### Verdict

Approve with minor suggestions. This is a well-thought-out fix that addresses the root cause and aligns excellently with the project's prompt engineering principles. The changes are focused, well-documented, and improve the reliability of the command.

The explicit jq patterns are exactly the right solution - they teach by example rather than requiring interpretation, which is the core principle of good LLM-to-LLM communication.

🤖 Review generated by Claude Code
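To make the two-endpoint suggestion above concrete, here is a hedged sketch of combining both result sets before filtering, so no bot is missed regardless of where it posts. Sample JSON stands in for the live `gh api` responses (which require authentication), and the logins and bodies are hypothetical:

```shell
# Hypothetical stand-ins for the issues (PR-level) and pulls (line-level) responses.
pr_level='[{"user": {"login": "claude[bot]"}, "body": "summary review"}]'
line_level='[{"user": {"login": "cursor[bot]"}, "path": "file.md", "body": "inline note"}]'

# Concatenate both arrays, then apply the single endswith filter once.
jq -n -r --argjson a "$pr_level" --argjson b "$line_level" \
  '$a + $b | .[] | select(.user.login | endswith("[bot]")) | .user.login'
```

Merging first keeps the filter in one place, so a future change to the bot-matching rule only has to be made once.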
Clarifies which bots post at which API level:
- PR-level: Claude Code Review
- Line-level: Cursor, Codex, Greptile

Addresses Claude bot review feedback.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
## Code Review

This PR demonstrates excellent application of the repository's prompt engineering principles.

### ✅ Strengths

1. Copy-paste patterns over prose interpretation

The addition of explicit jq patterns is spot-on:

```bash
# PR-level (issues endpoint) - Claude Code Review posts here
gh api repos/{owner}/{repo}/issues/{pr}/comments --jq '.[] | select(.user.login | endswith("[bot]"))'

# Line-level (pulls endpoint) - Cursor, Codex, Greptile post here
gh api repos/{owner}/{repo}/pulls/{pr}/comments --jq '.[] | select(.user.login | endswith("[bot]"))'
```

This perfectly implements the "examples teach patterns" principle - instead of describing what to do, you show exactly what to copy. The executing LLM will use this verbatim rather than constructing its own interpretation.

2. Terminology alignment

Changing "any username containing `[bot]`" to "any username ending with `[bot]`" aligns the prose with the actual pattern (`endswith`).

3. Positive framing

"Use wait time productively" vs. "Don't just sleep" is a textbook application of the positive framing principle. The executing LLM gets a clear directive rather than having to construct and negate a pattern.

4. Goals over process

Removing specific polling intervals ("60-90 seconds", "2-3 minute intervals") trusts the executing model to determine appropriate timing. This aligns with the "assume the executing model is smarter" principle - future models can optimize polling based on actual bot response patterns.

5. Critical clarification on bot types

The addition that "some bots (chatgpt-codex-connector) only post comments without GitHub checks" prevents premature exit from the workflow. This is important defensive documentation.

### 🎯 Suggested Improvements

1. Pattern consistency in comments

The inline comments in the jq examples are helpful, but they introduce slight inconsistency:

```bash
# PR-level (issues endpoint) - Claude Code Review posts here
# Line-level (pulls endpoint) - Cursor, Codex, Greptile post here
```

Consider making these parallel in structure:

```bash
# PR-level (issues endpoint): Claude Code Review
# Line-level (pulls endpoint): Cursor, Codex, Greptile
```

The parallel structure reinforces the pattern more clearly.

2. Version bump alignment

The marketplace.json shows a patch version bump (9.4.0 → 9.4.1, 8.8.0 → 8.8.1). Given that this fixes a functional bug (incorrect jq patterns being generated), this is appropriate. However, you might consider documenting the versioning convention - is a patch bump appropriate for "fixes LLM interpretation bugs" vs. "fixes code bugs"?

3. Test plan checkbox

The PR description includes an unchecked test plan. This is great, but it would be valuable to know if these tests were executed. If they were, checking the boxes would demonstrate validation. If not, that's worth flagging.

### 📝 Minor Observations

File organization: The changes are isolated to the right files - command documentation and version numbers only. No scope creep.

Commit message: "✨ Improve bot detection in address-pr-comments" - clear, follows repo conventions with emoji prefix.

PR description: Excellent use of the "Root cause" section explaining why the change was needed. This creates valuable documentation for future maintainers.

### 🔒 No Security Concerns

The jq patterns use read-only API operations.

### ⚡ Performance

The change from potentially incorrect filtering (missing bots) to correct filtering (catching all bots) improves reliability without performance cost. The jq filtering is efficient.

### 🧪 Test Coverage

No automated tests are present for this command file, which is reasonable given it's LLM-executable documentation rather than code. The manual test plan in the PR description is appropriate for this type of change.

### Summary

This is a high-quality PR that demonstrates deep understanding of LLM-to-LLM communication patterns. The changes directly address the root cause (LLMs reproducing patterns from examples rather than interpreting prose) with the correct solution (providing exact patterns to copy).

Recommendation: Approve and merge after addressing the minor test plan checkbox question.

The changes align perfectly with the repository's prompt engineering principles.

Review generated with Claude Sonnet 4.5
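The terminology point above ("containing" vs. "ending with") can be demonstrated directly. In this sketch the `suspicious[bot]account` login is a hypothetical false positive, not a real account from this thread:

```shell
logins='["cursor[bot]", "suspicious[bot]account"]'

# "ending with": only true bot accounts match.
echo "$logins" | jq '[.[] | select(endswith("[bot]"))] | length'

# "containing": the mid-string "[bot]" also matches - a false positive.
echo "$logins" | jq '[.[] | select(contains("[bot]"))] | length'
```

The first filter matches one login, the second matches both, which is why the prose and the jq pattern needed to agree on "ending with".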
## Summary

- Add explicit jq patterns (`endswith("[bot]")`) - gives Claude the exact query to copy instead of interpreting prose
- Align terminology to "ending with `[bot]`" to match the jq pattern

## Root cause

Despite documentation saying "any username containing `[bot]`", Claude was writing `select(.user.login == "cursor[bot]")` instead of a pattern match. The fix is providing copy-paste patterns rather than relying on interpretation.

## Test plan

- [ ] Run `/address-pr-comments` on a PR with multiple bot types (Claude, Cursor, Codex)

🤖 Generated with Claude Code
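The root-cause failure described above can be reproduced with a small sketch. The sample data is hypothetical, and the `chatgpt-codex-connector[bot]` login is inferred from the commit message rather than confirmed:

```shell
comments='[
  {"user": {"login": "cursor[bot]"}},
  {"user": {"login": "chatgpt-codex-connector[bot]"}}
]'

# Exact-name filter (the failure mode): silently drops any bot not listed.
echo "$comments" | jq '[.[] | select(.user.login == "cursor[bot]")] | length'

# Pattern filter (the fix): catches every bot account.
echo "$comments" | jq '[.[] | select(.user.login | endswith("[bot]"))] | length'
```

The exact-name filter finds one comment while the pattern filter finds both, which is precisely the missed-comment bug this PR fixes.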