
Conversation

@TechNickAI
Owner

Summary

  • Add explicit jq patterns for filtering bot comments using endswith("[bot]") - gives Claude the exact query to copy instead of interpreting prose
  • Clarify that some bots (chatgpt-codex-connector) only post comments without GitHub checks - prevents premature exit
  • Remove over-prescriptive polling intervals (60-90 seconds, etc.) - trust the executing LLM
  • Align terminology: "ending with [bot]" to match the jq pattern
  • Positive framing: "Use wait time productively" instead of "Don't just sleep"

Root cause

Despite documentation saying "any username containing [bot]", Claude was writing select(.user.login == "cursor[bot]") instead of a pattern match. The fix is providing copy-paste patterns rather than relying on interpretation.
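
For illustration, here is the failure next to the fix (a minimal sketch; {owner}, {repo}, and {pr} are the command's own placeholders):

# Brittle: matches exactly one bot and silently drops every other reviewer
gh api repos/{owner}/{repo}/pulls/{pr}/comments --jq '.[] | select(.user.login == "cursor[bot]")'

# Robust: the copy-paste pattern, matching any username ending with [bot]
gh api repos/{owner}/{repo}/pulls/{pr}/comments --jq '.[] | select(.user.login | endswith("[bot]"))'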

Test plan

  • Run /address-pr-comments on a PR with multiple bot types (Claude, Cursor, Codex)
  • Verify all bot comments are fetched regardless of specific bot name
  • Confirm the jq pattern is used as-is from the documentation

🤖 Generated with Claude Code


@greptile-apps bot left a comment


Your free trial has ended. If you'd like to continue receiving code reviews, you can add a payment method here.

@claude

claude bot commented Jan 10, 2026

Code Review

I've reviewed PR #30 and the changes look excellent! This is a well-crafted improvement that follows the repository's prompt engineering standards.

✅ Strengths

1. Pattern Reinforcement (prompt-engineering.mdc compliance)

  • Provides explicit, copy-pasteable jq patterns instead of prose descriptions
  • This aligns perfectly with the "Pattern Reinforcement Through Examples" principle - showing the exact code to use rather than describing it
  • The jq examples are consistent and demonstrate the pattern clearly

2. Goals Over Process

  • Removes over-prescriptive polling intervals (60-90 seconds, etc.)
  • Trusts the executing LLM to determine appropriate polling frequency
  • This follows the "Assume the executing model is smarter" principle from the prompt engineering guidelines

3. Positive Framing

  • Changes "Don't just sleep" to "Use wait time productively"
  • This aligns with the documented principle: "Positive framing: 'Write in flowing prose' is clearer than 'Don't use markdown'"

4. Terminology Consistency

  • Updates "containing [bot]" to "ending with [bot]" to match the actual jq pattern
  • Maintains consistency throughout the document

5. Addresses Real Issue

  • The PR summary clearly explains the root cause: Claude was writing select(.user.login == "cursor[bot]") instead of using pattern matching
  • The fix directly solves this by providing copy-paste patterns

📋 Code Quality

  • Version bump: Appropriately incremented from 1.7.1 to 1.7.2
  • Documentation clarity: The added bash code blocks make the intent crystal clear
  • Completeness: Covers both PR-level and line-level comment endpoints
  • Edge case handling: Explicitly notes that some bots (chatgpt-codex-connector) only post comments without checks

🎯 Alignment with Project Standards

The changes perfectly follow the prompt engineering rules documented in rules/prompt-engineering.mdc:

  1. Pattern teaching through examples - Shows exact jq patterns to copy
  2. Goals over process - Removes prescriptive timing intervals
  3. Positive framing - Reframes negative instruction positively
  4. Consistency - Aligns terminology with actual implementation

🔒 Security & Performance

  • No security concerns: The jq patterns are safe and appropriate
  • Performance improvement: Removing rigid polling intervals allows the LLM to optimize based on actual conditions
  • Robustness: The endswith("[bot]") pattern is more flexible for future bot additions

🧪 Test Plan

The test plan in the PR description is comprehensive and appropriate. Suggested verification steps are practical and would effectively validate the changes.

💡 Minor Suggestion (Optional)

Consider adding a comment in the bash examples explaining why we use endswith rather than contains:

# Use endswith to match bot username format (e.g., "claude[bot]", "cursor[bot]")
# This avoids false positives from usernames that contain "[bot]" somewhere other than the end
gh api repos/{owner}/{repo}/issues/{pr}/comments --jq '.[] | select(.user.login | endswith("[bot]"))'

However, this is truly optional - the current version is already clear.

✨ Verdict

Approved! This is a high-quality improvement that demonstrates deep understanding of both the technical requirements and the prompt engineering principles that govern this codebase. The changes are minimal, focused, and directly address the identified issue.

The PR is ready to merge as-is. 🚀

- Add explicit jq patterns for filtering bot comments (endswith("[bot]"))
- Clarify that some bots only post comments without GitHub checks
- Remove over-prescriptive polling intervals
- Align terminology: "ending with" to match jq pattern
- Positive framing for productive-waiting section

Prevents Claude from filtering by specific bot names instead of using
the pattern match, which caused missed comments from bots like
chatgpt-codex-connector.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
@TechNickAI force-pushed the fix/address-pr-comments-bot-detection branch from e9be2eb to 862613a on January 10, 2026 at 19:32
@claude

claude bot commented Jan 10, 2026

Code Review - PR #30: Improve bot detection in address-pr-comments

Summary

This PR makes excellent improvements to the /address-pr-comments command by providing explicit, copy-paste jq patterns for bot detection and removing over-prescriptive timing details. The changes align well with the prompt engineering principles documented in rules/prompt-engineering.mdc.

Strengths

1. Executable patterns over prose (lines 92-100)
The addition of explicit jq patterns is a significant improvement:

gh api repos/{owner}/{repo}/issues/{pr}/comments --jq '.[] | select(.user.login | endswith("[bot]"))'

This follows the "Pattern Reinforcement Through Examples" principle - giving Claude exact code to copy instead of requiring interpretation. The root cause analysis in the PR description confirms this was causing Claude to write incorrect filters.

2. Alignment with prompt-engineering.mdc principles

  • ✅ Removes over-prescriptive details (polling intervals) - trusts the executing LLM
  • ✅ Positive framing: "Use wait time productively" vs "Don't just sleep" (line 144)
  • ✅ Consistent terminology: "ending with [bot]" matches the jq pattern
  • ✅ Clear over clever: Direct instructions with executable examples

3. Improved bot coverage (lines 128-131)
Explicitly noting that some bots (chatgpt-codex-connector) only post comments without checks prevents premature exit. This is a subtle but important edge case.
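
As a sketch of what avoiding that premature exit could look like (the check-runs route is a real GitHub REST endpoint; the decision logic here is illustrative, not the command's actual implementation):

# Checks alone are not a reliable completion signal; some bots never register one
gh api repos/{owner}/{repo}/commits/{sha}/check-runs --jq '.check_runs[] | "\(.name): \(.status)"'

# So also count bot comments before concluding that every reviewer has finished
gh api repos/{owner}/{repo}/issues/{pr}/comments --jq '[.[] | select(.user.login | endswith("[bot]"))] | length'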

Code Quality

Version bumps are correct:

  • .claude-plugin/marketplace.json: 9.4.0 → 9.4.1 (marketplace)
  • marketplace.json: 8.8.0 → 8.8.1 (plugin)
  • address-pr-comments.md: 1.7.1 → 1.7.2 (command)

All follow semantic versioning appropriately (patch-level changes for bug fixes).

Consistency:
The changes maintain consistency across all three references to bot detection:

  • Line 96: jq pattern with endswith("[bot]")
  • Line 109: prose description "ending with [bot]"
  • Line 118: behavior description "ending with [bot]"

Potential Concerns

1. Documentation completeness
The jq patterns show two separate API calls. Consider adding a note about why both endpoints are necessary (some context exists at lines 80-90, but could be clearer near the examples).

Suggestion: Add a brief comment in the code block:

# PR-level (issues endpoint) - Claude Code Review posts here
gh api repos/{owner}/{repo}/issues/{pr}/comments --jq '.[] | select(.user.login | endswith("[bot]"))'

# Line-level (pulls endpoint) - Cursor, Codex, Greptile post here
gh api repos/{owner}/{repo}/pulls/{pr}/comments --jq '.[] | select(.user.login | endswith("[bot]"))'

2. Pattern consistency in examples
The jq filter uses endswith("[bot]") which assumes all bot usernames end with exactly [bot]. This is correct for current bots, but consider documenting what happens if GitHub introduces bots with different naming conventions (e.g., a plain bot-username with no [bot] suffix).
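
For example, a quick local check (sample usernames made up for illustration) shows how the current pattern behaves across naming styles:

echo '["claude[bot]", "cursor[bot]", "github-actions[bot]", "some-bot"]' | jq '.[] | select(endswith("[bot]"))'
# Prints the first three; a hypothetical plain "some-bot" login would be missed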

3. Removed polling guidance (line 137)
While removing "60-90 seconds" aligns with the goal-focused principle, some executing LLMs might poll too aggressively or not at all. Consider adding a single high-level constraint:

Poll periodically between productive-waiting activities rather than sleeping idle. Balance responsiveness with avoiding excessive API calls.
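
A sketch of that constraint in practice (the interval and exit condition are assumptions, not part of the command):

# Illustrative loop: poll between productive-waiting tasks, stop once bot comments appear
while true; do
  count=$(gh api repos/{owner}/{repo}/pulls/{pr}/comments --jq '[.[] | select(.user.login | endswith("[bot]"))] | length')
  [ "$count" -gt 0 ] && break
  sleep 60  # placeholder interval; the command deliberately leaves timing to the executing LLM
done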

Security & Performance

✅ No security concerns - read-only API operations
✅ Performance improvement - clearer patterns reduce trial-and-error
✅ API usage - existing rate limit considerations still apply

Test Coverage

The PR description includes a test plan, but I don't see automated tests. Consider adding:

  • Integration test that verifies the jq pattern matches known bot usernames
  • Test case for mixed bot/human comments
  • Regression test for the original bug (filtering by exact bot name)

These could live in a test suite or as documented manual test cases.
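
One way such a regression test could look, exercising the jq filter offline against fabricated sample data (this test does not exist in the repo; it is a sketch):

# Hypothetical offline test: the filter should keep bot logins and drop human ones
sample='[{"user":{"login":"claude[bot]"}},{"user":{"login":"octocat"}}]'
matched=$(echo "$sample" | jq -r '.[] | select(.user.login | endswith("[bot]")) | .user.login')
[ "$matched" = "claude[bot]" ] && echo "PASS" || echo "FAIL: got $matched"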

Alignment with Project Standards

From CLAUDE.md:

  • ✅ Follows "goal-focused prompting" from prompt-engineering.mdc
  • ✅ Uses copy-paste patterns instead of interpretation
  • ✅ Removes over-prescriptive steps
  • ✅ Positive framing in instructions

From git-interaction.mdc:

  • ✅ Commit message follows format: ✨ Improve bot detection in address-pr-comments
  • ✅ PR description explains the "why" (root cause analysis)

Recommendations

Minor improvements:

  1. Add inline comments to the jq examples explaining which bots post where
  2. Consider adding a note about rate limiting if polling is very frequent
  3. Document the jq pattern as the canonical approach (so future updates don't regress)

Follow-up work:

  • Consider creating a shared jq pattern constant/variable if this filter is used elsewhere
  • Add integration tests for bot detection logic
  • Update any related documentation that mentions bot filtering

Verdict

Approve with minor suggestions. This is a well-thought-out fix that addresses the root cause and aligns excellently with the project's prompt engineering principles. The changes are focused, well-documented, and improve the reliability of the command.

The explicit jq patterns are exactly the right solution - they teach by example rather than requiring interpretation, which is the core principle of good LLM-to-LLM communication.


🤖 Review generated by Claude Code

Clarifies which bots post at which API level:
- PR-level: Claude Code Review
- Line-level: Cursor, Codex, Greptile

Addresses Claude bot review feedback.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
@claude

claude bot commented Jan 10, 2026

Code Review

This PR demonstrates excellent application of the prompt engineering principles from prompt-engineering.mdc. The changes directly address LLM pattern-matching behavior with concrete, actionable improvements.

✅ Strengths

1. Copy-paste patterns over prose interpretation

The addition of explicit jq patterns is spot-on:

# PR-level (issues endpoint) - Claude Code Review posts here
gh api repos/{owner}/{repo}/issues/{pr}/comments --jq '.[] | select(.user.login | endswith("[bot]"))'

# Line-level (pulls endpoint) - Cursor, Codex, Greptile post here
gh api repos/{owner}/{repo}/pulls/{pr}/comments --jq '.[] | select(.user.login | endswith("[bot]"))'

This perfectly implements the "examples teach patterns" principle - instead of describing what to do, you show exactly what to copy. The executing LLM will use this verbatim rather than constructing its own interpretation.

2. Terminology alignment

Changing "any username containing `[bot]`" to "any username ending with `[bot]`" aligns the prose with the actual pattern (endswith("[bot]")). This consistency eliminates ambiguity between what's described and what's shown.
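
To see why the alignment matters, compare the two jq predicates on a contrived username (made up for illustration):

echo '"my[bot]account"' | jq 'select(contains("[bot]"))'   # matches: a false positive
echo '"my[bot]account"' | jq 'select(endswith("[bot]"))'   # no output: correctly excluded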

3. Positive framing

"Use wait time productively" vs. "Don't just sleep" is a textbook application of the positive framing principle. The executing LLM gets a clear directive rather than having to construct and negate a pattern.

4. Goals over process

Removing specific polling intervals ("60-90 seconds", "2-3 minute intervals") trusts the executing model to determine appropriate timing. This aligns with the "assume the executing model is smarter" principle - future models can optimize polling based on actual bot response patterns.

5. Critical clarification on bot types

The addition that "some bots (chatgpt-codex-connector) only post comments without GitHub checks" prevents premature exit from the workflow. This is important defensive documentation.

🎯 Suggested Improvements

1. Pattern consistency in comments

The inline comments in the jq examples are helpful, but they introduce slight inconsistency:

# PR-level (issues endpoint) - Claude Code Review posts here
# Line-level (pulls endpoint) - Cursor, Codex, Greptile post here

Consider making these parallel in structure:

# PR-level (issues endpoint): Claude Code Review
# Line-level (pulls endpoint): Cursor, Codex, Greptile

The parallel structure reinforces the pattern more clearly.

2. Version bump alignment

The marketplace.json shows a patch version bump (9.4.0 → 9.4.1, 8.8.0 → 8.8.1). Given that this fixes a functional bug (incorrect jq patterns being generated), this is appropriate. However, you might consider documenting the versioning convention - is a patch bump appropriate for "fixes LLM interpretation bugs" vs. "fixes code bugs"?

3. Test plan checkbox

The PR description includes an unchecked test plan:

- [ ] Run `/address-pr-comments` on a PR with multiple bot types (Claude, Cursor, Codex)
- [ ] Verify all bot comments are fetched regardless of specific bot name
- [ ] Confirm the jq pattern is used as-is from the documentation

This is great, but it would be valuable to know if these tests were executed. If they were, checking the boxes would demonstrate validation. If not, that's worth flagging.

📝 Minor Observations

File organization: The changes are isolated to the right files - command documentation and version numbers only. No scope creep.

Commit message: "✨ Improve bot detection in address-pr-comments" - clear, follows repo conventions with emoji prefix.

PR description: Excellent use of the "Root cause" section explaining why the change was needed. This creates valuable documentation for future maintainers.

🔒 No Security Concerns

The jq patterns use endswith("[bot]") which is appropriate for matching GitHub bot usernames. No injection risks or security issues identified.

⚡ Performance

The change from potentially incorrect filtering (missing bots) to correct filtering (catching all bots) improves reliability without performance cost. The jq filtering is efficient.

🧪 Test Coverage

No automated tests are present for this command file, which is reasonable given it's LLM-executable documentation rather than code. The manual test plan in the PR description is appropriate for this type of change.

Summary

This is a high-quality PR that demonstrates deep understanding of LLM-to-LLM communication patterns. The changes directly address the root cause (LLMs reproducing patterns from examples rather than interpreting prose) with the correct solution (providing exact patterns to copy).

Recommendation: Approve and merge after addressing the minor test plan checkbox question.

The changes align perfectly with prompt-engineering.mdc principles and will improve reliability of the /address-pr-comments workflow.


Review generated with Claude Sonnet 4.5 following CLAUDE.md conventions

@TechNickAI merged commit 61d06f3 into main Jan 10, 2026
3 checks passed
@TechNickAI deleted the fix/address-pr-comments-bot-detection branch on January 10, 2026 at 19:44