[APIView Copilot] Include full comment thread in Report Issue context#15524
Merged
tjprescott merged 3 commits into main on May 8, 2026
Conversation
`get_comment_with_context` now also fetches sibling comments sharing the anchor's `ThreadId`, ordered by `CreatedOn`, and exposes them as `thread_comments`. `_lookup_comment_context` forwards the list (`id`, `text`, `source`, `author`, `created_on`) to the prompt context, and `_format_comment_context_for_prompt` renders them as a chronological `Thread:` transcript when there is more than one comment, so the LLM sees the full conversation, not just the anchor comment. Single-comment threads keep the existing single-line `Comment:` rendering.
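The rendering rule described above can be sketched as follows. This is a reconstruction, not the merged code: the helper name `render_comment_context` and the per-comment dict keys are assumptions based on the PR description.

```python
def render_comment_context(anchor_text: str, thread_comments: list[dict]) -> str:
    """Render either a one-line 'Comment:' or a chronological 'Thread:' transcript.

    thread_comments is assumed to be a list of dicts with 'comment_text',
    'created_by', and 'created_on' keys, already sorted by CreatedOn.
    """
    if len(thread_comments) <= 1:
        # Single-comment threads keep the existing single-line rendering.
        return f"Comment: {anchor_text}"
    lines = ["Thread:"]
    for c in thread_comments:
        # One transcript line per comment: @author (timestamp): text
        lines.append(f"@{c['created_by']} ({c['created_on']}): {c['comment_text']}")
    return "\n".join(lines)
```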
Pull request overview
This PR improves the APIView Copilot “Report Issue” flow by including the full comment thread (not just the anchor comment) in the context sent to the LLM and (when needed) in the deterministic fallback issue body, enabling better issue triage from the surrounding discussion.
Changes:
- Extend APIView DB lookup to fetch all comments in the same `ThreadId` (chronological) and return them as `thread_comments`.
- Update report-issue context mapping and prompt formatting to optionally emit a `Thread:` transcript when 2+ comments exist.
- Update the prompt documentation and unit tests to reflect and verify the new thread behavior.
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| packages/python-packages/apiview-copilot/src/_apiview.py | Adds thread-level Cosmos query and returns thread_comments alongside the anchor comment context. |
| packages/python-packages/apiview-copilot/src/_report_issue.py | Normalizes thread comment payload and formats a Thread: transcript for the prompt/fallback body. |
| packages/python-packages/apiview-copilot/prompts/report_issue/generate_issue.prompty | Documents the new Thread: block semantics for the LLM. |
| packages/python-packages/apiview-copilot/tests/report_issue_test.py | Adds/updates tests for thread mapping and transcript rendering behavior. |
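The thread-level Cosmos query added to `_apiview.py` presumably looks something like the sketch below. The `ThreadId` filter, `CreatedOn ASC` ordering, and `IsDeleted` predicate are all stated in this PR; the exact SELECT list and any additional partition/review filters are assumptions.

```python
# Sketch of the thread query; any ReviewId/partition-key filter is an assumption.
THREAD_QUERY = (
    "SELECT * FROM c "
    "WHERE c.ThreadId = @thread_id "
    "AND (NOT IS_DEFINED(c.IsDeleted) OR c.IsDeleted = false) "
    "ORDER BY c.CreatedOn ASC"
)

def thread_query_params(thread_id: str) -> list[dict]:
    """Build the parameter list in the shape the azure-cosmos SDK expects."""
    return [{"name": "@thread_id", "value": thread_id}]
```

With the azure-cosmos Python SDK, these would typically be passed as `container.query_items(query=THREAD_QUERY, parameters=thread_query_params(tid), ...)`.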
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
1. Avoid GitHub mention spam in the fallback issue body: `_format_comment_context_for_prompt` now accepts `escape_mentions=True` (used by `_build_fallback_body`) to wrap thread-author tokens in backticks (e.g. `` `@alice` ``) so the deterministic fallback body can never accidentally @-notify real users. The LLM prompt path still gets bare `@author` so the model sees authorship clearly.
2. Push the `IsDeleted` filter into the Cosmos thread query (`AND (NOT IS_DEFINED(c.IsDeleted) OR c.IsDeleted = false)`) so we no longer transfer/sort tombstoned comments client-side.
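The mention-escaping behavior in item 1 could be implemented roughly as below. This is a sketch: the regex's notion of what counts as a handle is an assumption, not the merged implementation.

```python
import re

def escape_mentions(text: str) -> str:
    """Wrap @handle tokens in backticks so GitHub will not notify the user.

    The (?<!`) lookbehind leaves already-escaped `@handle` tokens alone;
    the handle character class is an assumption about GitHub usernames.
    """
    return re.sub(r"(?<!`)@([A-Za-z0-9-]+)", r"`@\1`", text)
```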
tjprescott approved these changes on May 8, 2026
Summary
When a user clicks Report Issue on a comment in APIView, the backend currently sees only the anchor comment — sibling replies in the same thread are dropped, so the LLM has no view of the conversation that motivated the report.
This PR fetches the full thread and renders it for the LLM.
Changes
- `_apiview.py`: `get_comment_with_context` now queries the `Comments` container filtered by `ThreadId`, ordered by `CreatedOn ASC`; it skips `IsDeleted` entries and falls back to `[anchor]` when no siblings exist or the lookup fails. The thread is exposed as a new `thread_comments` field in the return dict (the anchor comment shape is unchanged).
- `_report_issue.py`: `_lookup_comment_context` forwards the thread as a normalized list of `{id, comment_text, comment_source, created_by, created_on}` dicts. `_format_comment_context_for_prompt` emits a chronological `Thread:` transcript (`@author (timestamp): text`) only when the thread has 2+ comments; single-comment threads keep the existing one-line `Comment:` rendering.
- `generate_issue.prompty`: documents the new `Thread:` block in the `comment_context` description so the LLM knows to use the full transcript instead of only the anchor `Comment:` line.

Tests
- Updated `TestLookupCommentContext.test_maps_db_payload_to_comment_context` to assert the new `thread_comments` mapping.
- Added `test_single_comment_thread_does_not_emit_transcript` and `test_multi_comment_thread_emits_transcript`.

Follow-up
Pairs with the FE fix in PR #15347 (commit c2a75678a), which makes the dialog card use the clicked comment instead of `comments[0]`.