feat: Support citation for agentic template #642
Conversation
🦋 Changeset detected. Latest commit: 8764d93. The changes in this PR will be included in the next version bump. This PR includes changesets to release 4 packages.
Walkthrough
This update introduces citation support for query engine tools and agent workflows, including new classes and functions for citation processing and synthesizer integration. It adds a LlamaCloud FastAPI example, updates OpenAI model defaults, exposes new public entities, and modifies file naming for LlamaCloud downloads. The chat UI dependency is also bumped.
Changes
Sequence Diagram(s)
sequenceDiagram
participant User
participant FastAPI_App
participant LlamaIndexServer
participant AgentWorkflow
participant QueryEngineTool
participant CitationSynthesizer
User->>FastAPI_App: Sends chat request
FastAPI_App->>LlamaIndexServer: Forwards request
LlamaIndexServer->>AgentWorkflow: Creates workflow (with citation support)
AgentWorkflow->>QueryEngineTool: Executes tool (enable_citation=True)
QueryEngineTool->>CitationSynthesizer: Synthesizes response with citations
CitationSynthesizer-->>QueryEngineTool: Returns citation-annotated response
QueryEngineTool-->>AgentWorkflow: Returns response
AgentWorkflow-->>LlamaIndexServer: Returns workflow output
LlamaIndexServer-->>FastAPI_App: Returns chat response
FastAPI_App-->>User: Delivers response with citations
…ource node transformation, and update response synthesizer in query engine creation
…lock.yaml; enhance citation instructions in prompts for clarity and correctness.
…owledge base access and citation format in query responses.
…t; enhance system prompt for clarity in knowledge base responses.
…ol description for clarity on usage and citation format.
Actionable comments posted: 2
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
- python/llama-index-server/uv.lock is excluded by !**/*.lock
📒 Files selected for processing (14)
- packages/server/package.json (1 hunks)
- python/llama-index-server/examples/llamacloud/main.py (1 hunks)
- python/llama-index-server/llama_index/server/api/callbacks/__init__.py (2 hunks)
- python/llama-index-server/llama_index/server/api/callbacks/agent_call_tool.py (1 hunks)
- python/llama-index-server/llama_index/server/api/callbacks/source_nodes.py (2 hunks)
- python/llama-index-server/llama_index/server/api/models.py (3 hunks)
- python/llama-index-server/llama_index/server/api/routers/chat.py (2 hunks)
- python/llama-index-server/llama_index/server/prompts.py (1 hunks)
- python/llama-index-server/llama_index/server/services/llamacloud/file.py (1 hunks)
- python/llama-index-server/llama_index/server/tools/index/__init__.py (1 hunks)
- python/llama-index-server/llama_index/server/tools/index/citation_agent.py (1 hunks)
- python/llama-index-server/llama_index/server/tools/index/node_citation_processor.py (1 hunks)
- python/llama-index-server/llama_index/server/tools/index/query.py (2 hunks)
- python/llama-index-server/pyproject.toml (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (6)
python/llama-index-server/llama_index/server/api/callbacks/source_nodes.py (1)
- python/llama-index-server/llama_index/server/api/models.py (1): SourceNodesEvent (59-71)
python/llama-index-server/llama_index/server/api/routers/chat.py (1)
- python/llama-index-server/llama_index/server/api/callbacks/agent_call_tool.py (1): AgentCallTool (11-26)
python/llama-index-server/llama_index/server/tools/index/citation_agent.py (1)
- python/llama-index-server/llama_index/server/tools/index/query.py (1): get_query_engine_tool (51-80)
python/llama-index-server/llama_index/server/tools/index/__init__.py (1)
- python/llama-index-server/llama_index/server/tools/index/node_citation_processor.py (1): NodeCitationProcessor (8-21)
python/llama-index-server/llama_index/server/services/llamacloud/file.py (1)
- python/llama-index-server/llama_index/server/api/models.py (2): SourceNodes (74-138), get_local_llamacloud_file_name (95-101)
python/llama-index-server/examples/llamacloud/main.py (1)
- python/llama-index-server/llama_index/server/api/models.py (1): ChatRequest (25-33)
🪛 GitHub Actions: Build Package
python/llama-index-server/llama_index/server/tools/index/citation_agent.py
[error] 9-9: Function is missing a type annotation for one or more arguments (mypy no-untyped-def error)
python/llama-index-server/llama_index/server/tools/index/query.py
[error] 79-79: "QueryEngineTool" has no attribute "citation_system_prompt" (mypy attr-defined error)
⏰ Context from checks skipped due to timeout of 90000ms (50)
🔇 Additional comments (26)
packages/server/package.json (1)
62-62: Bump @llamaindex/chat-ui dependency to 0.4.6
This aligns the UI package with the new citation-aware agent workflows. Please regenerate your lockfile (pnpm install) and verify there are no breaking changes introduced in v0.4.6.
python/llama-index-server/pyproject.toml (2)
64-64: LGTM: Dependency addition for citation functionality.
The addition of llama-index-embeddings-openai>=0.3.1 to the development dependencies aligns well with the citation support feature. The version constraint follows the established pattern in the project.
69-69: LGTM: Build artifacts configuration updated.
Adding the server resources directory to build artifacts ensures that necessary resource files are included in the package distribution.
python/llama-index-server/llama_index/server/api/callbacks/__init__.py (1)
1-1: LGTM: Proper import and export of new callback.
The AgentCallTool import and export follows the established pattern in this module, making the new callback properly available through the package's public API.
Also applies to: 14-14
python/llama-index-server/llama_index/server/api/routers/chat.py (1)
16-16: LGTM: Clean integration of agent tool callback.
The AgentCallTool is properly imported and instantiated in the callbacks list, enabling conversion of tool call events to agent run events during chat workflows. This integration follows the established pattern for adding event callbacks.
Also applies to: 57-57
python/llama-index-server/llama_index/server/tools/index/__init__.py (1)
2-2: LGTM: Proper import and export of citation processor.
The NodeCitationProcessor import and export follows the established pattern, making the new citation processing functionality properly available through the package's public API.
Also applies to: 4-4
python/llama-index-server/llama_index/server/prompts.py (2)
10-16: LGTM: Formatting improvements for markdown rendering.
The formatting adjustments to the code block delimiters ensure proper markdown rendering in documentation or UI contexts.
17-51: Well-structured citation prompt template.
The CITATION_PROMPT constant provides comprehensive guidance for citation-aware response generation with:
- Clear context and query placeholders
- Specific citation format requirements ([citation:id])
- Detailed example demonstrating proper usage
- Explicit requirements for citation inclusion
The prompt effectively balances instruction clarity with practical examples, which should help ensure consistent citation formatting across the system.
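For illustration, a prompt with those ingredients might look roughly like the sketch below; the placeholder names and example wording are assumptions, not the actual CITATION_PROMPT contents.

```python
# Hypothetical shape only; the real CITATION_PROMPT in prompts.py will differ.
CITATION_PROMPT_SKETCH = """\
Context (each chunk is tagged with a citation_id):
{context_str}

Query: {query_str}

Answer the query using only the context above. After every statement that
relies on a context chunk, append a citation of the form [citation:id],
where id is that chunk's citation_id.

Example:
  The company reported record revenue in 2024 [citation:node-42].
"""
```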
python/llama-index-server/llama_index/server/tools/index/node_citation_processor.py (1)
8-21: Clean and focused implementation for citation support.
The NodeCitationProcessor provides a simple, effective solution for adding citation metadata to nodes. The implementation correctly:
- Inherits from the appropriate base class
- Implements the required _postprocess_nodes method
- Copies node IDs to citation_id metadata for downstream citation construction
- Maintains the original nodes list structure
The approach is lightweight and integrates well with the citation system described in the prompts.
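A minimal sketch of what such a postprocessor could look like, assuming the standard BaseNodePostprocessor interface from llama-index core (the actual file contents may differ):

```python
from typing import List, Optional

from llama_index.core.postprocessor.types import BaseNodePostprocessor
from llama_index.core.schema import NodeWithScore, QueryBundle


class NodeCitationProcessor(BaseNodePostprocessor):
    """Copy each node's ID into `citation_id` metadata so downstream
    prompts can emit [citation:id] markers."""

    def _postprocess_nodes(
        self,
        nodes: List[NodeWithScore],
        query_bundle: Optional[QueryBundle] = None,
    ) -> List[NodeWithScore]:
        for node_with_score in nodes:
            node_with_score.node.metadata["citation_id"] = node_with_score.node.node_id
        return nodes
```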
python/llama-index-server/llama_index/server/services/llamacloud/file.py (1)
159-160: Good refactoring to centralize file naming logic.
The change consolidates file naming logic by delegating to SourceNodes.get_local_llamacloud_file_name(), which provides consistent sanitization and pipeline ID appending. This eliminates code duplication and ensures uniform file naming across the system. The method is properly imported from llama_index.server.api.models (line 12) and the usage is straightforward.
python/llama-index-server/llama_index/server/api/callbacks/source_nodes.py (2)
1-1: Correct import addition for Optional type annotation.
The Optional import is properly added to support the updated return type annotation.
20-29: Excellent error handling improvement.
The addition of error checking prevents processing when tool_output.is_error is true, which is a robust defensive programming practice. The changes:
- Properly handle tool execution failures by returning None
- Update the return type annotation to Optional[SourceNodesEvent]
- Maintain the existing logic for successful tool outputs
- Follow the early return pattern for cleaner code flow
This prevents potential exceptions or incorrect data processing downstream when tools encounter errors.
python/llama-index-server/llama_index/server/api/models.py (2)
94-101: LGTM! Well-designed file name sanitization.
This method effectively sanitizes LlamaCloud file names by:
- Preserving the original file extension
- Replacing invalid characters with underscores using a clear regex pattern
- Appending the pipeline ID for uniqueness
- Centralizing the file naming logic
The regex pattern [^A-Za-z0-9_\-] appropriately allows alphanumeric characters, underscores, and hyphens while replacing everything else.
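As a rough illustration of the behavior described above (the real logic is a method on SourceNodes, so the standalone signature here is an assumption):

```python
import os
import re


def get_local_llamacloud_file_name(file_name: str, pipeline_id: str) -> str:
    # Keep the extension, replace disallowed characters with underscores, then
    # append the pipeline ID so downloads from different pipelines cannot collide.
    base, extension = os.path.splitext(file_name)
    sanitized = re.sub(r"[^A-Za-z0-9_\-]", "_", base)
    return f"{sanitized}_{pipeline_id}{extension}"
```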
118-118: Good refactoring to use centralized file naming.
The update correctly utilizes the new get_local_llamacloud_file_name method, improving consistency and maintainability compared to the previous inline string concatenation approach.
python/llama-index-server/llama_index/server/api/callbacks/agent_call_tool.py (2)
11-22: LGTM! Clean callback implementation for tool call events.
The callback logic is well-implemented:
- Proper type checking to filter ToolCall events while excluding ToolCallResult
- Clear and informative message formatting for the AgentRunEvent
- Appropriate fallback behavior (returns event unchanged if not a tool call)
The adapter pattern is well-suited for this use case.
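A rough sketch of such an adapter, assuming the ToolCall and ToolCallResult events from llama_index.core.agent.workflow; the exact event type the real callback emits (AgentRunEvent) and its construction are not shown here, and the message wording is illustrative only:

```python
from typing import Any

from llama_index.core.agent.workflow import ToolCall, ToolCallResult


class AgentCallTool:
    """Translate raw tool-call events into a human-readable progress message."""

    async def run(self, event: Any) -> Any:
        # Only plain ToolCall events are translated; ToolCallResult events
        # (and anything else) pass through unchanged.
        if isinstance(event, ToolCall) and not isinstance(event, ToolCallResult):
            message = f"Calling tool: {event.tool_name}"  # illustrative wording
            return event, message
        return event

    @classmethod
    def from_default(cls) -> "AgentCallTool":
        return cls()
```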
24-26: Simple and appropriate factory method.
The from_default class method follows the established pattern for callback instantiation without requiring additional configuration.
25-26: LGTM! Intelligent agent class selection.
The logic to choose between FunctionAgent and ReActAgent based on the LLM's function calling capabilities is well-designed and ensures optimal agent performance.
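A minimal sketch of that selection logic, assuming the public FunctionAgent and ReActAgent classes from llama_index.core.agent.workflow:

```python
from typing import List

from llama_index.core.agent.workflow import FunctionAgent, ReActAgent
from llama_index.core.llms import LLM
from llama_index.core.tools import BaseTool


def create_agent(llm: LLM, tools: List[BaseTool], system_prompt: str):
    # Function-calling models can use the cheaper FunctionAgent path;
    # other models fall back to ReAct-style prompting.
    if llm.metadata.is_function_calling_model:
        return FunctionAgent(tools=tools, llm=llm, system_prompt=system_prompt)
    return ReActAgent(tools=tools, llm=llm, system_prompt=system_prompt)
```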
python/llama-index-server/examples/llamacloud/main.py (4)
14-19: LGTM! Proper environment variable validation.
The environment variable checks are appropriate and provide clear error messages for missing required configuration. This follows good practices for application configuration validation.
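A sketch of that kind of startup check; the variable names below are assumptions, and the example may validate different ones:

```python
import os

REQUIRED_ENV_VARS = ["LLAMA_CLOUD_API_KEY", "OPENAI_API_KEY"]  # assumed names

missing = [name for name in REQUIRED_ENV_VARS if not os.getenv(name)]
if missing:
    raise ValueError(
        f"Missing required environment variables: {', '.join(missing)}"
    )
```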
24-39: Well-structured workflow creation with citation support.
The workflow creation logic is well-implemented:
- Proper error handling for missing index
- Clear citation tool configuration
- Good system prompt composition that combines base instructions with citation-specific guidance
The approach of appending the citation system prompt maintains clarity while ensuring citation functionality is properly communicated to the agent.
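Roughly, the composition described above might look like the sketch below. get_query_engine_tool, enable_citation, and CITATION_SYSTEM_PROMPT are named in this PR, but the import paths, the get_query_engine_tool arguments, and the AgentWorkflow wiring shown here are assumptions:

```python
from llama_index.core.agent.workflow import AgentWorkflow
from llama_index.core.settings import Settings

# Assumed import paths for the entities added in this PR.
from llama_index.server.tools.index import get_query_engine_tool
from llama_index.server.tools.index.citation import CITATION_SYSTEM_PROMPT


def create_workflow(index) -> AgentWorkflow:
    if index is None:
        raise RuntimeError("Index not found; make sure the LlamaCloud index exists.")
    # enable_citation=True attaches the citation-aware postprocessing.
    query_tool = get_query_engine_tool(index, enable_citation=True)
    system_prompt = (
        "You are a helpful assistant answering questions from the knowledge base.\n"
        + CITATION_SYSTEM_PROMPT  # citation-specific guidance appended at the end
    )
    return AgentWorkflow.from_tools_or_functions(
        [query_tool], llm=Settings.llm, system_prompt=system_prompt
    )
```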
42-51: LGTM! Proper FastAPI app configuration.
The app configuration appropriately:
- Sets development environment
- Disables next question suggestions (suitable for citation-focused use)
- Enables LlamaCloud index selector for UI flexibility
21-21: ✅ Verification successful
Verify the OpenAI model name.
The model "gpt-4.1" appears to be non-standard. Common OpenAI model names include "gpt-4", "gpt-4-turbo", "gpt-4o", etc. Please verify this is the correct model identifier.
🌐 Web query:
What are the current available OpenAI GPT-4 model names and is "gpt-4.1" a valid model?
💡 Result:
As of May 2025, OpenAI offers several models in the GPT-4 series, each tailored for specific applications:
GPT-4o: Released in May 2024, GPT-4o is a versatile, multimodal model capable of processing and generating text, images, and audio. It is faster and more cost-effective than its predecessor, GPT-4 Turbo, and is available to all users within a usage limit, with higher limits for paid subscribers. (en.wikipedia.org)
GPT-4.5: Introduced in February 2025, GPT-4.5, codenamed "Orion," is a large language model designed to reduce inaccuracies and enhance pattern recognition, creativity, and user interaction. It is available to ChatGPT Plus and Pro subscribers across various platforms and through the OpenAI API. (en.wikipedia.org)
GPT-4.1: Launched in April 2025, GPT-4.1, along with its variants GPT-4.1 mini and GPT-4.1 nano, offers significant improvements in coding capabilities, instruction following, and long-context comprehension. These models support up to 1 million tokens of context and are accessible exclusively via OpenAI's API. (openai.com)
Regarding "gpt-4.1," it is indeed a valid model name, referring to the GPT-4.1 series mentioned above.
Additionally, OpenAI has developed models beyond the GPT-4 series:
o1: Released in December 2024, o1 is designed for complex reasoning tasks, such as advanced mathematics and scientific analysis. It is available to ChatGPT Plus and Team members. (en.wikipedia.org)
o3: Unveiled in December 2024, o3 is the successor to o1, offering enhanced reasoning and problem-solving capabilities. It was released to all paid users in April 2025. (en.wikipedia.org)
o4-mini: Released in April 2025, o4-mini is a generative pre-trained transformer model capable of processing both text and images. It is designed to enhance decision-making across various sectors and is available to all ChatGPT users, including free-tier users. (en.wikipedia.org)
These models provide a range of capabilities to meet diverse user needs, from general-purpose tasks to specialized applications requiring advanced reasoning and multimodal processing.
Citations:
- 1: https://en.wikipedia.org/wiki/GPT-4o?utm_source=openai
- 2: https://en.wikipedia.org/wiki/GPT-4.5?utm_source=openai
- 3: https://openai.com/index/gpt-4-1/?utm_source=openai
- 4: https://en.wikipedia.org/wiki/OpenAI_o1?utm_source=openai
- 5: https://en.wikipedia.org/wiki/OpenAI?utm_source=openai
- 6: https://en.wikipedia.org/wiki/OpenAI_o4-mini?utm_source=openai
Model name validation resolved.
The model identifier “gpt-4.1” is valid—it was launched in April 2025 as part of OpenAI’s GPT-4.1 series. No change needed here.
python/llama-index-server/llama_index/server/tools/index/query.py (5)
1-1: LGTM! Clean import organization and logging setup.
The new imports are well-organized and all necessary for the citation functionality. The logger setup follows standard practices.
Also applies to: 7-11, 13-13
16-17: Good API design with sensible defaults.
The enable_citation parameter with a default value of True provides good backward compatibility while enabling the new feature by default.
Also applies to: 24-24
31-47: Excellent conflict handling and citation setup logic.
The implementation properly handles the case where a custom response synthesizer conflicts with citation functionality by logging a warning rather than silently overriding. The citation setup is well-structured with environment variable support for the prompt template.
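A condensed sketch of that warn-don't-override pattern; the function shape and argument names are assumptions, while NodeCitationProcessor is the postprocessor added in this PR (its import path is assumed):

```python
import logging
from typing import Optional

from llama_index.core.response_synthesizers import BaseSynthesizer
from llama_index.core.tools import QueryEngineTool

from llama_index.server.tools.index import NodeCitationProcessor  # assumed path

logger = logging.getLogger(__name__)


def build_query_engine_tool(
    index,
    enable_citation: bool = True,
    response_synthesizer: Optional[BaseSynthesizer] = None,
) -> QueryEngineTool:
    node_postprocessors = []
    if enable_citation:
        if response_synthesizer is not None:
            # Warn instead of silently replacing the caller's synthesizer.
            logger.warning(
                "A custom response synthesizer was provided; it may conflict "
                "with citation support."
            )
        node_postprocessors.append(NodeCitationProcessor())
    query_engine = index.as_query_engine(
        node_postprocessors=node_postprocessors,
        response_synthesizer=response_synthesizer,
    )
    return QueryEngineTool.from_defaults(query_engine=query_engine)
```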
70-72: Clear and informative description enhancement.
The conditional addition of citation information to the tool description provides users with clear expectations about the output format when citations are enabled.
79-79: ❓ Verification inconclusive
Fix the attribute assignment - QueryEngineTool doesn't have citation_system_prompt attribute.
The pipeline failure indicates that QueryEngineTool doesn't have a citation_system_prompt attribute. This will cause a runtime AttributeError. You need to either:
- Add this attribute to the QueryEngineTool class definition, or
- Use a different approach to store this citation prompt (e.g., metadata, custom tool wrapper)
🏁 Script executed:
#!/bin/bash
# Verify the QueryEngineTool class definition and available attributes
ast-grep --pattern 'class QueryEngineTool($_) { $$$ }'
Length of output: 58
🏁 Script executed:
#!/bin/bash
# Searching for QueryEngineTool class definition and usage, and attribute references
rg -R "class QueryEngineTool" -n .
rg -R "QueryEngineTool" -n .
rg -R "citation_system_prompt" -n .
Length of output: 1316
🏁 Script executed:
#!/bin/bash
# Locate all query.py files to verify the path
echo "Finding all query.py files:"
find . -type f -name "query.py"
# Show the first 200 lines of the specific file under review
echo -e "\nContents of python/llama-index-server/llama_index/server/tools/index/query.py:"
sed -n '1,200p' python/llama-index-server/llama_index/server/tools/index/query.py
# Search for QueryEngineTool references and class definitions across the repo
echo -e "\nSearching for 'QueryEngineTool' across the repo:"
grep -R "QueryEngineTool" -n .
# Search for citation_system_prompt usage
echo -e "\nSearching for 'citation_system_prompt' assignments:"
grep -R "citation_system_prompt" -n .
Length of output: 8519
Verify QueryEngineTool.citation_system_prompt Attribute
Unable to locate a local definition of QueryEngineTool in the codebase, so it's unclear whether the upstream class supports a citation_system_prompt assignment. If this attribute is missing at runtime, you'll hit an AttributeError.
• File: python/llama-index-server/llama_index/server/tools/index/query.py
Line 79: tool.citation_system_prompt = "Answer the user question with citations for the parts that uses the information from the knowledge base."
Please confirm (against the installed llama_index package) that QueryEngineTool exposes a citation_system_prompt property. If it does not, consider one of the following:
- Subclass or monkey-patch QueryEngineTool to add this attribute (see the sketch after the tool output below).
- Pass the citation prompt via an existing constructor or metadata parameter.
- Wrap QueryEngineTool in a custom tool that injects your prompt.
🧰 Tools
🪛 GitHub Actions: Build Package
[error] 79-79: "QueryEngineTool" has no attribute "citation_system_prompt" (mypy attr-defined error)
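One way to satisfy the type checker for the first option (a subclass that declares the attribute) could look like this minimal, illustrative sketch; the class name is hypothetical:

```python
from typing import Optional

from llama_index.core.tools import QueryEngineTool


class CitationQueryEngineTool(QueryEngineTool):
    # Declaring the field here lets instances carry a citation prompt without
    # assigning an unknown attribute to the upstream QueryEngineTool.
    citation_system_prompt: Optional[str] = None
```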
…ject.toml and uv.lock to exclude llama-index-embeddings-openai; adjust query engine to disable citation by default.
Actionable comments posted: 3
🧹 Nitpick comments (3)
python/llama-index-server/llama_index/server/api/callbacks/source_nodes.py (1)
57-61: Simplify nested if statements.
The static analysis correctly suggests combining the nested if statements for better readability.
Apply this diff to simplify the nested conditions:
 async def run(self, event: Any) -> Any:
-    if isinstance(event, ToolCallResult):
-        if self.is_query_engine_tool(event.tool_name):
-            return event, self.transform_tool_call_result(event)
+    if isinstance(event, ToolCallResult) and self.is_query_engine_tool(event.tool_name):
+        return event, self.transform_tool_call_result(event)
     return event
🧰 Tools
🪛 Ruff (0.11.9)
58-59: Use a single if statement instead of nested if statements (SIM102)
python/llama-index-server/llama_index/server/tools/index/citation.py (2)
68-73: Improve tool name modification approach.
The automatic suffix addition could potentially create naming conflicts or unexpected behavior. Consider providing more control to the caller.
Consider these improvements:
- Return the modified tool name so callers are aware of the change
- Add a parameter to control whether automatic suffix addition is allowed
- Consider validation for potential naming conflicts
Example enhancement:
-    if not tool_name.endswith("_query_engine"):
-        logger.warning(
-            "Your query engine tool name is required to end with '_query_engine'. "
-            "We will add the '_query_engine' suffix to the tool name."
-        )
-        tool_name += "_query_engine"
+    if not tool_name.endswith("_query_engine"):
+        new_tool_name = tool_name + "_query_engine"
+        logger.warning(
+            f"Query engine tool name must end with '_query_engine'. "
+            f"Renamed from '{tool_name}' to '{new_tool_name}'."
+        )
+        query_engine_tool.metadata.name = new_tool_name
80-80: Consider making the citation format configurable.
The hardcoded citation format [citation:id] in the tool description might not be suitable for all use cases. Consider making the citation format configurable:
-    query_engine_tool.metadata.description += "\nThe output would include citations with the format [citation:id] for each chunk of information in the knowledge base."
+    citation_format = "[citation:id]"  # Could be a parameter
+    query_engine_tool.metadata.description += f"\nThe output would include citations with the format {citation_format} for each chunk of information in the knowledge base."
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (12)
- .changeset/few-news-marry.md (1 hunks)
- .changeset/rich-nights-hug.md (1 hunks)
- .changeset/small-insects-hug.md (1 hunks)
- .changeset/stale-things-beg.md (1 hunks)
- packages/create-llama/templates/components/use-cases/python/agentic_rag/workflow.py (1 hunks)
- packages/create-llama/templates/types/llamaindexserver/fastapi/app/settings.py (1 hunks)
- python/llama-index-server/examples/llamacloud/main.py (1 hunks)
- python/llama-index-server/llama_index/server/api/callbacks/source_nodes.py (1 hunks)
- python/llama-index-server/llama_index/server/prompts.py (1 hunks)
- python/llama-index-server/llama_index/server/tools/index/__init__.py (1 hunks)
- python/llama-index-server/llama_index/server/tools/index/citation.py (1 hunks)
- python/llama-index-server/llama_index/server/tools/index/query.py (2 hunks)
✅ Files skipped from review due to trivial changes (3)
- .changeset/stale-things-beg.md
- packages/create-llama/templates/types/llamaindexserver/fastapi/app/settings.py
- .changeset/small-insects-hug.md
🚧 Files skipped from review as they are similar to previous changes (4)
- python/llama-index-server/llama_index/server/prompts.py
- python/llama-index-server/llama_index/server/tools/index/__init__.py
- python/llama-index-server/llama_index/server/tools/index/query.py
- python/llama-index-server/examples/llamacloud/main.py
🧰 Additional context used
🪛 Ruff (0.11.9)
python/llama-index-server/llama_index/server/tools/index/citation.py
43-43: Star-arg unpacking after a keyword argument is strongly discouraged (B026)
python/llama-index-server/llama_index/server/api/callbacks/source_nodes.py
24-27: Do not use mutable data structures for argument defaults. Replace with None; initialize within function (B006)
58-59: Use a single if statement instead of nested if statements (SIM102)
⏰ Context from checks skipped due to timeout of 90000ms (54)
🔇 Additional comments (9)
.changeset/few-news-marry.md (1)
1-6: LGTM! Clear changeset documentation.
The changeset clearly documents the addition of agent widget display functionality during tool calls.
.changeset/rich-nights-hug.md (1)
1-6: LGTM! Accurate changeset documentation.
The changeset properly documents the addition of citation support for query engine tools.
packages/create-llama/templates/components/use-cases/python/agentic_rag/workflow.py (3)
16-17: LGTM! Proper citation enablement.
Enabling citation support in the query engine tool is correctly implemented and aligns with the PR objectives.
19-22: LGTM! Well-structured system prompt construction.
The system prompt construction properly combines the base assistant prompt with the citation-specific instructions from the query tool.
26-27: LGTM! Good use of Settings.llm.
Using Settings.llm instead of a hardcoded OpenAI model provides better flexibility and configuration management.
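The assumed configuration pattern is the usual llama-index one: set the model once on Settings and let the workflow pick it up.

```python
from llama_index.core.settings import Settings
from llama_index.llms.openai import OpenAI

# Configured once at startup (model name from this PR's defaults); any code
# that reads Settings.llm then uses the same model.
Settings.llm = OpenAI(model="gpt-4.1")
```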
python/llama-index-server/llama_index/server/api/callbacks/source_nodes.py (2)
38-41: LGTM! Well-implemented tool identification logic.
The is_query_engine_tool method provides good flexibility for identifying query tools using either the deprecated exact name matching or the new suffix-based approach.
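Illustratively, that matching rule could be expressed as a small helper like the one below; the standalone form and parameter names are assumptions, since the real implementation is a method on the callback:

```python
from typing import Optional


def is_query_engine_tool(tool_name: str, deprecated_tool_name: Optional[str] = None) -> bool:
    # Exact match on the deprecated configured name, or the newer
    # "_query_engine" suffix convention.
    if deprecated_tool_name is not None and tool_name == deprecated_tool_name:
        return True
    return tool_name.endswith("_query_engine")
```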
43-55: LGTM! Proper error handling and source node extraction.
The enhanced transform_tool_call_result method correctly handles error cases and validates the presence of source nodes before creating the event.
python/llama-index-server/llama_index/server/tools/index/citation.py (2)
16-29: LGTM! Clean and focused implementation.
The NodeCitationProcessor class is well-designed with a clear purpose - adding citation metadata by copying the node ID. The implementation is straightforward and follows the postprocessor pattern correctly.
47-51: LGTM! Clear guidance for agents.
The CITATION_SYSTEM_PROMPT provides clear instructions for agents to maintain citation integrity, which is essential for citation functionality.
…suffixes parameter; update CitationSynthesizer constructor to accept keyword arguments for improved flexibility.
Actionable comments posted: 0
🧹 Nitpick comments (1)
python/llama-index-server/llama_index/server/api/callbacks/source_nodes.py (1)
60-63: Simplify nested if statements.
The static analysis correctly suggests combining the nested if statements for better readability.
Apply this diff to simplify the logic:
 async def run(self, event: Any) -> Any:
-    if isinstance(event, ToolCallResult):
-        if self.is_query_engine_tool(event.tool_name):
-            return event, self.transform_tool_call_result(event)
+    if isinstance(event, ToolCallResult) and self.is_query_engine_tool(event.tool_name):
+        return event, self.transform_tool_call_result(event)
     return event
🧰 Tools
🪛 Ruff (0.11.9)
60-61: Use a single if statement instead of nested if statements (SIM102)
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- python/llama-index-server/llama_index/server/api/callbacks/source_nodes.py (1 hunks)
- python/llama-index-server/llama_index/server/tools/index/citation.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- python/llama-index-server/llama_index/server/tools/index/citation.py
🧰 Additional context used
🪛 Ruff (0.11.9)
python/llama-index-server/llama_index/server/api/callbacks/source_nodes.py
60-61: Use a single if statement instead of nested if statements (SIM102)
⏰ Context from checks skipped due to timeout of 90000ms (56)
🔇 Additional comments (6)
python/llama-index-server/llama_index/server/api/callbacks/source_nodes.py (6)
1-2: LGTM: Clean import additions
The logging import and updated type hints are appropriate for the new functionality.
8-9: LGTM: Standard logger initialization
Proper logger setup following Python best practices.
16-20: LGTM: Clear and accurate docstring
The updated docstring clearly documents the new parameters and their defaults.
22-39: Excellent refactoring with proper deprecation handling
The mutable default argument issue from the previous review has been correctly fixed. The deprecation warning implementation follows best practices and maintains backward compatibility.
40-43: LGTM: Well-designed helper method
The method cleanly handles both the deprecated exact match and the new suffix-based matching approach.
45-57: LGTM: Robust error handling
The method properly checks for errors in tool output and validates the presence of source nodes before creating the event.
…low creation and update system prompt for improved clarity.
…eter and improve source node extraction logic; update citation description for clarity.
Actionable comments posted: 1
🧹 Nitpick comments (1)
python/llama-index-server/llama_index/server/utils/llamacloud.py (1)
1-4: Consider adding pathlib import for more modern path handling.
While the current implementation works, consider adding pathlib for more robust path operations if file path manipulation becomes more complex in the future.
 import os
+from pathlib import Path
 import re
 from typing import Any, Dict, Optional
This would allow using Path(llamacloud_file_name).suffix instead of os.path.splitext() for potentially cleaner code, though the current approach is perfectly valid.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- python/llama-index-server/llama_index/server/api/callbacks/source_nodes.py (1 hunks)
- python/llama-index-server/llama_index/server/api/models.py (2 hunks)
- python/llama-index-server/llama_index/server/services/llamacloud/file.py (2 hunks)
- python/llama-index-server/llama_index/server/tools/index/citation.py (1 hunks)
- python/llama-index-server/llama_index/server/utils/llamacloud.py (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- python/llama-index-server/llama_index/server/services/llamacloud/file.py
🚧 Files skipped from review as they are similar to previous changes (2)
- python/llama-index-server/llama_index/server/api/models.py
- python/llama-index-server/llama_index/server/tools/index/citation.py
⏰ Context from checks skipped due to timeout of 90000ms (57)
🔇 Additional comments (5)
python/llama-index-server/llama_index/server/utils/llamacloud.py (1)
31-32: LGTM! Simple and effective helper function.
The function correctly checks for the presence of pipeline_id in the metadata. The implementation is clean and serves its purpose well.
python/llama-index-server/llama_index/server/api/callbacks/source_nodes.py (4)
1-9: LGTM! Clean imports and logger setup.
The imports are well-organized and the logger setup follows best practices.
17-23: Good refactor addressing previous mutable default argument concerns.
The constructor now properly handles the deprecated tool_name parameter with appropriate warnings, addressing the previous code review feedback about mutable defaults.
42-48: Excellent simplification of the run method logic.
The refactored method is much cleaner and more maintainable than the previous implementation. The logic properly handles the event flow and source node extraction.
24-40: 🛠️ Refactor suggestion
Add null safety check for raw_output.
The type checking logic is excellent and addresses previous concerns about brittle suffix checking. However, consider adding a null check for raw_output to prevent potential AttributeError.
 def _get_source_nodes(self, event: ToolCallResult) -> Optional[List[NodeWithScore]]:
     # If result is not error
     if event.tool_output.is_error:
         return None
     # If result is not error, check if source nodes are in the tool output
     raw_output = event.tool_output.raw_output
+    if raw_output is None:
+        return None
     if hasattr(raw_output, "source_nodes"):
Likely an incorrect or invalid review comment.
…TH environment variable
…ove unnecessary build-system configuration from pyproject.toml
Actionable comments posted: 2
🧹 Nitpick comments (1)
packages/create-llama/helpers/python.ts (1)
8-8: Consider using the existing isCI definition for consistency.
There's already an isCI definition in packages/create-llama/questions/index.ts that includes additional logic for Playwright tests. Consider importing from there instead of directly from "ci-info" to maintain consistency across the codebase.
-import { isCI } from "ci-info";
+import { isCI } from "../questions";
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- .github/workflows/e2e.yml (2 hunks)
- packages/create-llama/helpers/python.ts (2 hunks)
- python/llama-index-server/llama_index/server/tools/index/citation.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- python/llama-index-server/llama_index/server/tools/index/citation.py
🧰 Additional context used
🧬 Code Graph Analysis (1)
packages/create-llama/helpers/python.ts (1)
- packages/create-llama/questions/index.ts (1): isCI (8-8)
⏰ Context from checks skipped due to timeout of 90000ms (29)
🔇 Additional comments (1)
.github/workflows/e2e.yml (1)
85-85: LGTM! Environment variable properly passed to tests.
The SERVER_PACKAGE_PATH environment variable is correctly passed to the Playwright tests, enabling the CI-specific dependency handling implemented in the Python helper.
Summary by CodeRabbit
New Features
Improvements
Bug Fixes
Chores
- Bumped @llamaindex/chat-ui to 0.4.6.