feat: bump chat-ui with inline artifact #675
Conversation
🦋 Changeset detected. Latest commit: 5c1de3c. The changes in this PR will be included in the next version bump. This PR includes changesets to release 4 packages.
Walkthrough

This change introduces inline artifact annotation support across both the JavaScript/TypeScript and Python codebases. It refactors artifact event handling to use shared imports, modularizes annotation extraction and serialization, and updates chat workflows to process artifacts as inline markdown annotations. New utility modules are added for parsing and generating these annotations.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant ChatUI
    participant Server
    participant Workflow
    participant ArtifactTransform
    User->>ChatUI: Sends message
    ChatUI->>Server: Forwards message
    Server->>Workflow: Processes workflow events
    Workflow-->>Server: Emits artifactEvent
    Server->>ArtifactTransform: artifactEvent
    ArtifactTransform-->>Server: AgentStream event with inline annotation
    Server->>ChatUI: Sends agent stream (includes inline artifact annotation)
    ChatUI->>User: Renders chat message with inline artifact
```
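The flow above boils down to wrapping the artifact payload in an `annotation`-keyed fenced code block inside the streamed markdown. A minimal Python sketch of that idea (the names `to_inline_annotation` and `artifact_to_stream_delta` mirror the utilities this PR adds, but this is an illustration, not the shipped implementation):

```python
import json

INLINE_ANNOTATION_KEY = "annotation"  # language key used for annotation code blocks


def to_inline_annotation(item: dict) -> str:
    """Wrap a JSON payload in an `annotation` fenced code block for inline markdown."""
    return f"\n```{INLINE_ANNOTATION_KEY}\n{json.dumps(item)}\n```\n"


def artifact_to_stream_delta(artifact: dict) -> str:
    """Render an artifact event as an agent-stream text delta carrying the annotation."""
    return to_inline_annotation({"type": "artifact", "data": artifact})


delta = artifact_to_stream_delta({"type": "code", "data": {"file_name": "main.py"}})
print(delta)
```

Because the annotation travels inside the message text itself, the client only needs a markdown-level parser to find and render artifacts inline, rather than a separate event channel.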
Resolved review threads (outdated):
- ...es/create-llama/templates/components/use-cases/typescript/code_generator/src/app/workflow.ts
- ...es/create-llama/templates/components/use-cases/typescript/code_generator/src/app/workflow.ts
Actionable comments posted: 2
♻️ Duplicate comments (1)

packages/create-llama/templates/components/use-cases/typescript/code_generator/src/app/workflow.ts (1)

1-1: Import change aligns with centralization effort.

The import of `artifactEvent` from `@llamaindex/server` centralizes artifact event handling, which is consistent with the broader refactoring in this PR. However, note the previous review comment about maintaining the existing artifact event API externally while handling conversion internally.
🧹 Nitpick comments (5)

python/llama-index-server/llama_index/server/api/callbacks/artifact_transform.py (1)

1-38: Add module docstring and method documentation.

The implementation correctly transforms artifact events to agent streams with inline annotations. Consider adding documentation to improve code clarity.

```diff
+"""
+Callback for transforming ArtifactEvent instances into AgentStream objects
+with inline annotation format.
+"""
 import logging
 from typing import Any
```

🧰 Tools
🪛 Pylint (3.3.7)

- [convention] 1-1: Missing module docstring (C0114)
- [error] 4-4: Unable to import 'llama_index.core.agent.workflow.workflow_events' (E0401)
- [error] 5-5: Unable to import 'llama_index.server.api.callbacks.base' (E0401)
- [error] 6-6: Unable to import 'llama_index.server.models.artifacts' (E0401)
- [error] 7-7: Unable to import 'llama_index.server.utils.inline' (E0401)
- [convention] 17-17: Missing function or method docstring (C0116)
- [convention] 36-36: Missing function or method docstring (C0116)
- [warning] 36-36: Unused argument 'args' (W0613)
- [warning] 36-36: Unused argument 'kwargs' (W0613)
python/llama-index-server/llama_index/server/utils/inline.py (2)

1-11: Add module docstring for better documentation.

Consider adding a module docstring to explain the purpose of inline annotation utilities.

```diff
+"""
+Utilities for parsing and generating inline annotations embedded in markdown content.
+
+Inline annotations are JSON objects wrapped in fenced code blocks with the 'annotation'
+language key, used to embed structured data within chat messages.
+"""
 import json
 import re
 from typing import Any, List
```

🧰 Tools
🪛 Pylint (3.3.7)

- [convention] 1-1: Missing module docstring (C0114)
- [error] 5-5: Unable to import 'pydantic' (E0401)
- [error] 7-7: Unable to import 'llama_index.server.models.chat' (E0401)
22-24: Consider reformatting the regex pattern.

The regex pattern is correct, but splitting the arguments across lines could improve readability and avoid potential escaping issues.

```diff
-    annotation_regex = re.compile(
-        rf"```{re.escape(INLINE_ANNOTATION_KEY)}\s*\n([\s\S]*?)\n```", re.MULTILINE
-    )
+    annotation_regex = re.compile(
+        rf"```{re.escape(INLINE_ANNOTATION_KEY)}\s*\n([\s\S]*?)\n```",
+        re.MULTILINE,
+    )
```

packages/server/src/utils/inline.ts (2)
6-11: Consider making the annotation schema more restrictive.

The current schema uses `z.any()` for the data field, which provides no validation on the actual annotation content. This could lead to runtime errors when processing annotations downstream. Consider defining specific schemas for known annotation types (like artifacts) or at least ensuring the data field is an object:

```diff
 export const AnnotationSchema = z.object({
   type: z.string(),
-  data: z.any(),
+  data: z.record(z.unknown()), // At least ensure it's an object
 });
```
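The same tightening can be expressed on the Python side with plain `isinstance` checks (a hypothetical standalone sketch; the actual server validates with Pydantic models, per the imports Pylint flags below):

```python
import json


def parse_annotation(raw: str) -> dict:
    """Parse one annotation JSON blob, requiring an object-typed `data` field.

    Mirrors the stricter schema suggested above (z.record(z.unknown())):
    rejecting non-object `data` up front avoids type errors downstream.
    """
    obj = json.loads(raw)
    if not isinstance(obj, dict) or not isinstance(obj.get("type"), str):
        raise ValueError("annotation must be an object with a string 'type'")
    if not isinstance(obj.get("data"), dict):
        raise ValueError("annotation 'data' must be an object")
    return obj


print(parse_annotation('{"type": "artifact", "data": {"file": "main.py"}}'))
```

The point of validating at the boundary is the same in both languages: a malformed annotation fails loudly at parse time instead of surfacing later as a confusing attribute error.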
63-65: Add error handling for JSON serialization.

The function assumes the input object is JSON-serializable, but `JSON.stringify` can throw for circular references or non-serializable values.

```diff
 export function toInlineAnnotation(item: object) {
+  try {
     return `\n\`\`\`${INLINE_ANNOTATION_KEY}\n${JSON.stringify(item)}\n\`\`\`\n`;
+  } catch (error) {
+    console.error("Failed to serialize annotation:", error);
+    return "";
+  }
 }
```
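The same hazard exists in the Python utilities: `json.dumps` raises `ValueError` on circular references and `TypeError` on unserializable values. A guarded sketch of the equivalent defensive wrapper (illustrative, not the shipped code):

```python
import json


def to_inline_annotation_safe(item: object) -> str:
    """Serialize to an annotation block, degrading to "" on unserializable input."""
    try:
        payload = json.dumps(item)
    except (TypeError, ValueError):
        # ValueError: circular reference; TypeError: unsupported type (e.g. a set)
        return ""
    return f"\n```annotation\n{payload}\n```\n"


circular = {}
circular["self"] = circular
print(repr(to_inline_annotation_safe(circular)))
```

Returning an empty string keeps the stream well-formed: the worst case is a missing artifact card, not a broken chat response.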
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

⛔ Files ignored due to path filters (1)

- pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml

📒 Files selected for processing (16)

- .changeset/thick-turtles-deny.md (1 hunks)
- packages/create-llama/templates/components/use-cases/typescript/code_generator/src/app/workflow.ts (1 hunks)
- packages/create-llama/templates/components/use-cases/typescript/document_generator/src/app/workflow.ts (1 hunks)
- packages/server/next/app/components/ui/chat/chat-message-content.tsx (0 hunks)
- packages/server/package.json (1 hunks)
- packages/server/project-config/package.json (1 hunks)
- packages/server/src/index.ts (1 hunks)
- packages/server/src/utils/events.ts (3 hunks)
- packages/server/src/utils/index.ts (1 hunks)
- packages/server/src/utils/inline.ts (1 hunks)
- packages/server/src/utils/workflow.ts (3 hunks)
- python/llama-index-server/llama_index/server/api/callbacks/__init__.py (2 hunks)
- python/llama-index-server/llama_index/server/api/callbacks/artifact_transform.py (1 hunks)
- python/llama-index-server/llama_index/server/api/routers/chat.py (2 hunks)
- python/llama-index-server/llama_index/server/models/artifacts.py (2 hunks)
- python/llama-index-server/llama_index/server/utils/inline.py (1 hunks)
💤 Files with no reviewable changes (1)
- packages/server/next/app/components/ui/chat/chat-message-content.tsx
🧰 Additional context used

🧬 Code Graph Analysis (5)

packages/server/src/utils/workflow.ts (2)
- packages/server/src/utils/events.ts (1): artifactEvent (121-124)
- packages/server/src/utils/inline.ts (1): toInlineAnnotation (63-65)

python/llama-index-server/llama_index/server/api/routers/chat.py (1)
- python/llama-index-server/llama_index/server/api/callbacks/artifact_transform.py (1): ArtifactTransform (12-37)

python/llama-index-server/llama_index/server/models/artifacts.py (1)
- python/llama-index-server/llama_index/server/utils/inline.py (1): get_inline_annotations (14-50)

python/llama-index-server/llama_index/server/utils/inline.py (2)
- python/llama-index-server/llama_index/server/api/routers/chat.py (1): chat (42-96)
- python/llama-index-server/llama_index/server/models/chat.py (1): ChatAPIMessage (9-26)

packages/server/src/utils/events.ts (1)
- packages/server/src/utils/inline.ts (1): getInlineAnnotations (13-49)
🪛 Pylint (3.3.7)

python/llama-index-server/llama_index/server/api/callbacks/__init__.py
- [error] 2-2: Unable to import 'llama_index.server.api.callbacks.artifact_transform' (E0401)

python/llama-index-server/llama_index/server/api/callbacks/artifact_transform.py
- [convention] 1-1: Missing module docstring (C0114)
- [error] 4-4: Unable to import 'llama_index.core.agent.workflow.workflow_events' (E0401)
- [error] 5-5: Unable to import 'llama_index.server.api.callbacks.base' (E0401)
- [error] 6-6: Unable to import 'llama_index.server.models.artifacts' (E0401)
- [error] 7-7: Unable to import 'llama_index.server.utils.inline' (E0401)
- [convention] 17-17: Missing function or method docstring (C0116)
- [convention] 36-36: Missing function or method docstring (C0116)
- [warning] 36-36: Unused argument 'args' (W0613)
- [warning] 36-36: Unused argument 'kwargs' (W0613)

python/llama-index-server/llama_index/server/utils/inline.py
- [convention] 55-55: Line too long (117/100) (C0301)
- [convention] 1-1: Missing module docstring (C0114)
- [error] 5-5: Unable to import 'pydantic' (E0401)
- [error] 7-7: Unable to import 'llama_index.server.models.chat' (E0401)
⏰ Context from checks skipped due to timeout of 90000ms (47)
- GitHub Check: build
- GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
- GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
- GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
- GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
- GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --no-files, streaming)
- GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
- GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, streaming)
- GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
- GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, streaming)
- GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
- GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, streaming)
- GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
- GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, streaming)
- GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
- GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, streaming)
- GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --no-files, streaming)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud, llamaindexserver)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud, streaming)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, llamaindexserver)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, streaming)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files, streaming)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, streaming)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, llamaindexserver)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, streaming)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, streaming)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, llamaindexserver)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, streaming)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, streaming)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, llamaindexserver)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, streaming)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, streaming)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud, llamaindexserver)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud, streaming)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, streaming)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, llamaindexserver)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, streaming)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files, streaming)
🔇 Additional comments (23)

packages/server/project-config/package.json (1)

44-44: Routine dependency bump approved.

Updating `@llamaindex/chat-ui` from 0.4.9 to 0.5.2 aligns with the overall patch release.

packages/server/package.json (1)

68-68: Routine dependency bump approved.

Synchronizing `@llamaindex/chat-ui` to version 0.5.2 matches the client side and other related packages.

packages/server/src/index.ts (1)

5-5: Export addition looks good.

Re-exporting `./utils/inline` exposes the new inline annotation utilities as intended. Ensure that downstream consumers import from the package's root.

.changeset/thick-turtles-deny.md (1)

1-8: Changeset metadata is correct.

The bump description and targeted packages align with the PR objectives.

packages/server/src/utils/index.ts (1)

4-4: LGTM! Clean module export addition.

The export statement follows the existing pattern and properly exposes the new inline annotation utilities.
packages/create-llama/templates/components/use-cases/typescript/document_generator/src/app/workflow.ts (1)

1-1: ✅ Verification successful

Good refactoring to use shared artifact utilities.

Centralizing the `artifactEvent` and `extractLastArtifact` imports from the shared package improves consistency across use cases. Verify that the shared implementations match the expected interfaces:

🏁 Script executed:

```shell
#!/bin/bash
# Verify artifactEvent and extractLastArtifact usage patterns
rg -A 3 -B 3 "artifactEvent\.with|extractLastArtifact"
```

Length of output: 6586

Artifact utilities verified and approved.

Imported `artifactEvent` and `extractLastArtifact` from `@llamaindex/server` correctly replace the local versions, and their overloads support both `code` and `document` artifact types as used in your workflows. No further changes are needed.

packages/server/src/utils/workflow.ts (4)
2-2: LGTM! Necessary import for artifact stream transformation.

The `agentStreamEvent` import is correctly added to support the new artifact event handling logic.

19-19: Good consolidation using shared artifact event.

Using the shared `artifactEvent` import maintains consistency with other components.

26-26: Appropriate import for inline annotation functionality.

The `toInlineAnnotation` import is necessary for the artifact-to-stream transformation.

80-89: Well-implemented artifact event transformation.

The logic correctly transforms artifact events into agent stream events with inline annotation format. The implementation:

- Properly checks for `artifactEvent` using the include method
- Uses `toInlineAnnotation` to format the artifact data
- Sets appropriate fields for the `agentStreamEvent`
- Preserves the original artifact data in the `raw` field

This aligns well with the PR objectives for inline artifact support.
python/llama-index-server/llama_index/server/api/routers/chat.py (2)
20-20: LGTM! Proper import for artifact transformation callback.

The `ArtifactTransform` import is correctly added to support inline artifact processing in the chat pipeline.

76-76: Good integration of artifact transformation callback.

The `ArtifactTransform()` callback is properly added to the processing pipeline. Based on the implementation, it will transform `ArtifactEvent` instances into `AgentStream` objects with inline annotation format, maintaining consistency with the TypeScript side changes.

python/llama-index-server/llama_index/server/models/artifacts.py (2)
8-8: Good refactoring to centralize annotation extraction.

The import of `get_inline_annotations` supports the broader refactoring to standardize inline annotation handling across the codebase.

37-39: Maintains existing logic while leveraging centralized utility.

The refactoring preserves the original artifact parsing behavior while delegating annotation extraction to the new utility function. This improves code maintainability and consistency across the system.
python/llama-index-server/llama_index/server/api/callbacks/artifact_transform.py (2)
17-33: Clean transformation logic with proper type checking.

The implementation correctly checks for `ArtifactEvent` instances and transforms them to `AgentStream` objects with inline annotation format. The fallback to return the original event is appropriate.

🧰 Tools
🪛 Pylint (3.3.7)
- [convention] 17-17: Missing function or method docstring (C0116)

36-37: Unused parameters are acceptable for interface compatibility.

The unused `args` and `kwargs` parameters in `from_default` are likely required for interface compatibility with the base class. This is a common pattern in callback frameworks.

🧰 Tools
🪛 Pylint (3.3.7)
- [convention] 36-36: Missing function or method docstring (C0116)
- [warning] 36-36: Unused argument 'args' (W0613)
- [warning] 36-36: Unused argument 'kwargs' (W0613)
python/llama-index-server/llama_index/server/utils/inline.py (2)
14-50: Robust annotation extraction with proper error handling.

The regex pattern correctly matches annotation code blocks, and the JSON parsing with validation ensures only well-formed annotations are extracted. The error handling appropriately skips malformed annotations while logging issues.

53-66: Clear documentation and correct implementation for annotation serialization.

The function correctly formats annotations as markdown code blocks. The docstring provides helpful examples and explains the format clearly.

🧰 Tools
🪛 Pylint (3.3.7)
- [convention] 55-55: Line too long (117/100) (C0301)
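The extraction behavior praised here, regex matching plus skip-on-malformed-JSON, can be sketched as a standalone approximation of `get_inline_annotations` (the regex mirrors the one quoted in the review above; this is illustrative, not the module itself):

```python
import json
import re

INLINE_ANNOTATION_KEY = "annotation"

# Matches fenced code blocks whose language key is `annotation`
_annotation_re = re.compile(
    rf"```{re.escape(INLINE_ANNOTATION_KEY)}\s*\n([\s\S]*?)\n```", re.MULTILINE
)


def get_inline_annotations_sketch(content: str) -> list:
    """Extract well-formed annotation objects; skip blocks with invalid JSON."""
    results = []
    for block in _annotation_re.findall(content):
        try:
            obj = json.loads(block)
        except json.JSONDecodeError:
            continue  # malformed annotation: skip it rather than fail the message
        if isinstance(obj, dict) and "type" in obj and "data" in obj:
            results.append(obj)
    return results


md = (
    "Here is the result:\n"
    '```annotation\n{"type": "artifact", "data": {"file": "main.py"}}\n```\n'
    "and a broken one:\n"
    "```annotation\nnot json\n```\n"
)
print(get_inline_annotations_sketch(md))
```

Skipping rather than raising matters because the annotation blocks arrive embedded in model-generated markdown, which cannot be trusted to be well-formed.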
packages/server/src/utils/inline.ts (1)

20-23: The regex pattern handles basic cases well.

The regex correctly matches annotation code blocks, but consider potential edge cases:

- Nested backticks within the JSON content could break parsing
- The pattern assumes newlines around the JSON content

The current pattern should work for standard use cases.
packages/server/src/utils/events.ts (4)
3-10
: LGTM on the import updates.The new imports correctly support the refactored artifact extraction logic using inline annotations.
156-164
: Well-structured artifact extraction function.The function correctly uses the new inline annotation utilities and employs proper type safety with schema validation. The filtering approach is clean and maintainable.
166-172
: Clean implementation with good sorting behavior.The function efficiently processes multiple messages and maintains chronological ordering of artifacts. The flatMap approach is appropriate for typical chat message volumes.
193-196
: Excellent refactoring for improved modularity.The updated function properly delegates to the new specialized extraction functions while maintaining the same API. This improves code reusability and maintainability.
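The flatMap-then-sort behavior described above can be approximated in Python with a flattening comprehension (the `annotations` and `created_at` field names are assumptions for this illustration, not the actual schema):

```python
def extract_artifacts(messages: list) -> list:
    """Collect artifact annotations from all messages, oldest first."""
    artifacts = [
        ann["data"]
        for msg in messages          # flatMap: one pass over every message...
        for ann in msg.get("annotations", [])  # ...and every annotation in it
        if ann.get("type") == "artifact"
    ]
    return sorted(artifacts, key=lambda a: a.get("created_at", 0))


messages = [
    {"annotations": [{"type": "artifact", "data": {"created_at": 2, "name": "b"}}]},
    {"annotations": [{"type": "suggestion", "data": {}}]},
    {"annotations": [{"type": "artifact", "data": {"created_at": 1, "name": "a"}}]},
]
print(extract_artifacts(messages))
```

A linear scan plus sort is fine at chat-history scale; chronological ordering is what lets "extract the last artifact" fall out as simply taking the final element.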
Resolved review threads (outdated):
- python/llama-index-server/llama_index/server/api/callbacks/__init__.py
- python/llama-index-server/llama_index/server/api/callbacks/artifact_transform.py
- python/llama-index-server/llama_index/server/models/artifacts.py
Actionable comments posted: 1
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📒 Files selected for processing (2)

- packages/server/src/utils/inline.ts (1 hunks)
- python/llama-index-server/llama_index/server/models/artifacts.py (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/server/src/utils/inline.ts
🧰 Additional context used

🧬 Code Graph Analysis (1)

python/llama-index-server/llama_index/server/models/artifacts.py (1)
- python/llama-index-server/llama_index/server/utils/inline.py (1): get_inline_annotations (14-50)

🪛 Pylint (3.3.7)

python/llama-index-server/llama_index/server/models/artifacts.py
- [error] 8-8: Unable to import 'llama_index.server.utils.inline' (E0401)
- [convention] 8-8: Imports from package llama_index are not grouped (C0412)
⏰ Context from checks skipped due to timeout of 90000ms (44)
- GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
- GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
- GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
- GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
- GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --no-files, streaming)
- GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
- GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, streaming)
- GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
- GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, streaming)
- GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
- GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, streaming)
- GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
- GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, streaming)
- GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
- GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, streaming)
- GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --no-files, streaming)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud, llamaindexserver)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud, streaming)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, llamaindexserver)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, streaming)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files, streaming)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, llamaindexserver)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, streaming)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, streaming)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, llamaindexserver)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, streaming)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, streaming)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, llamaindexserver)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, streaming)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, streaming)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, streaming)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, llamaindexserver)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, streaming)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, streaming)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files, streaming)
🔇 Additional comments (1)

python/llama-index-server/llama_index/server/models/artifacts.py (1)

37-39: Good refactoring to centralize annotation extraction.

The change from direct `message.annotations` access to using `get_inline_annotations(message)` improves maintainability by centralizing the logic for extracting inline annotations from markdown content. This aligns with the broader refactoring described in the PR objectives. The error handling and return behavior are preserved, which maintains the method's contract.
Actionable comments posted: 1
🧹 Nitpick comments (5)

python/llama-index-server/llama_index/server/api/callbacks/artifact_transform.py (3)

8-8: Remove unused logger.

The logger is imported but never used in the class.

```diff
-logger = logging.getLogger("uvicorn")
```

16-19: Add method docstring for clarity.

Consider adding a docstring to explain the transformation behavior.

```diff
     async def run(self, event: Any) -> Any:
+        """Transform ArtifactEvent instances to inline annotation format."""
         if isinstance(event, ArtifactEvent):
```

🧰 Tools
🪛 Pylint (3.3.7)
- [convention] 16-16: Missing function or method docstring (C0116)

21-23: Simplify method signature to remove unused arguments.

The `args` and `kwargs` parameters are not used and can be removed for cleaner code.

```diff
     @classmethod
-    def from_default(cls, *args: Any, **kwargs: Any) -> "ArtifactTransform":
+    def from_default(cls) -> "ArtifactTransform":
+        """Create ArtifactTransform instance with default parameters."""
         return cls()
```

🧰 Tools
🪛 Pylint (3.3.7)
- [convention] 22-22: Missing function or method docstring (C0116)
- [warning] 22-22: Unused argument 'args' (W0613)
- [warning] 22-22: Unused argument 'kwargs' (W0613)
python/llama-index-server/llama_index/server/utils/inline.py (2)
1-1: Add module docstring for better documentation.

Consider adding a module docstring to document the purpose and functionality.

```diff
+"""
+Utility functions for handling inline annotations in chat messages.
+
+This module provides functionality to extract, validate, and serialize
+inline annotations embedded as JSON code blocks in markdown content.
+"""
 import json
```

🧰 Tools
🪛 Pylint (3.3.7)
- [convention] 1-1: Missing module docstring (C0114)

57-60: Fix line length in docstring.

The docstring exceeds the 100-character line limit.

```diff
 """
-To append inline annotations to the stream, we need to wrap the annotation in a code block with the language key.
-The language key is `annotation` and the code block is wrapped in backticks.
-The prefix `0:` ensures it will be treated as inline markdown. Example:
+To append inline annotations to the stream, we need to wrap the annotation
+in a code block with the language key. The language key is `annotation` and
+the code block is wrapped in backticks. The prefix `0:` ensures it will be
+treated as inline markdown. Example:
```

🧰 Tools
🪛 Pylint (3.3.7)
- [convention] 57-57: Line too long (117/100) (C0301)
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📒 Files selected for processing (4)

- packages/server/src/utils/inline.ts (1 hunks)
- packages/server/src/utils/workflow.ts (2 hunks)
- python/llama-index-server/llama_index/server/api/callbacks/artifact_transform.py (1 hunks)
- python/llama-index-server/llama_index/server/utils/inline.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/server/src/utils/workflow.ts
🧰 Additional context used

🪛 Pylint (3.3.7)

python/llama-index-server/llama_index/server/api/callbacks/artifact_transform.py
- [convention] 1-1: Missing module docstring (C0114)
- [error] 4-4: Unable to import 'llama_index.server.api.callbacks.base' (E0401)
- [error] 5-5: Unable to import 'llama_index.server.models.artifacts' (E0401)
- [error] 6-6: Unable to import 'llama_index.server.utils.inline' (E0401)
- [convention] 16-16: Missing function or method docstring (C0116)
- [convention] 22-22: Missing function or method docstring (C0116)
- [warning] 22-22: Unused argument 'args' (W0613)
- [warning] 22-22: Unused argument 'kwargs' (W0613)

python/llama-index-server/llama_index/server/utils/inline.py
- [convention] 57-57: Line too long (117/100) (C0301)
- [convention] 1-1: Missing module docstring (C0114)
- [error] 5-5: Unable to import 'pydantic' (E0401)
- [error] 7-7: Unable to import 'llama_index.core.workflow.events' (E0401)
- [error] 8-8: Unable to import 'llama_index.server.models.chat' (E0401)
- [error] 9-9: Unable to import 'llama_index.core.agent.workflow.workflow_events' (E0401)
⏰ Context from checks skipped due to timeout of 90000ms (40)
- GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
- GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
- GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
- GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
- GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
- GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, streaming)
- GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
- GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, streaming)
- GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
- GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, streaming)
- GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
- GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, streaming)
- GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
- GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, streaming)
- GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --no-files, streaming)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, llamaindexserver)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, streaming)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, streaming)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, llamaindexserver)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, streaming)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, streaming)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, llamaindexserver)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, streaming)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, streaming)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, llamaindexserver)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, streaming)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, streaming)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, streaming)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, llamaindexserver)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files, streaming)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, streaming)
🔇 Additional comments (1)
packages/server/src/utils/inline.ts (1)
Lines 1-92: LGTM! Well-structured inline annotation utility. The implementation is clean and handles the key requirements effectively:
- Proper schema validation with zod
- Robust regex pattern for annotation extraction
- Good error handling for malformed JSON
- Content concatenation fix from previous review is correctly implemented
python/llama-index-server/llama_index/server/api/callbacks/artifact_transform.py (2 outdated review threads, resolved)
Actionable comments posted: 0
♻️ Duplicate comments (1)
python/llama-index-server/llama_index/server/utils/inline.py (1)
Line 50: Use proper logging instead of a print statement. This issue was already identified in previous reviews but hasn't been addressed yet.
🧹 Nitpick comments (2)
python/llama-index-server/llama_index/server/utils/inline.py (2)
Lines 1-13: Add module docstring. The module is missing a docstring explaining its purpose and functionality.

````diff
+"""
+Utilities for handling inline annotations embedded as fenced code blocks in markdown content.
+
+This module provides functions to extract, serialize, and convert inline annotations
+that are embedded as JSON content within ```annotation code blocks in chat messages.
+"""
 import json
 import re
 from typing import Any, List
````

🧰 Tools
🪛 Pylint (3.3.7)
[convention] 1-1: Missing module docstring
(C0114)
[error] 5-5: Unable to import 'pydantic'
(E0401)
[error] 7-7: Unable to import 'llama_index.core.workflow.events'
(E0401)
[error] 8-8: Unable to import 'llama_index.server.models.chat'
(E0401)
[error] 9-9: Unable to import 'llama_index.core.agent.workflow.workflow_events'
(E0401)
Line 57: Fix line length issue. The line exceeds the 100-character limit.

```diff
-To append inline annotations to the stream, we need to wrap the annotation in a code block with the language key.
+To append inline annotations to the stream, we need to wrap the annotation
+in a code block with the language key.
```

🧰 Tools
🪛 Pylint (3.3.7)
[convention] 57-57: Line too long (117/100)
(C0301)
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- packages/server/src/utils/inline.ts (1 hunks)
- python/llama-index-server/llama_index/server/utils/inline.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/server/src/utils/inline.ts
🧰 Additional context used
🧬 Code Graph Analysis (1)
python/llama-index-server/llama_index/server/utils/inline.py (2)
- python/llama-index-server/llama_index/server/api/routers/chat.py (1)
  - chat (42-96)
- python/llama-index-server/llama_index/server/models/chat.py (1)
  - ChatAPIMessage (9-26)
🪛 Pylint (3.3.7)
python/llama-index-server/llama_index/server/utils/inline.py
[convention] 57-57: Line too long (117/100)
(C0301)
[convention] 1-1: Missing module docstring
(C0114)
[error] 5-5: Unable to import 'pydantic'
(E0401)
[error] 7-7: Unable to import 'llama_index.core.workflow.events'
(E0401)
[error] 8-8: Unable to import 'llama_index.server.models.chat'
(E0401)
[error] 9-9: Unable to import 'llama_index.core.agent.workflow.workflow_events'
(E0401)
🔇 Additional comments (3)
python/llama-index-server/llama_index/server/utils/inline.py (3)
Lines 16-52: LGTM! Solid implementation for annotation extraction. The function correctly:
- Uses appropriate regex pattern to match annotation code blocks
- Validates JSON structure and required fields
- Handles edge cases and malformed content gracefully
- Returns a clean list of parsed annotations
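The extraction behavior described above can be sketched in plain Python. The regex, field names, and function signature here are illustrative assumptions based on the review, not the library's actual code:

```python
import json
import re

# Hypothetical sketch: pull annotation payloads out of fenced
# ```annotation code blocks embedded in a chat message's markdown.
ANNOTATION_PATTERN = re.compile(r"```annotation\s*\n(.*?)\n```", re.DOTALL)

def get_inline_annotations(content: str) -> list:
    """Return parsed payloads from annotation code blocks in `content`."""
    annotations = []
    for match in ANNOTATION_PATTERN.finditer(content):
        try:
            payload = json.loads(match.group(1))
        except json.JSONDecodeError:
            continue  # skip malformed JSON instead of failing the message
        # require the fields the chat UI is assumed to expect
        if isinstance(payload, dict) and "type" in payload and "data" in payload:
            annotations.append(payload)
    return annotations
```

Malformed blocks are dropped silently here; the reviewed implementation handles the same edge cases gracefully, per the comment above.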
Lines 55-67: LGTM! Clean serialization function. The function correctly formats annotations as markdown code blocks with proper JSON serialization and clear documentation.
🧰 Tools
🪛 Pylint (3.3.7)
[convention] 57-57: Line too long (117/100)
(C0301)
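The serialization direction is the inverse of extraction; a minimal sketch, assuming the chat UI treats a fenced block with the `annotation` language key as an inline annotation:

```python
import json

def to_inline_annotation(item: dict) -> str:
    """Wrap an annotation payload in a fenced code block whose language
    key is "annotation", so the UI renders it inline rather than as code.
    Sketch only; the real helper may differ in naming and formatting."""
    return f"```annotation\n{json.dumps(item)}\n```\n"
```

A payload round-trips: wrapping with this helper and re-parsing the JSON on line two of the output yields the original dict.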
Lines 70-81: LGTM! Proper event conversion implementation. The function correctly:
- Serializes the event using model_dump()
- Uses the inline annotation helper for consistent formatting
- Creates AgentStream with appropriate field values
- Maintains the raw event data for reference
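The conversion steps listed above can be sketched with a dataclass stand-in. The real `AgentStream` is a pydantic model in llama_index and the artifact event would be serialized via `model_dump()`; the field names here follow the review's description but are assumptions:

```python
import json
from dataclasses import dataclass, field

# Illustrative stand-in for llama_index's AgentStream event.
@dataclass
class AgentStream:
    delta: str
    response: str = ""
    current_agent_name: str = "assistant"
    tool_calls: list = field(default_factory=list)
    raw: dict = field(default_factory=dict)

def to_inline_annotation_event(event_payload: dict) -> AgentStream:
    """Emit the serialized event as a text delta wrapped in an
    annotation code block, keeping the raw payload for reference."""
    inline = f"```annotation\n{json.dumps(event_payload)}\n```\n"
    return AgentStream(delta=inline, raw=event_payload)
```

Because the annotation rides in the `delta` text, it flows through the same streaming path as ordinary agent output.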
Actionable comments posted: 0
♻️ Duplicate comments (1)
python/llama-index-server/llama_index/server/api/callbacks/__init__.py (1)
Lines 2-4: Use a relative import to ensure module resolution. This follows the same import resolution issue flagged in previous reviews. Replace the absolute import with a relative import:

```diff
-from llama_index.server.api.callbacks.artifact_transform import (
-    InlineAnnotationTransformer,
-)
+from .artifact_transform import InlineAnnotationTransformer
```

🧰 Tools
🪛 Pylint (3.3.7)
[error] 2-4: Unable to import 'llama_index.server.api.callbacks.artifact_transform'
(E0401)
🧹 Nitpick comments (2)
python/llama-index-server/llama_index/server/api/callbacks/artifact_transform.py (2)
Lines 1-9: Add module docstring and remove unused logger. The module is missing a docstring, and the logger is defined but never used in the implementation.

```diff
+"""
+Callback transformer for converting artifact events to inline annotation format.
+"""
 import logging
 from typing import Any
 from llama_index.server.api.callbacks.base import EventCallback
 from llama_index.server.models.artifacts import ArtifactEvent
 from llama_index.server.utils.inline import to_inline_annotation_event
-logger = logging.getLogger("uvicorn")
```

🧰 Tools
🪛 Pylint (3.3.7)
[convention] 1-1: Missing module docstring
(C0114)
[error] 4-4: Unable to import 'llama_index.server.api.callbacks.base'
(E0401)
[error] 5-5: Unable to import 'llama_index.server.models.artifacts'
(E0401)
[error] 6-6: Unable to import 'llama_index.server.utils.inline'
(E0401)
Lines 22-24: Add docstring and remove unused parameters. The from_default method has unused parameters and lacks documentation.

```diff
 @classmethod
-def from_default(cls, *args: Any, **kwargs: Any) -> "InlineAnnotationTransformer":
+def from_default(cls) -> "InlineAnnotationTransformer":
+    """Create an instance with default configuration."""
     return cls()
```

🧰 Tools
🪛 Pylint (3.3.7)
[convention] 23-23: Missing function or method docstring
(C0116)
[warning] 23-23: Unused argument 'args'
(W0613)
[warning] 23-23: Unused argument 'kwargs'
(W0613)
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- python/llama-index-server/llama_index/server/api/callbacks/__init__.py (2 hunks)
- python/llama-index-server/llama_index/server/api/callbacks/artifact_transform.py (1 hunks)
- python/llama-index-server/llama_index/server/api/routers/chat.py (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- python/llama-index-server/llama_index/server/api/routers/chat.py
🧰 Additional context used
🪛 Pylint (3.3.7)
python/llama-index-server/llama_index/server/api/callbacks/artifact_transform.py
[convention] 1-1: Missing module docstring
(C0114)
[error] 4-4: Unable to import 'llama_index.server.api.callbacks.base'
(E0401)
[error] 5-5: Unable to import 'llama_index.server.models.artifacts'
(E0401)
[error] 6-6: Unable to import 'llama_index.server.utils.inline'
(E0401)
[convention] 16-16: Missing function or method docstring
(C0116)
[convention] 23-23: Missing function or method docstring
(C0116)
[warning] 23-23: Unused argument 'args'
(W0613)
[warning] 23-23: Unused argument 'kwargs'
(W0613)
python/llama-index-server/llama_index/server/api/callbacks/__init__.py
[error] 2-4: Unable to import 'llama_index.server.api.callbacks.artifact_transform'
(E0401)
🔇 Additional comments (2)
python/llama-index-server/llama_index/server/api/callbacks/__init__.py (1)
Line 18: LGTM! The addition of InlineAnnotationTransformer to the __all__ list correctly exports the new callback class.
python/llama-index-server/llama_index/server/api/callbacks/artifact_transform.py (1)
Lines 16-21: LGTM! The transformation logic correctly handles ArtifactEvent instances and passes through other events unchanged. The implementation aligns with the PR objective of supporting inline artifact annotations.
🧰 Tools
🪛 Pylint (3.3.7)
[convention] 16-16: Missing function or method docstring
(C0116)
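The pass-through callback pattern praised above can be sketched as follows, using hypothetical stand-ins for EventCallback and ArtifactEvent; the real classes live in llama_index.server and differ in detail:

```python
import asyncio
import json

class ArtifactEvent:
    """Stand-in for llama_index.server.models.artifacts.ArtifactEvent."""
    def __init__(self, data: dict):
        self.data = data

class InlineAnnotationTransformer:
    """Rewrite ArtifactEvent into an inline annotation; pass others through."""

    async def run(self, event):
        if isinstance(event, ArtifactEvent):
            # in the real code this would delegate to the inline helper
            inline = f"```annotation\n{json.dumps(event.data)}\n```\n"
            return {"delta": inline}
        return event  # non-artifact events are unchanged
```

Keeping the transform isolated in a callback means the chat router only needs to register it; no workflow code has to know about the inline format.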
Actionable comments posted: 4
🧹 Nitpick comments (1)
packages/server/examples/code-gen/README.md (1)
Line 17: Add missing article for better grammar.

```diff
-Export OpenAI API key and start the server in dev mode.
+Export the OpenAI API key and start the server in dev mode.
```

🧰 Tools
🪛 LanguageTool
[uncategorized] ~17-~17: You might be missing the article “the” here.
Context: ..., port: 3000, }).start(); ``` Export OpenAI API key and start the server in dev mod...(AI_EN_LECTOR_MISSING_DETERMINER_THE)
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
- pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (4)
- packages/server/examples/code-gen/README.md (1 hunks)
- packages/server/examples/code-gen/components/ui_event.jsx (1 hunks)
- packages/server/examples/code-gen/index.ts (1 hunks)
- packages/server/examples/code-gen/src/app/workflow.ts (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- packages/server/examples/code-gen/index.ts
🧰 Additional context used
🧬 Code Graph Analysis (1)
packages/server/examples/code-gen/src/app/workflow.ts (1)
- packages/server/src/utils/events.ts (2)
  - extractLastArtifact (189-216)
  - artifactEvent (121-124)
🪛 LanguageTool
packages/server/examples/code-gen/README.md
[uncategorized] ~17-~17: You might be missing the article “the” here.
Context: ..., port: 3000, }).start(); ``` Export OpenAI API key and start the server in dev mod...
(AI_EN_LECTOR_MISSING_DETERMINER_THE)
🔇 Additional comments (1)
packages/server/examples/code-gen/components/ui_event.jsx (1)
Lines 1-132: Well-structured React component for workflow UI. The component demonstrates good React practices with proper state management, conditional rendering, and clear separation of concerns through the STAGE_META configuration.
Summary by CodeRabbit
New Features
Bug Fixes
Refactor
Chores