feat: bump chat-ui with inline artifact #675

Merged: 30 commits, Jun 5, 2025

Conversation

@thucpn (Collaborator) commented Jun 3, 2025

Summary by CodeRabbit

  • New Features

    • Enhanced chat interface to support inline artifact annotations within messages.
    • Added artifact transformation callback to improve event handling in chat workflows.
    • Introduced a new multi-step code generation workflow with UI progress indicators and artifact event streaming.
    • Provided example server setup and UI components demonstrating the new code generation workflow.
  • Bug Fixes

    • Improved extraction and handling of inline annotations in chat messages for more reliable artifact display.
  • Refactor

    • Streamlined annotation extraction and transformation logic for better maintainability and consistency.
    • Centralized artifact event handling by importing shared event definitions.
  • Chores

    • Updated dependencies for improved chat UI performance and stability.

@thucpn thucpn requested a review from marcusschiesser June 3, 2025 03:29

changeset-bot bot commented Jun 3, 2025

🦋 Changeset detected

Latest commit: 5c1de3c

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 4 packages
| Name | Type |
| --- | --- |
| create-llama | Patch |
| @llamaindex/server | Patch |
| @create-llama/llama-index-server | Patch |
| llamaindex-server-examples | Patch |



coderabbitai bot commented Jun 3, 2025

Walkthrough

This change introduces inline artifact annotation support across both JavaScript/TypeScript and Python codebases. It refactors artifact event handling to use shared imports, modularizes annotation extraction and serialization, and updates chat workflows to process artifacts as inline markdown annotations. New utility modules are added for parsing and generating these annotations.
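
To make the format concrete: an inline annotation is a JSON object embedded directly in message text as a fenced code block whose language key is `annotation`. A minimal TypeScript sketch of the serialization side, mirroring the `toInlineAnnotation` helper added in this PR (the artifact payload shown is an illustrative assumption):

```ts
const INLINE_ANNOTATION_KEY = "annotation";

// Wrap any JSON-serializable object in an annotation-fenced code block so the
// chat UI can detect and render it inline within the streamed markdown.
function toInlineAnnotation(item: object): string {
  return `\n\`\`\`${INLINE_ANNOTATION_KEY}\n${JSON.stringify(item)}\n\`\`\`\n`;
}

// Example: embedding a (hypothetical) code artifact in assistant output.
const messageContent =
  "Here is the generated file:" +
  toInlineAnnotation({
    type: "artifact",
    data: { type: "code", created_at: Date.now() },
  });
```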

Changes

| File(s) | Change Summary |
| --- | --- |
| .changeset/thick-turtles-deny.md | Adds a changeset describing a feature bump for chat UI with inline artifact support. |
| packages/create-llama/templates/components/use-cases/typescript/code_generator/src/app/workflow.ts | Replaces the local artifactEvent definition with an import from @llamaindex/server. |
| packages/create-llama/templates/components/use-cases/typescript/document_generator/src/app/workflow.ts | Same as above: imports artifactEvent from @llamaindex/server instead of a local definition. |
| packages/server/next/app/components/ui/chat/chat-message-content.tsx | Removes rendering of `<ChatMessage.Content.Artifact />` from chat message content. |
| packages/server/package.json, packages/server/project-config/package.json | Updates the @llamaindex/chat-ui dependency from 0.4.9 to 0.5.2. |
| packages/server/src/index.ts, packages/server/src/utils/index.ts | Adds exports for the new ./utils/inline module. |
| packages/server/src/utils/events.ts | Refactors artifact extraction: introduces extractArtifactsFromMessage and extractArtifactsFromAllMessages, updates extractLastArtifact. |
| packages/server/src/utils/inline.ts | Adds a new utility for extracting and serializing inline annotations in markdown chat messages. |
| packages/server/src/utils/workflow.ts | Adds handling for artifactEvent by converting it to an inline annotation and emitting it as an agent stream event. |
| python/llama-index-server/llama_index/server/api/callbacks/__init__.py | Exports the new InlineAnnotationTransformer callback. |
| python/llama-index-server/llama_index/server/api/callbacks/artifact_transform.py | Adds the InlineAnnotationTransformer callback to transform artifact events into agent stream events with inline annotations. |
| python/llama-index-server/llama_index/server/api/routers/chat.py | Adds InlineAnnotationTransformer to the chat endpoint's callback processing pipeline. |
| python/llama-index-server/llama_index/server/models/artifacts.py | Refactors annotation extraction in Artifact.from_message to use get_inline_annotations. |
| python/llama-index-server/llama_index/server/utils/inline.py | Adds a utility for extracting and serializing inline annotations in markdown chat messages. |
| packages/server/examples/code-gen/README.md | Adds a README with example usage of the code generation workflow using LlamaIndexServer. |
| packages/server/examples/code-gen/components/ui_event.jsx | Adds a React component rendering an artifact workflow progress card with animated states and markdown rendering. |
| packages/server/examples/code-gen/index.ts | Adds an example server setup script using the GPT-4o-mini model and LlamaIndexServer with workflow and UI config. |
| packages/server/examples/code-gen/src/app/workflow.ts | Adds a new multi-step code generation workflow with planning, code generation, and answer synthesis stages, emitting UI and artifact events. |

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant ChatUI
    participant Server
    participant Workflow
    participant ArtifactTransform

    User->>ChatUI: Sends message
    ChatUI->>Server: Forwards message
    Server->>Workflow: Processes workflow events
    Workflow-->>Server: Emits artifactEvent
    Server->>ArtifactTransform: artifactEvent
    ArtifactTransform-->>Server: AgentStream event with inline annotation
    Server->>ChatUI: Sends agent stream (includes inline artifact annotation)
    ChatUI->>User: Renders chat message with inline artifact

Possibly related PRs

  • run-llama/create-llama#575: Feature bump for chat UI in multiple packages including @llamaindex/server; related by chat UI version updates and artifact handling.
  • run-llama/create-llama#627: Refactors workflow factory functions and chat memory handling in TypeScript workflows, related to workflow changes here.
  • run-llama/create-llama#617: Splits artifact use cases into code and document generators and reorganizes artifact workflows; related to artifact event import refactoring.

Suggested reviewers

  • marcusschiesser

Poem

In the meadow where code blocks bloom,
Artifacts now dance in markdown's room.
Inline they hide, with JSON delight,
Chat flows stream them, morning to night.
A rabbit hops with annotation cheer—
Hooray for artifacts, now crystal clear!
🐇✨


@thucpn thucpn marked this pull request as ready for review June 4, 2025 05:22

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (1)
packages/create-llama/templates/components/use-cases/typescript/code_generator/src/app/workflow.ts (1)

1-1: Import change aligns with centralization effort.

The import of artifactEvent from @llamaindex/server centralizes artifact event handling, which is consistent with the broader refactoring in this PR. However, note the previous review comment about maintaining the existing artifact event API externally while handling conversion internally.
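
For reference, a hedged sketch of what the centralized usage looks like in a use-case workflow; the event payload shape and the `sendEvent` call are illustrative assumptions, not the template source:

```ts
import { artifactEvent, extractLastArtifact } from "@llamaindex/server";

// Inside a workflow step, templates now emit the shared event instead of a
// locally declared one, e.g.:
//
//   sendEvent(
//     artifactEvent.with({
//       type: "code",            // assumed payload shape
//       created_at: Date.now(),
//       data: { code: "..." },
//     }),
//   );
```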

🧹 Nitpick comments (5)
python/llama-index-server/llama_index/server/api/callbacks/artifact_transform.py (1)

1-38: Add module docstring and method documentation.

The implementation correctly transforms artifact events to agent streams with inline annotations. Consider adding documentation to improve code clarity.

+"""
+Callback for transforming ArtifactEvent instances into AgentStream objects with inline annotation format.
+"""
 import logging
 from typing import Any
🧰 Tools
🪛 Pylint (3.3.7)

[convention] 1-1: Missing module docstring

(C0114)


[error] 4-4: Unable to import 'llama_index.core.agent.workflow.workflow_events'

(E0401)


[error] 5-5: Unable to import 'llama_index.server.api.callbacks.base'

(E0401)


[error] 6-6: Unable to import 'llama_index.server.models.artifacts'

(E0401)


[error] 7-7: Unable to import 'llama_index.server.utils.inline'

(E0401)


[convention] 17-17: Missing function or method docstring

(C0116)


[convention] 36-36: Missing function or method docstring

(C0116)


[warning] 36-36: Unused argument 'args'

(W0613)


[warning] 36-36: Unused argument 'kwargs'

(W0613)

python/llama-index-server/llama_index/server/utils/inline.py (2)

1-11: Add module docstring for better documentation.

Consider adding a module docstring to explain the purpose of inline annotation utilities.

+"""
+Utilities for parsing and generating inline annotations embedded in markdown content.
+
+Inline annotations are JSON objects wrapped in fenced code blocks with the 'annotation' 
+language key, used to embed structured data within chat messages.
+"""
 import json
 import re
 from typing import Any, List
🧰 Tools
🪛 Pylint (3.3.7)

[convention] 1-1: Missing module docstring

(C0114)


[error] 5-5: Unable to import 'pydantic'

(E0401)


[error] 7-7: Unable to import 'llama_index.server.models.chat'

(E0401)


22-24: Consider splitting the re.compile arguments across lines.

The regex pattern itself is correct and already uses a raw f-string; the only improvement here is readability, by formatting the re.compile arguments one per line.

-    annotation_regex = re.compile(
-        rf"```{re.escape(INLINE_ANNOTATION_KEY)}\s*\n([\s\S]*?)\n```", re.MULTILINE
-    )
+    annotation_regex = re.compile(
+        rf"```{re.escape(INLINE_ANNOTATION_KEY)}\s*\n([\s\S]*?)\n```", 
+        re.MULTILINE
+    )
packages/server/src/utils/inline.ts (2)

6-11: Consider making the annotation schema more restrictive.

The current schema uses z.any() for the data field, which provides no validation on the actual annotation content. This could lead to runtime errors when processing annotations downstream.

Consider defining specific schemas for known annotation types (like artifacts) or at least ensuring the data field is an object:

export const AnnotationSchema = z.object({
  type: z.string(),
-  data: z.any(),
+  data: z.record(z.unknown()), // At least ensure it's an object
});
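
Going one step further than `z.record(z.unknown())`, known annotation types could get dedicated schemas. A hedged sketch (the artifact field names are assumptions):

```ts
import { z } from "zod";

// Validate artifact annotations precisely, while still accepting unknown
// annotation types as long as their data field is object-shaped.
const ArtifactAnnotationSchema = z.object({
  type: z.literal("artifact"),
  data: z.object({
    type: z.enum(["code", "document"]),
    created_at: z.number(),
    data: z.record(z.unknown()),
  }),
});

const GenericAnnotationSchema = z.object({
  type: z.string(),
  data: z.record(z.unknown()),
});

export const AnnotationSchema = z.union([
  ArtifactAnnotationSchema,
  GenericAnnotationSchema,
]);
```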

63-65: Add error handling for JSON serialization.

The function assumes the input object is JSON-serializable, but JSON.stringify can throw for circular references or non-serializable values.

export function toInlineAnnotation(item: object) {
+  try {
    return `\n\`\`\`${INLINE_ANNOTATION_KEY}\n${JSON.stringify(item)}\n\`\`\`\n`;
+  } catch (error) {
+    console.error("Failed to serialize annotation:", error);
+    return "";
+  }
}
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 13a967b and 980f212.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (16)
  • .changeset/thick-turtles-deny.md (1 hunks)
  • packages/create-llama/templates/components/use-cases/typescript/code_generator/src/app/workflow.ts (1 hunks)
  • packages/create-llama/templates/components/use-cases/typescript/document_generator/src/app/workflow.ts (1 hunks)
  • packages/server/next/app/components/ui/chat/chat-message-content.tsx (0 hunks)
  • packages/server/package.json (1 hunks)
  • packages/server/project-config/package.json (1 hunks)
  • packages/server/src/index.ts (1 hunks)
  • packages/server/src/utils/events.ts (3 hunks)
  • packages/server/src/utils/index.ts (1 hunks)
  • packages/server/src/utils/inline.ts (1 hunks)
  • packages/server/src/utils/workflow.ts (3 hunks)
  • python/llama-index-server/llama_index/server/api/callbacks/__init__.py (2 hunks)
  • python/llama-index-server/llama_index/server/api/callbacks/artifact_transform.py (1 hunks)
  • python/llama-index-server/llama_index/server/api/routers/chat.py (2 hunks)
  • python/llama-index-server/llama_index/server/models/artifacts.py (2 hunks)
  • python/llama-index-server/llama_index/server/utils/inline.py (1 hunks)
💤 Files with no reviewable changes (1)
  • packages/server/next/app/components/ui/chat/chat-message-content.tsx
🧰 Additional context used
🧬 Code Graph Analysis (5)
packages/server/src/utils/workflow.ts (2)
packages/server/src/utils/events.ts (1)
  • artifactEvent (121-124)
packages/server/src/utils/inline.ts (1)
  • toInlineAnnotation (63-65)
python/llama-index-server/llama_index/server/api/routers/chat.py (1)
python/llama-index-server/llama_index/server/api/callbacks/artifact_transform.py (1)
  • ArtifactTransform (12-37)
python/llama-index-server/llama_index/server/models/artifacts.py (1)
python/llama-index-server/llama_index/server/utils/inline.py (1)
  • get_inline_annotations (14-50)
python/llama-index-server/llama_index/server/utils/inline.py (2)
python/llama-index-server/llama_index/server/api/routers/chat.py (1)
  • chat (42-96)
python/llama-index-server/llama_index/server/models/chat.py (1)
  • ChatAPIMessage (9-26)
packages/server/src/utils/events.ts (1)
packages/server/src/utils/inline.ts (1)
  • getInlineAnnotations (13-49)
🪛 Pylint (3.3.7)
python/llama-index-server/llama_index/server/api/callbacks/__init__.py

[error] 2-2: Unable to import 'llama_index.server.api.callbacks.artifact_transform'

(E0401)

python/llama-index-server/llama_index/server/api/callbacks/artifact_transform.py

[convention] 1-1: Missing module docstring

(C0114)


[error] 4-4: Unable to import 'llama_index.core.agent.workflow.workflow_events'

(E0401)


[error] 5-5: Unable to import 'llama_index.server.api.callbacks.base'

(E0401)


[error] 6-6: Unable to import 'llama_index.server.models.artifacts'

(E0401)


[error] 7-7: Unable to import 'llama_index.server.utils.inline'

(E0401)


[convention] 17-17: Missing function or method docstring

(C0116)


[convention] 36-36: Missing function or method docstring

(C0116)


[warning] 36-36: Unused argument 'args'

(W0613)


[warning] 36-36: Unused argument 'kwargs'

(W0613)

python/llama-index-server/llama_index/server/utils/inline.py

[convention] 55-55: Line too long (117/100)

(C0301)


[convention] 1-1: Missing module docstring

(C0114)


[error] 5-5: Unable to import 'pydantic'

(E0401)


[error] 7-7: Unable to import 'llama_index.server.models.chat'

(E0401)

⏰ Context from checks skipped due to timeout of 90000ms (47)
  • GitHub Check: build
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --no-files, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, streaming)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files, streaming)
🔇 Additional comments (23)
packages/server/project-config/package.json (1)

44-44: Routine dependency bump approved.
Updating @llamaindex/chat-ui from 0.4.9 to 0.5.2 aligns with the overall patch release.

packages/server/package.json (1)

68-68: Routine dependency bump approved.
Synchronizing @llamaindex/chat-ui to version 0.5.2 matches the client side and other related packages.

packages/server/src/index.ts (1)

5-5: Export addition looks good.
Re-exporting ./utils/inline exposes the new inline annotation utilities as intended. Ensure that downstream consumers import from the package’s root.

.changeset/thick-turtles-deny.md (1)

1-8: Changeset metadata is correct.
The bump description and targeted packages align with the PR objectives.

packages/server/src/utils/index.ts (1)

4-4: LGTM! Clean module export addition.

The export statement follows the existing pattern and properly exposes the new inline annotation utilities.

packages/create-llama/templates/components/use-cases/typescript/document_generator/src/app/workflow.ts (1)

1-1: Good refactoring to use shared artifact utilities. (✅ Verification successful)

Centralizing the artifactEvent and extractLastArtifact imports from the shared package improves consistency across use cases.

Verify that the shared implementations match the expected interfaces:


🏁 Script executed:

#!/bin/bash
# Verify artifactEvent and extractLastArtifact usage patterns
rg -A 3 -B 3 "artifactEvent\.with|extractLastArtifact"

Length of output: 6586


Artifact utilities verified and approved

Imported artifactEvent and extractLastArtifact from @llamaindex/server correctly replace the local versions and their overloads support both code and document artifact types as used in your workflows. No further changes are needed.

packages/server/src/utils/workflow.ts (4)

2-2: LGTM! Necessary import for artifact stream transformation.

The agentStreamEvent import is correctly added to support the new artifact event handling logic.


19-19: Good consolidation using shared artifact event.

Using the shared artifactEvent import maintains consistency with other components.


26-26: Appropriate import for inline annotation functionality.

The toInlineAnnotation import is necessary for the artifact-to-stream transformation.


80-89: Well-implemented artifact event transformation.

The logic correctly transforms artifact events into agent stream events with inline annotation format. The implementation:

  • Properly checks for artifactEvent using the include method
  • Uses toInlineAnnotation to format the artifact data
  • Sets appropriate fields for the agentStreamEvent
  • Preserves the original artifact data in the raw field

This aligns well with the PR objectives for inline artifact support.
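
A hedged reconstruction of the handling these points describe; the import paths and the exact stream-event field names are assumptions based on this review, not the package source:

```ts
import { agentStreamEvent } from "@llamaindex/workflow"; // assumed import path
import { artifactEvent, toInlineAnnotation } from "@llamaindex/server";

function transformArtifactEvent(event: unknown) {
  // 1. Check for an artifact event using the include method.
  if (artifactEvent.include(event)) {
    // 2. Format the artifact data as an inline markdown annotation.
    const delta = toInlineAnnotation(event.data);
    // 3. Emit it as a stream chunk, preserving the original artifact in raw.
    return agentStreamEvent.with({
      delta,
      response: "",
      currentAgentName: "", // field names assumed from the AgentStream shape
      raw: event.data,
    });
  }
  return event; // non-artifact events pass through unchanged
}
```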

python/llama-index-server/llama_index/server/api/routers/chat.py (2)

20-20: LGTM! Proper import for artifact transformation callback.

The ArtifactTransform import is correctly added to support inline artifact processing in the chat pipeline.


76-76: Good integration of artifact transformation callback.

The ArtifactTransform() callback is properly added to the processing pipeline. Based on the implementation, it will transform ArtifactEvent instances into AgentStream objects with inline annotation format, maintaining consistency with the TypeScript side changes.

python/llama-index-server/llama_index/server/models/artifacts.py (2)

8-8: Good refactoring to centralize annotation extraction.

The import of get_inline_annotations supports the broader refactoring to standardize inline annotation handling across the codebase.


37-39: Maintains existing logic while leveraging centralized utility.

The refactoring preserves the original artifact parsing behavior while delegating annotation extraction to the new utility function. This improves code maintainability and consistency across the system.

python/llama-index-server/llama_index/server/api/callbacks/artifact_transform.py (2)

17-33: Clean transformation logic with proper type checking.

The implementation correctly checks for ArtifactEvent instances and transforms them to AgentStream objects with inline annotation format. The fallback to return the original event is appropriate.

🧰 Tools
🪛 Pylint (3.3.7)

[convention] 17-17: Missing function or method docstring

(C0116)


36-37: Unused parameters are acceptable for interface compatibility.

The unused args and kwargs parameters in from_default are likely required for interface compatibility with the base class. This is a common pattern in callback frameworks.

🧰 Tools
🪛 Pylint (3.3.7)

[convention] 36-36: Missing function or method docstring

(C0116)


[warning] 36-36: Unused argument 'args'

(W0613)


[warning] 36-36: Unused argument 'kwargs'

(W0613)

python/llama-index-server/llama_index/server/utils/inline.py (2)

14-50: Robust annotation extraction with proper error handling.

The regex pattern correctly matches annotation code blocks, and the JSON parsing with validation ensures only well-formed annotations are extracted. The error handling appropriately skips malformed annotations while logging issues.
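
Transposed to TypeScript for illustration, a sketch of the described approach (not the module source; the regex mirrors the documented annotation-block format):

```ts
const INLINE_ANNOTATION_KEY = "annotation";

// Non-greedy capture of the JSON body between the opening and closing fences.
const annotationRegex = new RegExp(
  "```" + INLINE_ANNOTATION_KEY + "\\s*\\n([\\s\\S]*?)\\n```",
  "gm",
);

function getInlineAnnotations(content: string): unknown[] {
  const annotations: unknown[] = [];
  for (const match of content.matchAll(annotationRegex)) {
    try {
      const parsed = JSON.parse(match[1]);
      // Keep only well-formed annotations carrying both required fields.
      if (
        typeof parsed === "object" &&
        parsed !== null &&
        typeof parsed.type === "string" &&
        "data" in parsed
      ) {
        annotations.push(parsed);
      }
    } catch {
      // Malformed JSON inside an annotation block is skipped, not fatal.
    }
  }
  return annotations;
}
```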


53-66: Clear documentation and correct implementation for annotation serialization.

The function correctly formats annotations as markdown code blocks. The docstring provides helpful examples and explains the format clearly.

🧰 Tools
🪛 Pylint (3.3.7)

[convention] 55-55: Line too long (117/100)

(C0301)

packages/server/src/utils/inline.ts (1)

20-23: The regex pattern handles basic cases well.

The regex correctly matches annotation code blocks, but consider potential edge cases:

  • Nested backticks within the JSON content could break parsing
  • The pattern assumes newlines around the JSON content

The current pattern should work for standard use cases.

packages/server/src/utils/events.ts (4)

3-10: LGTM on the import updates.

The new imports correctly support the refactored artifact extraction logic using inline annotations.


156-164: Well-structured artifact extraction function.

The function correctly uses the new inline annotation utilities and employs proper type safety with schema validation. The filtering approach is clean and maintainable.


166-172: Clean implementation with good sorting behavior.

The function efficiently processes multiple messages and maintains chronological ordering of artifacts. The flatMap approach is appropriate for typical chat message volumes.


193-196: Excellent refactoring for improved modularity.

The updated function properly delegates to the new specialized extraction functions while maintaining the same API. This improves code reusability and maintainability.
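
Taken together, a hedged sketch of the structure these three comments describe; the function bodies and the Artifact shape are assumptions, not the package source:

```ts
// Assumes an extraction helper like the one in ./utils/inline (a package
// export of the same name); declared here to keep the sketch self-contained.
declare function getInlineAnnotations(content: string): unknown[];

type Artifact = { type: string; created_at: number; data: unknown };

function extractArtifactsFromMessage(content: string): Artifact[] {
  // Filter inline annotations down to the ones tagged as artifacts.
  return getInlineAnnotations(content)
    .filter(
      (a): a is { type: "artifact"; data: Artifact } =>
        typeof a === "object" &&
        a !== null &&
        (a as { type?: unknown }).type === "artifact",
    )
    .map((a) => a.data);
}

function extractArtifactsFromAllMessages(
  messages: { content: string }[],
): Artifact[] {
  // flatMap preserves per-message order; sorting keeps artifacts chronological.
  return messages
    .flatMap((m) => extractArtifactsFromMessage(m.content))
    .sort((a, b) => a.created_at - b.created_at);
}

function extractLastArtifact(
  messages: { content: string }[],
): Artifact | undefined {
  const all = extractArtifactsFromAllMessages(messages);
  return all.at(-1);
}
```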


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 980f212 and cded3ae.

📒 Files selected for processing (2)
  • packages/server/src/utils/inline.ts (1 hunks)
  • python/llama-index-server/llama_index/server/models/artifacts.py (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/server/src/utils/inline.ts
🧰 Additional context used
🧬 Code Graph Analysis (1)
python/llama-index-server/llama_index/server/models/artifacts.py (1)
python/llama-index-server/llama_index/server/utils/inline.py (1)
  • get_inline_annotations (14-50)
🪛 Pylint (3.3.7)
python/llama-index-server/llama_index/server/models/artifacts.py

[error] 8-8: Unable to import 'llama_index.server.utils.inline'

(E0401)


[convention] 8-8: Imports from package llama_index are not grouped

(C0412)

⏰ Context from checks skipped due to timeout of 90000ms (44)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --no-files, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, streaming)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files, streaming)
🔇 Additional comments (1)
python/llama-index-server/llama_index/server/models/artifacts.py (1)

37-39: Good refactoring to centralize annotation extraction.

The change from direct message.annotations access to using get_inline_annotations(message) improves maintainability by centralizing the logic for extracting inline annotations from markdown content. This aligns with the broader refactoring described in the PR objectives.

The error handling and return behavior are preserved, which maintains the method's contract.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (5)
python/llama-index-server/llama_index/server/api/callbacks/artifact_transform.py (3)

8-8: Remove unused logger.

The logger is imported but never used in the class.

-logger = logging.getLogger("uvicorn")

16-19: Add method docstring for clarity.

Consider adding a docstring to explain the transformation behavior.

 async def run(self, event: Any) -> Any:
+    """Transform ArtifactEvent instances to inline annotation format."""
     if isinstance(event, ArtifactEvent):
🧰 Tools
🪛 Pylint (3.3.7)

[convention] 16-16: Missing function or method docstring

(C0116)


21-23: Simplify method signature to remove unused arguments.

The args and kwargs parameters are not used and can be removed for cleaner code.

 @classmethod
-def from_default(cls, *args: Any, **kwargs: Any) -> "ArtifactTransform":
+def from_default(cls) -> "ArtifactTransform":
+    """Create ArtifactTransform instance with default parameters."""
     return cls()
🧰 Tools
🪛 Pylint (3.3.7)

[convention] 22-22: Missing function or method docstring

(C0116)


[warning] 22-22: Unused argument 'args'

(W0613)


[warning] 22-22: Unused argument 'kwargs'

(W0613)

python/llama-index-server/llama_index/server/utils/inline.py (2)

1-1: Add module docstring for better documentation.

Consider adding a module docstring to document the purpose and functionality.

+"""
+Utility functions for handling inline annotations in chat messages.
+
+This module provides functionality to extract, validate, and serialize
+inline annotations embedded as JSON code blocks in markdown content.
+"""
 import json
🧰 Tools
🪛 Pylint (3.3.7)

[convention] 1-1: Missing module docstring

(C0114)


57-60: Fix line length in docstring.

The docstring exceeds the 100-character line limit.

 """
-To append inline annotations to the stream, we need to wrap the annotation in a code block with the language key.
-The language key is `annotation` and the code block is wrapped in backticks.
-The prefix `0:` ensures it will be treated as inline markdown. Example:
+To append inline annotations to the stream, we need to wrap the annotation
+in a code block with the language key. The language key is `annotation` and
+the code block is wrapped in backticks. The prefix `0:` ensures it will be
+treated as inline markdown. Example:
🧰 Tools
🪛 Pylint (3.3.7)

[convention] 57-57: Line too long (117/100)

(C0301)

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cded3ae and eff7631.

📒 Files selected for processing (4)
  • packages/server/src/utils/inline.ts (1 hunks)
  • packages/server/src/utils/workflow.ts (2 hunks)
  • python/llama-index-server/llama_index/server/api/callbacks/artifact_transform.py (1 hunks)
  • python/llama-index-server/llama_index/server/utils/inline.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/server/src/utils/workflow.ts
🧰 Additional context used
🪛 Pylint (3.3.7)
python/llama-index-server/llama_index/server/api/callbacks/artifact_transform.py

[convention] 1-1: Missing module docstring

(C0114)


[error] 4-4: Unable to import 'llama_index.server.api.callbacks.base'

(E0401)


[error] 5-5: Unable to import 'llama_index.server.models.artifacts'

(E0401)


[error] 6-6: Unable to import 'llama_index.server.utils.inline'

(E0401)


[convention] 16-16: Missing function or method docstring

(C0116)


[convention] 22-22: Missing function or method docstring

(C0116)


[warning] 22-22: Unused argument 'args'

(W0613)


[warning] 22-22: Unused argument 'kwargs'

(W0613)

python/llama-index-server/llama_index/server/utils/inline.py

[convention] 57-57: Line too long (117/100)

(C0301)


[convention] 1-1: Missing module docstring

(C0114)


[error] 5-5: Unable to import 'pydantic'

(E0401)


[error] 7-7: Unable to import 'llama_index.core.workflow.events'

(E0401)


[error] 8-8: Unable to import 'llama_index.server.models.chat'

(E0401)


[error] 9-9: Unable to import 'llama_index.core.agent.workflow.workflow_events'

(E0401)

⏰ Context from checks skipped due to timeout of 90000ms (40)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, streaming)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files, streaming)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, streaming)
🔇 Additional comments (1)
packages/server/src/utils/inline.ts (1)

1-92: LGTM! Well-structured inline annotation utility.

The implementation is clean and handles the key requirements effectively:

  • Proper schema validation with zod
  • Robust regex pattern for annotation extraction
  • Good error handling for malformed JSON
  • Content concatenation fix from previous review is correctly implemented


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
python/llama-index-server/llama_index/server/utils/inline.py (1)

50-50: Use proper logging instead of print statement.

This issue was already identified in previous reviews but hasn't been addressed yet.

🧹 Nitpick comments (2)
python/llama-index-server/llama_index/server/utils/inline.py (2)

1-13: Add module docstring.

The module is missing a docstring explaining its purpose and functionality.

+"""
+Utilities for handling inline annotations embedded as fenced code blocks in markdown content.
+
+This module provides functions to extract, serialize, and convert inline annotations
+that are embedded as JSON content within ```annotation code blocks in chat messages.
+"""
 import json
 import re
 from typing import Any, List
🧰 Tools
🪛 Pylint (3.3.7)

[convention] 1-1: Missing module docstring

(C0114)


[error] 5-5: Unable to import 'pydantic'

(E0401)


[error] 7-7: Unable to import 'llama_index.core.workflow.events'

(E0401)


[error] 8-8: Unable to import 'llama_index.server.models.chat'

(E0401)


[error] 9-9: Unable to import 'llama_index.core.agent.workflow.workflow_events'

(E0401)


57-57: Fix line length issue.

The line exceeds the 100-character limit.

-    To append inline annotations to the stream, we need to wrap the annotation in a code block with the language key.
+    To append inline annotations to the stream, we need to wrap the annotation 
+    in a code block with the language key.
🧰 Tools
🪛 Pylint (3.3.7)

[convention] 57-57: Line too long (117/100)

(C0301)

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between eff7631 and e20f198.

📒 Files selected for processing (2)
  • packages/server/src/utils/inline.ts (1 hunks)
  • python/llama-index-server/llama_index/server/utils/inline.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/server/src/utils/inline.ts
🧰 Additional context used
🧬 Code Graph Analysis (1)
python/llama-index-server/llama_index/server/utils/inline.py (2)
python/llama-index-server/llama_index/server/api/routers/chat.py (1)
  • chat (42-96)
python/llama-index-server/llama_index/server/models/chat.py (1)
  • ChatAPIMessage (9-26)
🪛 Pylint (3.3.7)
python/llama-index-server/llama_index/server/utils/inline.py

[convention] 57-57: Line too long (117/100)

(C0301)


[convention] 1-1: Missing module docstring

(C0114)


[error] 5-5: Unable to import 'pydantic'

(E0401)


[error] 7-7: Unable to import 'llama_index.core.workflow.events'

(E0401)


[error] 8-8: Unable to import 'llama_index.server.models.chat'

(E0401)


[error] 9-9: Unable to import 'llama_index.core.agent.workflow.workflow_events'

(E0401)

⏰ Context from checks skipped due to timeout of 90000ms (53)
  • GitHub Check: build
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --no-files, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --no-files, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, streaming)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files, streaming)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --no-files, streaming)
🔇 Additional comments (3)
python/llama-index-server/llama_index/server/utils/inline.py (3)

16-52: LGTM! Solid implementation for annotation extraction.

The function correctly:

  • Uses appropriate regex pattern to match annotation code blocks
  • Validates JSON structure and required fields
  • Handles edge cases and malformed content gracefully
  • Returns a clean list of parsed annotations

55-67: LGTM! Clean serialization function.

The function correctly formats annotations as markdown code blocks with proper JSON serialization and clear documentation.

🧰 Tools
🪛 Pylint (3.3.7)

[convention] 57-57: Line too long (117/100)

(C0301)


70-81: LGTM! Proper event conversion implementation.

The function correctly:

  • Serializes the event using model_dump()
  • Uses the inline annotation helper for consistent formatting
  • Creates AgentStream with appropriate field values
  • Maintains the raw event data for reference


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
python/llama-index-server/llama_index/server/api/callbacks/__init__.py (1)

2-4: Use relative import to ensure module resolution.

This follows the same import resolution issue flagged in previous reviews. Replace the absolute import with a relative import:

-from llama_index.server.api.callbacks.artifact_transform import (
-    InlineAnnotationTransformer,
-)
+from .artifact_transform import InlineAnnotationTransformer
🧰 Tools
🪛 Pylint (3.3.7)

[error] 2-4: Unable to import 'llama_index.server.api.callbacks.artifact_transform'

(E0401)

🧹 Nitpick comments (2)
python/llama-index-server/llama_index/server/api/callbacks/artifact_transform.py (2)

1-9: Add module docstring and remove unused logger.

The module is missing a docstring and the logger is defined but never used in the implementation.

+"""
+Callback transformer for converting artifact events to inline annotation format.
+"""
 import logging
 from typing import Any

 from llama_index.server.api.callbacks.base import EventCallback
 from llama_index.server.models.artifacts import ArtifactEvent
 from llama_index.server.utils.inline import to_inline_annotation_event

-logger = logging.getLogger("uvicorn")
🧰 Tools
🪛 Pylint (3.3.7)

[convention] 1-1: Missing module docstring

(C0114)


[error] 4-4: Unable to import 'llama_index.server.api.callbacks.base'

(E0401)


[error] 5-5: Unable to import 'llama_index.server.models.artifacts'

(E0401)


[error] 6-6: Unable to import 'llama_index.server.utils.inline'

(E0401)


22-24: Add docstring and remove unused parameters.

The from_default method has unused parameters and lacks documentation.

 @classmethod
-def from_default(cls, *args: Any, **kwargs: Any) -> "InlineAnnotationTransformer":
+def from_default(cls) -> "InlineAnnotationTransformer":
+    """Create an instance with default configuration."""
     return cls()
🧰 Tools
🪛 Pylint (3.3.7)

[convention] 23-23: Missing function or method docstring

(C0116)


[warning] 23-23: Unused argument 'args'

(W0613)


[warning] 23-23: Unused argument 'kwargs'

(W0613)

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e20f198 and 4dca89d.

📒 Files selected for processing (3)
  • python/llama-index-server/llama_index/server/api/callbacks/__init__.py (2 hunks)
  • python/llama-index-server/llama_index/server/api/callbacks/artifact_transform.py (1 hunks)
  • python/llama-index-server/llama_index/server/api/routers/chat.py (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • python/llama-index-server/llama_index/server/api/routers/chat.py
🧰 Additional context used
🪛 Pylint (3.3.7)
python/llama-index-server/llama_index/server/api/callbacks/artifact_transform.py

[convention] 1-1: Missing module docstring

(C0114)


[error] 4-4: Unable to import 'llama_index.server.api.callbacks.base'

(E0401)


[error] 5-5: Unable to import 'llama_index.server.models.artifacts'

(E0401)


[error] 6-6: Unable to import 'llama_index.server.utils.inline'

(E0401)


[convention] 16-16: Missing function or method docstring

(C0116)


[convention] 23-23: Missing function or method docstring

(C0116)


[warning] 23-23: Unused argument 'args'

(W0613)


[warning] 23-23: Unused argument 'kwargs'

(W0613)

python/llama-index-server/llama_index/server/api/callbacks/__init__.py

[error] 2-4: Unable to import 'llama_index.server.api.callbacks.artifact_transform'

(E0401)

⏰ Context from checks skipped due to timeout of 90000ms (45)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --no-files, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, streaming)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files, streaming)
🔇 Additional comments (2)
python/llama-index-server/llama_index/server/api/callbacks/__init__.py (1)

18-18: LGTM!

The addition of InlineAnnotationTransformer to the __all__ list correctly exports the new callback class.

python/llama-index-server/llama_index/server/api/callbacks/artifact_transform.py (1)

16-21: LGTM!

The transformation logic correctly handles ArtifactEvent instances and passes through other events unchanged. The implementation aligns with the PR objective of supporting inline artifact annotations.

🧰 Tools
🪛 Pylint (3.3.7)

[convention] 16-16: Missing function or method docstring (C0116)
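For readers following along, here is a minimal sketch of the pass-through pattern described in that comment, transplanted to TypeScript. The `toInlineAnnotation` helper, the event shape, and the fenced `annotation` block format are all assumptions for illustration, not the actual server API:

// Hypothetical sketch: the event shape and helper names are assumptions.
type ChatEvent = { type: string; data: unknown };

// Serialize an artifact event as an inline annotation code block so the
// chat UI can render it inside the streamed markdown message.
function toInlineAnnotation(event: ChatEvent): string {
  return ["```annotation", JSON.stringify(event), "```"].join("\n");
}

// Mirror of the callback's logic: rewrite artifact events as inline
// annotations, pass every other event through unchanged.
function transformEvent(event: ChatEvent): ChatEvent | string {
  return event.type === "artifact" ? toInlineAnnotation(event) : event;
}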

coderabbitai bot left a comment

Actionable comments posted: 4

🧹 Nitpick comments (1)
packages/server/examples/code-gen/README.md (1)

17-17: Add missing article for better grammar.

-Export OpenAI API key and start the server in dev mode.
+Export the OpenAI API key and start the server in dev mode.
🧰 Tools
🪛 LanguageTool

[uncategorized] ~17-~17: You might be missing the article “the” here. (AI_EN_LECTOR_MISSING_DETERMINER_THE)
Context: ..., port: 3000, }).start(); ``` Export OpenAI API key and start the server in dev mod...

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4dca89d and 5c1de3c.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (4)
  • packages/server/examples/code-gen/README.md (1 hunks)
  • packages/server/examples/code-gen/components/ui_event.jsx (1 hunks)
  • packages/server/examples/code-gen/index.ts (1 hunks)
  • packages/server/examples/code-gen/src/app/workflow.ts (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • packages/server/examples/code-gen/index.ts
🧰 Additional context used
🧬 Code Graph Analysis (1)
packages/server/examples/code-gen/src/app/workflow.ts (1)
packages/server/src/utils/events.ts (2)
  • extractLastArtifact (189-216)
  • artifactEvent (121-124)
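To make those call sites concrete, here is a rough sketch of how a workflow file might combine the two exports listed above. The request handling, the `generateCode` helper, and the artifact payload fields are assumptions for illustration, not the example's actual code:

import { artifactEvent, extractLastArtifact } from "@llamaindex/server";

// Stand-in for the real LLM call; hypothetical.
async function generateCode(previous?: unknown): Promise<string> {
  return "console.log('hello');";
}

// Sketch: pick up the previous artifact (if any), produce a new one, and
// stream it back to the UI as an artifact event.
async function runCodeGen(
  requestBody: any,
  sendEvent: (event: unknown) => void,
) {
  const lastArtifact = extractLastArtifact(requestBody); // undefined on the first turn
  const code = await generateCode(lastArtifact);
  sendEvent(
    artifactEvent.with({
      type: "artifact",
      data: { type: "code", created_at: Date.now(), data: code },
    }),
  );
}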
🪛 LanguageTool
packages/server/examples/code-gen/README.md

[uncategorized] ~17-~17: You might be missing the article “the” here. (AI_EN_LECTOR_MISSING_DETERMINER_THE)
Context: ..., port: 3000, }).start(); ``` Export OpenAI API key and start the server in dev mod...

⏰ Context from checks skipped due to timeout of 90000ms (57)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --no-files, streaming)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --no-files, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, streaming)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, streaming)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --no-files, streaming)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, llamaindexserver)
  • GitHub Check: Unit Tests (windows-latest, 3.9)
  • GitHub Check: Unit Tests (ubuntu-latest, 3.9)
  • GitHub Check: lint
🔇 Additional comments (1)
packages/server/examples/code-gen/components/ui_event.jsx (1)

1-132: Well-structured React component for workflow UI.

The component demonstrates good React practices with proper state management, conditional rendering, and clear separation of concerns through the STAGE_META configuration.
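As a stripped-down sketch of the shape that comment describes — the stage names and fields here are invented for illustration; the real component lives in packages/server/examples/code-gen/components/ui_event.jsx:

// One config object drives labels and ordering; the component stays a
// thin renderer over the latest workflow stage event.
const STAGE_META = {
  plan: { title: "Planning", description: "Analyzing the request" },
  generate: { title: "Generating", description: "Writing code" },
  complete: { title: "Done", description: "Artifact ready" },
};

export default function Component({ events }) {
  const last = events?.[events.length - 1]; // latest workflow event wins
  const meta = last ? STAGE_META[last.data?.stage] : null;
  if (!meta) return null; // hide the card until a known stage arrives
  return (
    <div className="rounded border p-3">
      <p className="font-medium">{meta.title}</p>
      <p className="text-sm text-gray-500">{meta.description}</p>
    </div>
  );
}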

@thucpn thucpn merged commit a543a27 into main Jun 5, 2025
60 checks passed