
Conversation

@WANDY666 (Contributor) commented Jan 8, 2026

No description provided.

@gemini-code-assist (Contributor) commented:
Summary of Changes

Hello @WANDY666, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refines the OpenAI API integration by introducing a new CharacterMessage type for character-based interactions and enhancing the ChatCompletionRequest model with Pydantic ConfigDict for improved field handling and aliasing. It also makes message role access more robust and incorporates a system_instruction into the prompt construction, aiming to provide greater flexibility and stability in API interactions.

Highlights

  • New Message Type: Introduced a CharacterMessage Pydantic model to support character-based chat messages, allowing roles to be inferred from names (see the sketch after this list).
  • Pydantic Configuration Enhancements: Updated ChatCompletionRequest to use ConfigDict(populate_by_name=True) for flexible field population and aliased role_settings to role_setting for better compatibility.
  • Robust Role Access: Modified _get_history_tool_calls_cnt to safely access message roles using getattr, preventing errors if the role attribute is missing.
  • Prompt Building Update: Integrated system_instruction into the prompt building logic, allowing for more nuanced control over model behavior.
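To make these highlights more concrete, here is a minimal sketch of what the described models and helper could look like, assuming Pydantic v2. Only CharacterMessage, role_settings / role_setting, populate_by_name, and _get_history_tool_calls_cnt come from the summary above; every other field name and default is an illustrative assumption, not the PR's actual code.

# Minimal sketch, not the PR's actual code; field names beyond those mentioned
# in the highlights are assumptions.
from typing import List, Optional

from pydantic import BaseModel, ConfigDict, Field


class CharacterMessage(BaseModel):
    # `role` is optional so it can be inferred from the character's `name`
    # when the client only supplies a name.
    name: str
    content: str
    role: Optional[str] = None


class ChatCompletionRequest(BaseModel):
    # populate_by_name=True lets clients populate a field by either its name
    # or its alias (here: "role_settings" or the legacy "role_setting").
    model_config = ConfigDict(populate_by_name=True)

    messages: List[CharacterMessage]
    role_settings: Optional[str] = Field(default=None, alias="role_setting")


def _get_history_tool_calls_cnt(messages: List[object]) -> int:
    # getattr keeps the count robust for message types that do not define
    # `role` or `tool_calls`; the exact counting logic is simplified here.
    cnt = 0
    for msg in messages:
        if getattr(msg, "role", None) == "assistant":
            cnt += len(getattr(msg, "tool_calls", None) or [])
    return cnt

With populate_by_name=True, a client payload using either the "role_settings" key or the legacy "role_setting" key validates against the same field.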


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the repository root. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check its output and use code with caution.

@gemini-code-assist (bot) left a comment

Code Review

This pull request updates the OpenAI API compatibility by introducing a CharacterMessage model for character-based chats, adding alias support for role_setting, and making the code more robust when handling different message types. My review focuses on an incomplete feature addition for system_instruction, which is currently non-functional as the corresponding field is missing from the request model.

kwargs = {
    "conversation": messages,
    # 假设 request 对象里有这个字段,或者你想传空
    "system_instruction": getattr(request, "system_instruction", ""),
Severity: medium

The code attempts to access request.system_instruction, but this field is not defined in the ChatCompletionRequest model in api_models.py. This will always result in an empty string "" being used due to getattr, making this new parameter ineffective.

To properly implement this feature, you should add system_instruction as an optional field to the ChatCompletionRequest model in lightllm/server/api_models.py.

For example:

# In lightllm/server/api_models.py
class ChatCompletionRequest(BaseModel):
    # ...
    messages: List[ChatCompletionMessageParam]
    system_instruction: Optional[str] = None
    # ...

Additionally, the Chinese comment # 假设 request 对象里有这个字段,或者你想传空 (roughly, "assume the request object has this field, or pass an empty value") is informal. It would be better to remove it once the feature is fully implemented, or to replace it with a formal English comment explaining the purpose of system_instruction.
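For instance, once the field is defined on the request model, the call site could drop the defensive getattr and replace the placeholder comment with an English one. This is only a sketch reusing the names from the quoted snippet; the surrounding prompt-building code is assumed:

# Sketch of the call site after system_instruction exists on ChatCompletionRequest.
kwargs = {
    "conversation": messages,
    # Forward the optional system-level instruction; an empty string preserves
    # the prompt builder's previous behavior when the client omits the field.
    "system_instruction": request.system_instruction or "",
}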

