
Conversation

@ZeldaHuang

Description

Adapt mooncake_connector_v1.py to vllm v0.11.1 (latest release)

Type of Change

  • Types
    • Bug fix
    • New feature
      • Transfer Engine
      • Mooncake Store
      • Mooncake EP
      • Integration
      • P2P Store
      • Python Wheel
    • Breaking change
    • CI/CD
    • Documentation update
    • Other

How Has This Been Tested?

Checklist

  • I have performed a self-review of my own code.
  • I have updated the documentation.
  • I have added tests to prove my changes are effective.

@gemini-code-assist
Contributor

Summary of Changes

Hello @ZeldaHuang, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request implements necessary bug fixes to adapt the mooncake_connector_v1.py file for compatibility with vllm version v0.11.1. The changes primarily involve updating API calls and internal logic to reflect modifications in the vllm library, ensuring the connector functions correctly with the newer version.

Highlights

  • Import Refactoring: Updated import statements for vllm.attention.selector and vllm.utils to align with the latest vllm release, specifically changing backend_name_to_enum to AttentionBackendEnum and moving network_utils functions.
  • Base Class Initialization: Added a call to super().__init__ in the MooncakeConnector constructor, ensuring proper initialization of the KVConnectorBase_V1 base class.
  • Attention Backend Logic Update: Modified the logic for identifying and handling different attention backends (FlashInfer, Pallas) to match the new AttentionBackendEnum structure and removed the deprecated is_attention_free parameter from the get_attn_backend call (a hedged sketch of these changes follows below).
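
For illustration, here is a minimal compatibility sketch of the kind of import refactoring listed above. The module paths marked as assumptions are inferred from this PR's description rather than verified against a specific vllm release; the PR diff itself is authoritative.

```python
# Hedged sketch of an import fallback between the older and newer vllm layouts
# described above. Locations marked "assumed" come from this PR's description,
# not from a verified vllm source tree.
try:
    # Newer vllm (assumed >= 0.11.1): enum-based backend identifiers and
    # network helpers split out into a dedicated network_utils module.
    from vllm.attention.selector import AttentionBackendEnum  # assumed location
    from vllm.utils import network_utils                       # assumed location
    NEW_VLLM_API = True
except ImportError:
    # Older vllm: string-name-to-enum helper, with network helpers exposed
    # directly on vllm.utils.
    from vllm.attention.selector import backend_name_to_enum
    from vllm import utils as network_utils
    NEW_VLLM_API = False
```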


gemini-code-assist bot left a comment


Code Review

This pull request aims to adapt mooncake_connector_v1.py for compatibility with the latest vllm release. The changes primarily involve updating import paths and adjusting to API modifications in vllm. While most of the adaptations seem correct, I've identified a critical issue in the handling of the attention backend enum that would lead to a runtime error during initialization. My review provides a specific code suggestion to resolve this problem.

attn_backend = backend_name_to_enum(self.backend_name)
self._use_flashinfer = attn_backend == _Backend.FLASHINFER_VLLM_V1
self._use_pallas_v1 = attn_backend == _Backend.PALLAS_VLLM_V1
attn_backend = AttentionBackendEnum[self.backend_name]

critical

The current implementation AttentionBackendEnum[self.backend_name] attempts to access an enum member by its name. However, self.backend_name (e.g., 'flashinfer') is the value of the enum member, not its name (e.g., 'FLASHINFER'). This will cause a KeyError at runtime, preventing the worker from initializing.

To correctly get the enum member from its value, you should call the enum as a function: AttentionBackendEnum(self.backend_name).

Suggested change
- attn_backend = AttentionBackendEnum[self.backend_name]
+ attn_backend = AttentionBackendEnum(self.backend_name)
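
To make the distinction concrete, here is a self-contained Python example using a stand-in enum; the member names and values below are hypothetical and are not taken from vllm's actual AttentionBackendEnum definition.

```python
from enum import Enum

# Stand-in enum for illustration only; vllm's real AttentionBackendEnum may
# define different member names and values.
class Backend(Enum):
    FLASHINFER = "flashinfer"
    PALLAS = "pallas"

# Square brackets look a member up by NAME ...
assert Backend["FLASHINFER"] is Backend.FLASHINFER
# ... so passing a value raises an error:
# Backend["flashinfer"]  -> KeyError: 'flashinfer'

# Calling the enum looks a member up by VALUE:
assert Backend("flashinfer") is Backend.FLASHINFER
```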

@dtcccc
Contributor

dtcccc commented Nov 19, 2025

Can we introduce something like a version check to switch between them?

@ZeldaHuang
Author

@dtcccc Maintaining mooncake_connector_v1 in the vllm repo would be a better way to keep compatibility. Will that PR be merged: vllm-project/vllm#24718? Or could we use mooncake-transfer-engine as a nixl plugin in the nixl connector?

@dtcccc
Contributor

dtcccc commented Nov 21, 2025

@dtcccc Maintaining mooncake_connector_v1 in the vllm repo would be a better way to keep compatibility. Will that PR be merged?

We are currently working on this.

@xiaguan
Collaborator

xiaguan commented Nov 21, 2025

We should maintain version compatibility—maybe using runtime if checks in the code? If that’s not feasible, it’s fine, but if vLLM already has a stable release (e.g., v0.11.0), we need to document the minimum required version clearly.
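
For illustration, a minimal sketch of such a runtime switch. It assumes only that vllm is installed as a package named "vllm" and that the third-party packaging library is available; the two branches stand in for whatever old/new code paths the connector actually needs.

```python
from importlib.metadata import version

from packaging.version import Version  # assumes the `packaging` package is installed

# Resolve the installed vllm version once at import time.
VLLM_VERSION = Version(version("vllm"))

if VLLM_VERSION >= Version("0.11.1"):
    # Code path adapted in this PR (newer import locations / enum API).
    pass
else:
    # Pre-0.11.1 code path kept for older releases.
    pass
```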

@stmatengss
Collaborator

We should maintain version compatibility—maybe using runtime if checks in the code? If that’s not feasible, it’s fine, but if vLLM already has a stable release (e.g., v0.11.0), we need to document the minimum required version clearly.

Sure. Could you confirm a stable version so we can update the documentation? @ZeldaHuang

@ZeldaHuang
Author

@stmatengss I have tested 1P1D Qwen3-235B-A22B with vllm releases v0.11.1 and v0.11.2; the previous connector may work with v0.11.0 (not tested).

@stmatengss
Collaborator

@stmatengss I have tested 1P1D Qwen3-235B-A22B with vllm releases v0.11.1 and v0.11.2; the previous connector may work with v0.11.0 (not tested).

Should we keep both connectors? The current one supports v0.11.1/2, while the old one supports v0.11.0. Or could this PR support v0.11.0/1/2?
