
Conversation

@popsiclexu (Contributor) commented Nov 20, 2025

Description

During performance testing with sglang, we observed a significant performance degradation when KV cache data is transmitted through an HCA that is not on the same NUMA node as the target GPU.

Given a topology like:

Machine
  Package L#0
    NUMANode L#0 
    HostBridge
      PCIBridge
        PCI xx:xx.x (Ethernet)
          OpenFabrics "mlx5_0"
    HostBridge
      PCIBridge
        PCI xx:xx.x (GPU0)
  Package L#1
    NUMANode L#1 
    HostBridge
      PCIBridge
        PCI xx:xx.x (Ethernet)
          OpenFabrics "mlx5_1"
    HostBridge
      PCIBridge
        PCI xx:xx.x (GPU1)

The current PCI distance calculation incorrectly identifies both mlx5_0 and mlx5_1 as preferred HCAs for GPU0, even though mlx5_1 and GPU0 reside on different NUMA nodes (L#1 vs. L#0). When such a cross-NUMA HCA is chosen, transmission latency increases 3x-5x under high concurrency.
This PR enhances the PCI distance calculation to take NUMA node affinity into account.
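
As a rough sketch of the approach (illustrative only; the helper name adjustPciDistanceForNuma is made up, while the real change lives in the getPciDistance path), the idea is to read each PCI device's numa_node from sysfs and treat device pairs that share a valid NUMA node as one step closer:

// Illustrative sketch, not the exact Mooncake code. path1/path2 are the
// devices' sysfs paths, e.g. "/sys/bus/pci/devices/0000:3b:00.0".
#include <fstream>
#include <string>

static int adjustPciDistanceForNuma(int pci_distance,
                                    const std::string &path1,
                                    const std::string &path2) {
    int numa1 = -1, numa2 = -1;  // stay -1 if the sysfs file cannot be read
    std::ifstream(path1 + "/numa_node") >> numa1;
    std::ifstream(path2 + "/numa_node") >> numa2;
    // Same, valid NUMA node: shrink the distance so a local HCA wins ties
    // against a cross-NUMA one.
    if (pci_distance > 0 && numa1 >= 0 && numa1 == numa2) {
        return pci_distance - 1;
    }
    return pci_distance;
}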

Type of Change

  • Types
    • Bug fix
    • New feature
      • Transfer Engine
      • Mooncake Store
      • Mooncake EP
      • Integration
      • P2P Store
      • Python Wheel
    • Breaking change
    • CI/CD
    • Documentation update
    • Other

How Has This Been Tested?

Output comparison from transfer_engine_topology_dump:
Before:

Local topology:  {
    "cpu:0" : [
        [ "mlx5_0" ],
        [ "mlx5_1" ]
    ],
    "cpu:1" : [
        [ "mlx5_1" ],
        [ "mlx5_0" ]
    ],
    "musa:0" : [
        [ "mlx5_0", "mlx5_1" ],
        []
    ],
    "musa:1" : [
        [ "mlx5_0", "mlx5_1" ],
        []
    ]
}

After:

Local topology:  {
    "cpu:0" : [
        [ "mlx5_0" ],
        [ "mlx5_1" ]
    ],
    "cpu:1" : [
        [ "mlx5_1" ],
        [ "mlx5_0" ]
    ],
    "musa:0" : [
        [ "mlx5_0" ],
        [ "mlx5_1" ]
    ],
    "musa:1" : [
        [ "mlx5_1" ],
        [ "mlx5_0" ]
    ]
}

Checklist

  • I have performed a self-review of my own code.
  • I have updated the documentation.
  • I have added tests to prove my changes are effective.

…de proximity

Signed-off-by: popsiclexu <[email protected]>
Signed-off-by: zhenxue.xu <[email protected]>
@gemini-code-assist (Contributor) commented

Summary of Changes

Hello @popsiclexu, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a critical performance bottleneck identified during sglang testing, where cross-NUMA communication between HCA devices and GPUs for KV cache data resulted in substantial latency. The core change involves refining the PCI distance calculation to account for NUMA node affinity, thereby prioritizing data paths within the same NUMA node. This adjustment ensures that the system correctly identifies and utilizes the most efficient communication routes, mitigating performance degradation caused by non-uniform memory access.

Highlights

  • Performance Degradation Fix: Addresses a significant performance degradation observed in sglang when HCA devices are not on the same NUMA node as the target GPU for KV cache data transmission, leading to 3x-5x increased latency.
  • Enhanced PCI Distance Calculation: The PCI distance calculation method has been enhanced to incorporate NUMA node affinity, ensuring that devices residing on the same NUMA node are considered 'closer' in the topology (see the short example after this list).
  • NUMA Node Affinity Integration: Introduces logic within the getPciDistance function to read NUMA node information for PCI devices and reduce their calculated distance by one if they share the same valid NUMA node and the initial distance is greater than zero.
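
To make the 'closer' notion concrete, here is a toy, self-contained example; the device names match the topology above, but the distance values are invented for illustration and are not produced by Mooncake:

// Toy ranking example: the HCA with the smaller NUMA-adjusted distance to
// GPU0 is preferred. Distances are made up; only the ordering rule reflects
// the change described in this PR.
#include <algorithm>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

int main() {
    std::vector<std::pair<std::string, int>> hcas = {
        {"mlx5_0", 2},  // same NUMA node as GPU0 -> distance reduced by one
        {"mlx5_1", 3},  // cross-NUMA -> distance unchanged
    };
    std::sort(hcas.begin(), hcas.end(),
              [](const auto &a, const auto &b) { return a.second < b.second; });
    std::cout << "Preferred HCA for GPU0: " << hcas.front().first << std::endl;
    return 0;
}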

@gemini-code-assist (bot) left a comment


Code Review

This pull request enhances the PCI distance calculation by taking NUMA node affinity into account. This is a good improvement to address performance degradation when using HCAs on different NUMA nodes from the GPU. The implementation correctly reads the NUMA node information from sysfs. I've added one comment regarding a potential path truncation issue and code duplication, with a suggestion to make the code safer and more maintainable.

Comment on lines +162 to +165
snprintf(numa_path, sizeof(numa_path), "%s/numa_node", path1);
std::ifstream(numa_path) >> numa1;
snprintf(numa_path, sizeof(numa_path), "%s/numa_node", path2);
std::ifstream(numa_path) >> numa2;

Severity: high

The snprintf calls to construct the numa_node path could result in a truncated path if the source path (path1 or path2) is very long. This would cause the file read to fail silently, and the NUMA affinity optimization would not be applied. It's safer to check the return value of snprintf to detect and handle potential truncation.

Additionally, the logic to read the NUMA node is duplicated. While the suggestion below fixes the immediate safety issue, I'd recommend refactoring this logic into a helper function to improve code maintainability and readability in a follow-up.

Suggested change:

int ret = snprintf(numa_path, sizeof(numa_path), "%s/numa_node", path1);
if (ret > 0 && static_cast<size_t>(ret) < sizeof(numa_path)) {
    std::ifstream(numa_path) >> numa1;
}
ret = snprintf(numa_path, sizeof(numa_path), "%s/numa_node", path2);
if (ret > 0 && static_cast<size_t>(ret) < sizeof(numa_path)) {
    std::ifstream(numa_path) >> numa2;
}

@popsiclexu popsiclexu changed the title [TE/Topology] Enhance PCI distance calculation by considering NUMA no… [TE/Topology] Enhance PCI distance calculation by considering NUMA node affinity Nov 20, 2025