[TE/Topology] Enhance PCI distance calculation by considering NUMA node affinity #1086
Conversation
…de proximity

Signed-off-by: popsiclexu <[email protected]>
Signed-off-by: zhenxue.xu <[email protected]>
Summary of Changes

Hello @popsiclexu, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request resolves a performance bottleneck identified during sglang testing, where cross-NUMA communication between HCA devices and GPUs for KV cache data resulted in substantial latency. The core change refines the PCI distance calculation to account for NUMA node affinity, thereby prioritizing data paths within the same NUMA node. This adjustment ensures that the system identifies and uses the most efficient communication routes, mitigating the performance degradation caused by non-uniform memory access.
Code Review
This pull request enhances the PCI distance calculation by taking NUMA node affinity into account. This is a good improvement to address performance degradation when using HCAs on different NUMA nodes from the GPU. The implementation correctly reads the NUMA node information from sysfs. I've added one comment regarding a potential path truncation issue and code duplication, with a suggestion to make the code safer and more maintainable.
```cpp
snprintf(numa_path, sizeof(numa_path), "%s/numa_node", path1);
std::ifstream(numa_path) >> numa1;
snprintf(numa_path, sizeof(numa_path), "%s/numa_node", path2);
std::ifstream(numa_path) >> numa2;
```
The snprintf calls to construct the numa_node path could result in a truncated path if the source path (path1 or path2) is very long. This would cause the file read to fail silently, and the NUMA affinity optimization would not be applied. It's safer to check the return value of snprintf to detect and handle potential truncation.
Additionally, the logic to read the NUMA node is duplicated. While the suggestion below fixes the immediate safety issue, I'd recommend refactoring this logic into a helper function to improve code maintainability and readability in a follow-up.
Suggested change:

```diff
-snprintf(numa_path, sizeof(numa_path), "%s/numa_node", path1);
-std::ifstream(numa_path) >> numa1;
-snprintf(numa_path, sizeof(numa_path), "%s/numa_node", path2);
-std::ifstream(numa_path) >> numa2;
+int ret = snprintf(numa_path, sizeof(numa_path), "%s/numa_node", path1);
+if (ret > 0 && static_cast<size_t>(ret) < sizeof(numa_path)) {
+    std::ifstream(numa_path) >> numa1;
+}
+ret = snprintf(numa_path, sizeof(numa_path), "%s/numa_node", path2);
+if (ret > 0 && static_cast<size_t>(ret) < sizeof(numa_path)) {
+    std::ifstream(numa_path) >> numa2;
+}
```
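For the follow-up refactor mentioned above, a minimal sketch of such a helper might look like the following. The name `read_numa_node` and the use of `std::string` concatenation (which also sidesteps the `snprintf` truncation concern) are assumptions for illustration, not code from this PR.

```cpp
#include <fstream>
#include <string>

// Hypothetical helper (not part of this PR): reads the NUMA node of a PCI
// device from its sysfs directory, returning -1 if the file is missing or
// unreadable. sysfs itself reports -1 when the NUMA node is unknown.
static int read_numa_node(const std::string &device_path) {
    std::ifstream in(device_path + "/numa_node");
    int node = -1;
    if (in) {
        in >> node;
    }
    return node;
}

// Usage sketch, assuming path1/path2 hold sysfs PCI device directories:
//   int numa1 = read_numa_node(path1);
//   int numa2 = read_numa_node(path2);
```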
Description
During performance testing in sglang, we identified a significant performance degradation when using HCA devices that are not on the same NUMA node as the target GPU for transmitting KV cache data.
Given a topology like:
The current PCI distance calculation method incorrectly identifies both mlx5_0 and mlx5_1 as preferred HCAs for GPU0. In reality, mlx5_1 and GPU0 reside on different NUMA nodes (L#1 vs L#0), leading to 3x-5x increased transmission latency under high concurrency when cross-NUMA HCA devices are used.
This PR enhances the PCI distance calculation method to incorporate NUMA node affinity.
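As a rough illustration of the idea (not the actual code in this PR), the NUMA affinity check can be folded into the existing PCI distance as a cross-NUMA penalty. The names `kNumaPenalty`, `numa_aware_distance`, and the weighting below are assumptions for the sketch.

```cpp
#include <fstream>
#include <string>

// Illustrative only: a penalty large enough that an HCA on the same NUMA
// node as the GPU always ranks ahead of one on a remote node.
constexpr int kNumaPenalty = 100;

// Read the NUMA node of a PCI device from its sysfs directory; -1 if unknown.
static int numa_node_of(const std::string &sysfs_path) {
    int node = -1;
    std::ifstream(sysfs_path + "/numa_node") >> node;
    return node;
}

// Combine the raw PCI distance with the cross-NUMA penalty.
static int numa_aware_distance(int pci_distance,
                               const std::string &path1,
                               const std::string &path2) {
    const int numa1 = numa_node_of(path1);
    const int numa2 = numa_node_of(path2);
    if (numa1 >= 0 && numa2 >= 0 && numa1 != numa2) {
        return pci_distance + kNumaPenalty;
    }
    return pci_distance;
}
```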
Type of Change
How Has This Been Tested?
Output comparison from `transfer_engine_topology_dump`:

Before:

After:
Checklist