stage2 tuning for 4090D and 5090 #1167
Conversation
Code Review
This pull request adds kernel tuning configurations for NVIDIA GeForce RTX 4090D and 5090 GPUs. Adding these configurations is a good step for performance, but I have identified two main issues. First, the configuration files for bfloat16 appear to be duplicates of the float16 files, which is likely incorrect and could lead to suboptimal performance; it is recommended to either perform separate tuning for bfloat16 or remove these files. Second, all of the new JSON files are unformatted, which harms readability and maintainability; I have suggested formatting them for consistency and easier reviews in the future. Additionally, the very long filenames could cause problems on some platforms, such as Windows, which enforces path-length limits.
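For context, each new JSON file is a small lookup table of Triton launch parameters. The sketch below shows one plausible way such a table could be consumed at load time; the reading of the outer keys as sequence-length buckets and the inner keys as batch-size buckets, as well as the helper names, are assumptions for illustration rather than APIs taken from this repository.

import json

def load_stage2_config(path: str) -> dict:
    # Load one of the tuning tables added in this PR (plain JSON).
    with open(path) as f:
        return json.load(f)

def pick_launch_params(cfg: dict, seq_len: int, batch: int) -> dict:
    # Choose the smallest bucket that covers the request, falling back to
    # the largest bucket if the request exceeds every key.
    seq_keys = sorted(cfg, key=int)
    seq_key = next((k for k in seq_keys if int(k) >= seq_len), seq_keys[-1])
    batch_keys = sorted(cfg[seq_key], key=int)
    batch_key = next((k for k in batch_keys if int(k) >= batch), batch_keys[-1])
    # Leaf entries hold the Triton launch parameters, e.g.
    # {"BLOCK_N": 16, "num_warps": 2, "num_stages": 1}.
    return cfg[seq_key][batch_key]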
@@ -0,0 +1 @@
{"4096": {"8": {"BLOCK_N": 16, "num_warps": 4, "num_stages": 1}, "32": {"BLOCK_N": 16, "num_warps": 4, "num_stages": 1}, "128": {"BLOCK_N": 16, "num_warps": 2, "num_stages": 1}, "256": {"BLOCK_N": 16, "num_warps": 2, "num_stages": 1}}, "8192": {"8": {"BLOCK_N": 16, "num_warps": 2, "num_stages": 1}, "32": {"BLOCK_N": 16, "num_warps": 2, "num_stages": 1}, "128": {"BLOCK_N": 16, "num_warps": 2, "num_stages": 1}, "256": {"BLOCK_N": 16, "num_warps": 2, "num_stages": 1}}}
\ No newline at end of file
This configuration file for bfloat16 is identical to its float16 counterpart. Based on the tuning script test/kernel/llama_gqa_diverse_decode_stage1_tuning.py, it appears tuning was only performed for torch.half (float16), and the results were then copied for both float16 and bfloat16. This means the bfloat16 configurations are likely not optimal and could lead to performance degradation. It is recommended to either run the tuning specifically for bfloat16 and update these files, or remove the bfloat16 configuration files for now to avoid confusion and potential performance issues. This applies to all bfloat16 config files in this PR.
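One quick way to confirm this observation locally is to compare the paired files byte-for-byte; a copied file will compare equal. The snippet below is a minimal sketch, with the script name and argument order as placeholders rather than anything from the PR.

import filecmp
import sys

# Usage: python compare_cfgs.py <float16_config.json> <bfloat16_config.json>
# Prints True when the bfloat16 file is a byte-for-byte copy of the float16 one.
fp16_path, bf16_path = sys.argv[1], sys.argv[2]
print("identical:", filecmp.cmp(fp16_path, bf16_path, shallow=False))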
@@ -0,0 +1 @@
{"4096": {"8": {"BLOCK_N": 16, "num_warps": 4, "num_stages": 1}, "32": {"BLOCK_N": 16, "num_warps": 4, "num_stages": 1}, "128": {"BLOCK_N": 16, "num_warps": 2, "num_stages": 1}, "256": {"BLOCK_N": 16, "num_warps": 2, "num_stages": 1}}, "8192": {"8": {"BLOCK_N": 16, "num_warps": 2, "num_stages": 1}, "32": {"BLOCK_N": 16, "num_warps": 2, "num_stages": 1}, "128": {"BLOCK_N": 16, "num_warps": 2, "num_stages": 1}, "256": {"BLOCK_N": 16, "num_warps": 2, "num_stages": 1}}}
\ No newline at end of file
This JSON configuration file is not formatted, which makes it difficult to read and review. For better maintainability, please format the JSON content with indentation and add a newline at the end of the file. This should be applied to all new JSON files in this pull request.
{
"4096": {
"8": {
"BLOCK_N": 16,
"num_warps": 4,
"num_stages": 1
},
"32": {
"BLOCK_N": 16,
"num_warps": 4,
"num_stages": 1
},
"128": {
"BLOCK_N": 16,
"num_warps": 2,
"num_stages": 1
},
"256": {
"BLOCK_N": 16,
"num_warps": 2,
"num_stages": 1
}
},
"8192": {
"8": {
"BLOCK_N": 16,
"num_warps": 2,
"num_stages": 1
},
"32": {
"BLOCK_N": 16,
"num_warps": 2,
"num_stages": 1
},
"128": {
"BLOCK_N": 16,
"num_warps": 2,
"num_stages": 1
},
"256": {
"BLOCK_N": 16,
"num_warps": 2,
"num_stages": 1
}
}
}
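To apply the suggested formatting to every new config file at once, a small one-off script along these lines could be used; the script name is a placeholder and this is only a sketch, not part of the PR.

import json
import sys

# Usage: python format_configs.py path/to/config_a.json path/to/config_b.json ...
# Rewrites each file in place with 2-space indentation and a trailing newline.
for path in sys.argv[1:]:
    with open(path) as f:
        data = json.load(f)
    with open(path, "w") as f:
        json.dump(data, f, indent=2)
        f.write("\n")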