Conversation

@WANDY666 (Contributor) commented Jan 6, 2026

No description provided.

@gemini-code-assist (Contributor)

Summary of Changes

Hello @WANDY666, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on enhancing the performance of the _fwd_kernel_flash_decode_diverse_stage1 kernel by introducing a wide array of optimized configurations. These new settings are tailored to specific NVIDIA GPUs, namely the RTX 4090D and the RTX 5090, aiming to improve efficiency across operational parameters such as GQA group sizes, batch sizes, and output data types.

Highlights

  • New Kernel Configurations: Introduced a comprehensive set of new tuning configurations for the _fwd_kernel_flash_decode_diverse_stage1 kernel.
  • GPU-Specific Tuning: Added performance tuning parameters specific to the NVIDIA GeForce RTX 4090D and RTX 5090 GPUs.
  • Diverse Parameter Support: Configurations cover various gqa_group_size, max_batch_group_size, and out_dtype settings, including torch.bfloat16 and torch.float16 (a sketch of how such a table might be consumed follows this list).
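
For orientation, here is a minimal sketch of how such a per-GPU tuning table might be consumed at kernel-launch time. The loader name, the "configs" directory, the filename template, and the meaning of the JSON keys are assumptions for illustration; the repository's actual lookup code may differ.

import json
from pathlib import Path

import torch

# Hypothetical loader: the PR's JSON files map a sequence-length bucket
# (e.g. "4096", "8192") to a group-size bucket (e.g. "8", "32"), which in
# turn holds Triton launch parameters for the stage-1 decode kernel.
def load_stage1_config(gpu_name: str, out_dtype: torch.dtype,
                       seq_bucket: int, group_size: int) -> dict:
    fname = f"_fwd_kernel_flash_decode_diverse_stage1_{out_dtype}_{gpu_name}.json"
    table = json.loads(Path("configs", fname).read_text())
    return table[str(seq_bucket)][str(group_size)]

# Example (assumed GPU name string):
# cfg = load_stage1_config("NVIDIA_GeForce_RTX_4090_D", torch.float16, 8192, 32)
# cfg -> {"BLOCK_N": 16, "num_warps": 2, "num_stages": 1}, passed to the
# Triton launch as block size, warp count, and pipeline stages.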


@WANDY666 merged commit 320345d into main on Jan 6, 2026
1 check passed
@WANDY666 deleted the tuning_stage1 branch on January 6, 2026 at 03:06
@gemini-code-assist (bot) left a comment

Code Review

This pull request adds kernel tuning configurations for NVIDIA GeForce RTX 4090D and 5090 GPUs. While adding these configurations is a good step for performance, I have identified two main issues. First, the configuration files for bfloat16 appear to be duplicates of the float16 files, which is likely incorrect and could lead to suboptimal performance; it's recommended to either perform separate tuning for bfloat16 or remove these files. Second, all the new JSON files are unformatted, which harms readability and maintainability, so I've suggested formatting them for consistency and easier reviews in the future. Additionally, the very long filenames could cause problems on some platforms such as Windows due to path-length limits.

@@ -0,0 +1 @@
{"4096": {"8": {"BLOCK_N": 16, "num_warps": 4, "num_stages": 1}, "32": {"BLOCK_N": 16, "num_warps": 4, "num_stages": 1}, "128": {"BLOCK_N": 16, "num_warps": 2, "num_stages": 1}, "256": {"BLOCK_N": 16, "num_warps": 2, "num_stages": 1}}, "8192": {"8": {"BLOCK_N": 16, "num_warps": 2, "num_stages": 1}, "32": {"BLOCK_N": 16, "num_warps": 2, "num_stages": 1}, "128": {"BLOCK_N": 16, "num_warps": 2, "num_stages": 1}, "256": {"BLOCK_N": 16, "num_warps": 2, "num_stages": 1}}} No newline at end of file
@gemini-code-assist (Contributor)

Severity: high

This configuration file for bfloat16 is identical to its float16 counterpart. Based on the tuning script test/kernel/llama_gqa_diverse_decode_stage1_tuning.py, it appears tuning was only performed for torch.half (float16), and the results were then copied for both float16 and bfloat16. This means the bfloat16 configurations are likely not optimal and could lead to performance degradation. It is recommended to either run the tuning specifically for bfloat16 and update these files, or remove the bfloat16 configuration files for now to avoid confusion and potential performance issues. This applies to all bfloat16 config files in this PR.
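
To check the duplication mechanically, a small script like the following can pair each bfloat16 file with its float16 counterpart and flag identical contents. The "configs" directory and the dtype substrings in the filenames are assumptions about this repository's layout, not confirmed by the PR.

import json
from pathlib import Path

# Flag bfloat16 config files whose parsed contents match their float16
# counterparts; comparing parsed JSON means formatting differences
# cannot hide duplicates.
def find_duplicated_bf16_configs(config_dir: str = "configs") -> None:
    for bf16_path in Path(config_dir).glob("*torch.bfloat16*.json"):
        f16_path = Path(str(bf16_path).replace("torch.bfloat16", "torch.float16"))
        if not f16_path.exists():
            continue
        if json.loads(bf16_path.read_text()) == json.loads(f16_path.read_text()):
            print(f"likely copied tuning results: {bf16_path.name}")

if __name__ == "__main__":
    find_duplicated_bf16_configs()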

@@ -0,0 +1 @@
{"4096": {"8": {"BLOCK_N": 16, "num_warps": 4, "num_stages": 1}, "32": {"BLOCK_N": 16, "num_warps": 4, "num_stages": 1}, "128": {"BLOCK_N": 16, "num_warps": 2, "num_stages": 1}, "256": {"BLOCK_N": 16, "num_warps": 2, "num_stages": 1}}, "8192": {"8": {"BLOCK_N": 16, "num_warps": 2, "num_stages": 1}, "32": {"BLOCK_N": 16, "num_warps": 2, "num_stages": 1}, "128": {"BLOCK_N": 16, "num_warps": 2, "num_stages": 1}, "256": {"BLOCK_N": 16, "num_warps": 2, "num_stages": 1}}} No newline at end of file
@gemini-code-assist (Contributor)

Severity: medium

This JSON configuration file is not formatted, which makes it difficult to read and review. For better maintainability, please format the JSON content with indentation and add a newline at the end of the file. This should be applied to all new JSON files in this pull request.

{
  "4096": {
    "8": {
      "BLOCK_N": 16,
      "num_warps": 4,
      "num_stages": 1
    },
    "32": {
      "BLOCK_N": 16,
      "num_warps": 4,
      "num_stages": 1
    },
    "128": {
      "BLOCK_N": 16,
      "num_warps": 2,
      "num_stages": 1
    },
    "256": {
      "BLOCK_N": 16,
      "num_warps": 2,
      "num_stages": 1
    }
  },
  "8192": {
    "8": {
      "BLOCK_N": 16,
      "num_warps": 2,
      "num_stages": 1
    },
    "32": {
      "BLOCK_N": 16,
      "num_warps": 2,
      "num_stages": 1
    },
    "128": {
      "BLOCK_N": 16,
      "num_warps": 2,
      "num_stages": 1
    },
    "256": {
      "BLOCK_N": 16,
      "num_warps": 2,
      "num_stages": 1
    }
  }
}
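
A one-off script along these lines could apply that formatting to every new config file in the PR; the "configs" directory name is an assumption about where the files live.

import json
from pathlib import Path

# Rewrite each JSON config in place with 2-space indentation and a
# trailing newline, as the suggestion above requests.
for path in Path("configs").glob("*.json"):
    data = json.loads(path.read_text())
    path.write_text(json.dumps(data, indent=2) + "\n")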
