Clip mtp grads separately when mtp_detach_heads=True #4116

Draft

yfw wants to merge 8 commits into NVIDIA:main
Conversation
Add grad_norm_skip_threshold config (default 1000) to skip gradient updates when the gradient norm is too large. Zeroes out gradients instead of applying them to prevent training instability from gradient spikes. Co-authored-by: Gerald Shen <geshen@nvidia.com> Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
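A minimal sketch of the skip logic described above, assuming the global grad norm has already been computed for a set of PyTorch parameters; the function name and call site are illustrative, not the actual Megatron-Core optimizer hook.

```python
import torch

def maybe_skip_update(params, grad_norm, grad_norm_skip_threshold=1000.0):
    """Zero out all gradients when the global grad norm exceeds the threshold.

    Returns True if the step should be skipped. Illustrative only; in the
    real optimizer this check sits next to grad clipping inside step().
    """
    if grad_norm is not None and grad_norm > grad_norm_skip_threshold:
        for p in params:
            if p.grad is not None:
                p.grad.zero_()
        return True
    return False
```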
Add mtp_grad_scale_func to ModelParallelConfig to allow independent loss scaling for MTP. Falls back to grad_scale_func if not provided. This enables RL training to use different loss scales for main and MTP losses. Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
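Roughly, the fallback described above could look like the sketch below; the attribute names mtp_grad_scale_func and grad_scale_func come from the commit message, while the surrounding function is a simplified stand-in for the MTP loss path.

```python
def scale_mtp_loss(config, mtp_loss):
    # Prefer the MTP-specific scale function; fall back to the shared
    # grad_scale_func so existing configs keep working unchanged.
    scale_func = getattr(config, "mtp_grad_scale_func", None) or config.grad_scale_func
    return scale_func(mtp_loss) if scale_func is not None else mtp_loss
```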
When labels are not provided (RL training), create shifted labels from input_ids by rolling the tensor. Also handles None tensors in roll_tensor, fixes checkpointed_forward to handle non-tensor kwargs, and clamps num_tokens to prevent division by zero. Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
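A simplified illustration of the label construction and the None-safe roll, assuming plain PyTorch tensors; roll_tensor here is a stand-in for the real helper, and the num_tokens clamp is shown as a one-liner.

```python
import torch

def roll_tensor(tensor, shifts=-1, dims=-1):
    # RL code paths may pass None; just propagate it instead of crashing.
    if tensor is None:
        return None
    return torch.roll(tensor, shifts=shifts, dims=dims)

def make_shifted_labels(input_ids):
    # Next-token labels: shift input_ids left by one position. The wrapped-in
    # final position is expected to be masked out of the loss downstream.
    return roll_tensor(input_ids, shifts=-1, dims=-1)

def safe_num_tokens(loss_mask):
    # Clamp to at least 1 so an all-masked microbatch cannot divide by zero.
    return loss_mask.sum().clamp(min=1)
```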
Refactor MTPLossLoggingHelper to track acceptance rate (correct/total predictions) alongside loss metrics. Computes distributed argmax across tensor parallel ranks to determine predictions without gathering full vocab logits. Logs per-step and cumulative acceptance rates. Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
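The distributed argmax can be sketched as below, assuming vocab-parallel logits where each tensor-parallel rank holds a contiguous vocab shard starting at vocab_start_index; the function names and the two-pass all-reduce trick are illustrative, not the exact MTPLossLoggingHelper code.

```python
import torch
import torch.distributed as dist

def vocab_parallel_argmax(logits, vocab_start_index, tp_group):
    """Argmax over the vocab dimension without gathering full logits.

    Each rank takes a local max/argmax over its shard, maps the local index
    to a global vocab id, then a MAX all-reduce on the values picks the
    winning rank and a second MAX all-reduce propagates its index.
    """
    local_max, local_idx = logits.max(dim=-1)
    global_idx = local_idx + vocab_start_index

    global_max = local_max.clone()
    dist.all_reduce(global_max, op=dist.ReduceOp.MAX, group=tp_group)
    # Ranks that did not hold the global max contribute 0, so MAX keeps the winner.
    global_idx = torch.where(local_max == global_max, global_idx, torch.zeros_like(global_idx))
    dist.all_reduce(global_idx, op=dist.ReduceOp.MAX, group=tp_group)
    return global_idx

def acceptance_counts(mtp_logits, labels, vocab_start_index, tp_group):
    # Acceptance rate = correct / total MTP predictions against the shifted labels.
    preds = vocab_parallel_argmax(mtp_logits, vocab_start_index, tp_group)
    return (preds == labels).sum(), labels.numel()
```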
Add mtp_detach_heads option to TransformerConfig. When enabled, detaches hidden states before passing to MTP heads, preventing MTP loss gradients from flowing back to the main model. Only the MTP heads are trained. Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
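The detach itself is a one-liner; the sketch below assumes a config object carrying the new flag and is not the exact forward-pass code.

```python
def mtp_head_input(hidden_states, config):
    # With mtp_detach_heads=True, gradients from the MTP loss stop here and
    # never reach the main trunk, so only the MTP heads are trained by it.
    if getattr(config, "mtp_detach_heads", False):
        return hidden_states.detach()
    return hidden_states
```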
What does this PR do?
Clip MTP gradients independently when mtp_detach_heads=True
When MTP heads are detached from the main model (mtp_detach_heads=True), the MTP parameter gradients can have very different magnitudes from the main model gradients. Clipping them together using a single global grad norm leads to either under-clipping MTP grads or over-clipping main model grads.
This PR separates MTP gradient clipping from the main model: when mtp_detach_heads=True, the MTP-head gradients are clipped against their own norm instead of being folded into the single global grad norm.
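As a rough sketch of the separation, assuming MTP-head parameters can be picked out by name and that each group gets its own max norm; the predicate and clip values below are placeholders rather than the PR's actual optimizer integration.

```python
from torch.nn.utils import clip_grad_norm_

def clip_main_and_mtp_grads(model, max_norm, mtp_max_norm):
    # Partition parameters into MTP-head params and everything else.
    mtp_params = [p for n, p in model.named_parameters()
                  if "mtp" in n and p.grad is not None]
    main_params = [p for n, p in model.named_parameters()
                   if "mtp" not in n and p.grad is not None]

    # Each group is clipped against its own global norm, so a spike in the
    # detached MTP grads no longer shrinks the main model's update (or vice versa).
    main_norm = clip_grad_norm_(main_params, max_norm)
    mtp_norm = clip_grad_norm_(mtp_params, mtp_max_norm)
    return main_norm, mtp_norm
```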
Note: This PR is part of a related series of MTP post-training improvements. They should be reviewed and merged in the following order, as each builds on the previous:
- mtp_grad_scale_func for MTP loss scaling
- mtp_detach_heads=True (this PR)

Contribution process
Pre-checks
Code review
Feel free to message or tag @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!
All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.
Step 1: Mark PR as "Ready for Review"
Expert reviewers are assigned based on .github/CODEOWNERS. Final Review might get declined if these requirements are not fulfilled.
Step 2: Final Review
For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned. For PRs outside megatron/core, this step is skipped.

Step 3: Approved
Once all required reviewers have approved, the Approved label is applied automatically.

Merge
Any member of mcore-engineers will be able to merge your PR.
For MRs into `dev` branch
The proposed review process for the `dev` branch is under active discussion. MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.