
Clip mtp grads separately when mtp_detach_heads=True #4116

Draft

yfw wants to merge 8 commits into NVIDIA:main from yfw:yifu/clip_mtp_grads

Conversation


@yfw yfw commented Apr 2, 2026

What does this PR do?

Clip MTP gradients independently when mtp_detach_heads=True

When MTP heads are detached from the main model (mtp_detach_heads=True), the MTP parameter gradients can have very different magnitudes from the main model gradients. Clipping them together using a single global grad norm leads to either under-clipping MTP grads or over-clipping main model grads.

This PR separates MTP gradient clipping from the main model (see the sketch after this list) by:

  • Tagging all MTP block parameters with is_mtp_param=True so the optimizer can distinguish them
  • Refactoring get_main_grads_for_grad_norm into a shared _filter_grads_for_norm helper that accepts a parameter filter predicate
  • Computing and clipping MTP gradient norms independently from the main model gradient norm
  • Propagating the is_mtp_param attribute through Float16OptimizerWithFloat16Params and DistributedOptimizer param copies
  • Applying grad_norm_skip_threshold independently to MTP grads in ChainedOptimizer
  • Exposing mtp_grad_norm on the optimizer for external logging
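
A minimal sketch of the intended split, assuming the `is_mtp_param` tag and a predicate-based `_filter_grads_for_norm` helper as described above; the remaining names and the norm/clipping details are illustrative, not the actual Megatron-Core implementation:

```python
import torch


def _filter_grads_for_norm(params, predicate):
    # Collect gradients only for parameters matching the predicate,
    # e.g. main-model params vs. params tagged with is_mtp_param=True.
    return [p.grad for p in params if p.grad is not None and predicate(p)]


def _group_grad_norm(grads):
    # L2 norm over a group of gradients (0.0 if the group is empty).
    if not grads:
        return torch.tensor(0.0)
    return torch.linalg.vector_norm(
        torch.stack([torch.linalg.vector_norm(g) for g in grads]))


def clip_main_and_mtp_grads(params, max_norm):
    def is_mtp(p):
        return getattr(p, "is_mtp_param", False)

    main_grads = _filter_grads_for_norm(params, lambda p: not is_mtp(p))
    mtp_grads = _filter_grads_for_norm(params, is_mtp)

    # Each group gets its own global norm and its own clip coefficient, so a
    # large MTP norm no longer shrinks main-model gradients (and vice versa).
    main_grad_norm = _group_grad_norm(main_grads)
    mtp_grad_norm = _group_grad_norm(mtp_grads)
    for grads, norm in ((main_grads, main_grad_norm), (mtp_grads, mtp_grad_norm)):
        clip_coeff = max_norm / (norm + 1.0e-6)
        if clip_coeff < 1.0:
            for g in grads:
                g.mul_(clip_coeff)
    return main_grad_norm, mtp_grad_norm
```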

Note: This PR is part of a related series of MTP post-training improvements. They should be reviewed and merged in the following order, as each builds on the previous:

  1. #3460 — Skip gradient updates when grad norm exceeds threshold
  2. #3459 — Add separate mtp_grad_scale_func for MTP loss scaling
  3. #3458 — Roll input IDs for MTP labels in RL mode
  4. #3457 — Add MTP acceptance rate metrics
  5. #3456 — Detach MTP heads from the main model
  6. #4116 — Clip MTP gradients independently when mtp_detach_heads=True (this PR)

⚠️ For major changes (either in lines of code or in their impact), please make sure to first share a design doc with the team. If you're unsure of the best way to do so, contact the @mcore-oncall.

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see Typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

Feel free to message or comment the @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark as ready once merge-conflicts are resolved and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch: the proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

yfw and others added 8 commits April 2, 2026 13:49
Add grad_norm_skip_threshold config (default 1000) to skip gradient
updates when the gradient norm is too large. Zeroes out gradients
instead of applying them to prevent training instability from
gradient spikes.

Co-authored-by: Gerald Shen <geshen@nvidia.com>
Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
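
As a rough sketch of the skip described in this commit, assuming a hypothetical helper around the optimizer step (`grad_norm_skip_threshold` is the real config name; the rest is illustrative):

```python
def maybe_apply_update(optimizer, params, grad_norm, grad_norm_skip_threshold=1000.0):
    # If the global grad norm exceeds the threshold, zero the gradients and
    # skip this update so a single spike cannot destabilize training.
    if grad_norm > grad_norm_skip_threshold:
        for p in params:
            if p.grad is not None:
                p.grad.zero_()
        return False  # update skipped
    optimizer.step()
    return True
```
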
Add mtp_grad_scale_func to ModelParallelConfig to allow independent
loss scaling for MTP. Falls back to grad_scale_func if not provided.
This enables RL training to use different loss scales for main and
MTP losses.

Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
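
A minimal sketch of the fallback behavior; only `mtp_grad_scale_func` and `grad_scale_func` are actual config fields, the helper itself is illustrative:

```python
def scale_mtp_loss(config, mtp_loss):
    # Prefer the MTP-specific scale function; fall back to grad_scale_func.
    scale_func = getattr(config, "mtp_grad_scale_func", None) or config.grad_scale_func
    return scale_func(mtp_loss) if scale_func is not None else mtp_loss
```
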
When labels are not provided (RL training), create shifted labels from
input_ids by rolling the tensor. Also handles None tensors in roll_tensor,
fixes checkpointed_forward to handle non-tensor kwargs, and clamps
num_tokens to prevent division by zero.

Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
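
A simplified sketch of deriving labels by rolling input_ids when none are provided; masking the wrapped-around last position with an ignore index is an assumption here, not necessarily what the PR does:

```python
import torch


def labels_from_input_ids(input_ids):
    # Next-token labels: shift input_ids left by one along the sequence dim.
    labels = torch.roll(input_ids, shifts=-1, dims=-1)
    # The last position wraps around to the first token; mask it out
    # (assumed ignore index of -100).
    labels[..., -1] = -100
    return labels
```
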
Refactor MTPLossLoggingHelper to track acceptance rate (correct/total
predictions) alongside loss metrics. Computes distributed argmax across
tensor parallel ranks to determine predictions without gathering full
vocab logits. Logs per-step and cumulative acceptance rates.

Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
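
A simplified, single-rank sketch of the acceptance-rate metric; the actual change computes a distributed argmax across tensor-parallel ranks, which is omitted here, and the helper name is illustrative:

```python
import torch


def mtp_acceptance_rate(logits, labels, ignore_index=-100):
    # Fraction of valid positions where the MTP head's top-1 prediction
    # matches the label (i.e. would be "accepted").
    preds = logits.argmax(dim=-1)
    valid = labels != ignore_index
    correct = ((preds == labels) & valid).sum()
    total = valid.sum().clamp(min=1)
    return (correct.float() / total.float()).item()
```
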
Add mtp_detach_heads option to TransformerConfig. When enabled, detaches
hidden states before passing to MTP heads, preventing MTP loss gradients
from flowing back to the main model. Only the MTP heads are trained.

Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
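
The mtp_detach_heads commit above boils down to something like the following sketch (`mtp_detach_heads` is the real config flag; the helper is illustrative):

```python
def mtp_head_input(hidden_states, config):
    # With mtp_detach_heads enabled, cut the autograd graph so MTP loss
    # gradients cannot flow back into the main model; only the MTP heads
    # are trained on the MTP loss.
    if getattr(config, "mtp_detach_heads", False):
        return hidden_states.detach()
    return hidden_states
```
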
@yfw yfw requested review from a team as code owners April 2, 2026 21:43

copy-pr-bot bot commented Apr 2, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@svcnvidia-nemo-ci svcnvidia-nemo-ci marked this pull request as draft April 2, 2026 21:43

github-actions bot commented Apr 2, 2026

This PR has been automatically converted to draft because all PRs must start as drafts.

When you are ready for review, click Ready for Review to begin the review process. This will:

  1. Add the oncall reviewer (optional reviewer)
  2. Add required review teams based on your changes

See the contribution guide for more details.
