
Fix layerwise expt dp size 1 and contention with element wise distributed optimizer #4138

Open

skyw wants to merge 8 commits into NVIDIA:main from skyw:fix/layerwise-expt-dp-size-1

Conversation

@skyw (Contributor) commented Apr 3, 2026

What does this PR do?

Fixes a couple of issues:

  • In the rare case where expert DP size is 1, `bucket.layerwise_param_flat_sizes` can be `None`, causing later code to crash (see the sketch after this list).
  • `should_disable_forward_pre_hook()` in `training.py` depends on the deprecated `dist_` prefix of the optimizer name.
  • Some checks of `use_distributed_optimizer` in `arguments.py` can fail unnecessarily with the layerwise optimizer.
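
For illustration, a minimal sketch of the kind of guard the first fix implies. Only `layerwise_param_flat_sizes` is named in this PR; the function name, arguments, and fallback behavior below are hypothetical stand-ins for the real distributed-optimizer bucket code:

```python
from typing import List, Optional


def get_layerwise_flat_sizes(bucket, fallback_numel: int) -> List[int]:
    """Hypothetical guard: with expert DP size 1 there is no layerwise
    sharding, so `layerwise_param_flat_sizes` may never be populated."""
    sizes: Optional[List[int]] = getattr(
        bucket, "layerwise_param_flat_sizes", None
    )
    if sizes is None:
        # Previously this None leaked into later indexing code and crashed;
        # treat the bucket as a single unsharded flat buffer instead.
        return [fallback_numel]
    return sizes
```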

A further cleanup to make the element-wise distributed optimizer clearer is also planned.

⚠️ For major changes (either in lines of code or in its impact), please make sure to first share a design doc with the team. If you're unsure of the best way to do so, contact the @mcore-oncall.

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see Typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

Feel free to message or comment the @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark as ready once merge-conflicts are resolved and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch

The proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

skyw added 4 commits April 3, 2026 14:26, each signed off by Hao Wu <skyw@nvidia.com>.
@skyw skyw requested review from a team as code owners April 3, 2026 22:57
copy-pr-bot bot commented Apr 3, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@svcnvidia-nemo-ci svcnvidia-nemo-ci marked this pull request as draft April 3, 2026 22:57
github-actions bot commented Apr 3, 2026

This PR has been automatically converted to draft because all PRs must start as drafts.

When you are ready for review, click Ready for Review to begin the review process. This will:

  1. Add the oncall reviewer (optional reviewer)
  2. Add required review teams based on your changes

See the contribution guide for more details.

@skyw skyw requested review from deepakn94 and mchrzanowski April 3, 2026 22:58
@skyw skyw marked this pull request as ready for review April 3, 2026 23:08
@svcnvidia-nemo-ci svcnvidia-nemo-ci requested a review from a team April 3, 2026 23:08
@skyw (Contributor, Author) commented Apr 3, 2026

/ok to test 17747cb

@svcnvidia-nemo-ci svcnvidia-nemo-ci added this to the Core 0.16 milestone Apr 3, 2026
@skyw skyw changed the title Fix/layerwise expt dp size 1 Fix layerwise expt dp size 1 and contention with element wise distributed optimizer Apr 3, 2026
@skyw (Contributor, Author) commented Apr 4, 2026

/ok to test 48f891c

copy-pr-bot bot commented Apr 4, 2026

/ok to test 48f891c

@skyw, there was an error processing your request: E2

See the following link for more information: https://docs.gha-runners.nvidia.com/cpr/e/2/

@skyw (Contributor, Author) commented Apr 4, 2026

/ok to test 42ca188

@mchrzanowski (Contributor) left a comment

can you add some unit tests to make sure dp=1 continues to work?

lgtm modulo request for tests
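
For illustration only, a sketch of the shape such a regression test might take. The guard and bucket below reuse the hypothetical stand-ins from the description sketch above, not real Megatron-LM symbols:

```python
def get_layerwise_flat_sizes(bucket, fallback_numel):
    # Inline copy of the hypothetical guard sketched in the PR description.
    sizes = getattr(bucket, "layerwise_param_flat_sizes", None)
    return [fallback_numel] if sizes is None else sizes


class _FakeBucket:
    """Mimics a bucket in the expert-DP-size-1 case, where
    layerwise_param_flat_sizes is never populated."""

    layerwise_param_flat_sizes = None


def test_expert_dp_size_1_does_not_crash():
    # At expert DP size 1 the guard should fall back rather than crash.
    assert get_layerwise_flat_sizes(_FakeBucket(), fallback_numel=1024) == [1024]
```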


Projects

None yet

3 participants