
Add conditions_embeddings argument to TransformerBlock, TransformerLayer for DiT (diffusion transformer)#4111

Open
huvunvidia wants to merge 3 commits into NVIDIA:dev from huvunvidia:huvu/diffusion_condition_embeddings

Conversation

@huvunvidia
Contributor

@huvunvidia huvunvidia commented Apr 2, 2026

What does this PR do ?

This PR introduces an extra argument, conditions_embeddings, to TransformerLayer and TransformerBlock.
The argument provides timestep embeddings for diffusion models.
Currently, injecting timestep embeddings requires workarounds:

  • Wan/Cosmos text-to-video: using attention_mask as a placeholder (link), which is not good practice
  • FLUX: overriding TransformerLayer to support the argument and avoiding TransformerBlock altogether (link)

Since these timestep/condition embeddings are used by almost all diffusion models, it is reasonable to add an official argument for them.
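To illustrate the intent of the change, here is a minimal, hypothetical sketch (not Megatron-Core's actual code) of how an optional conditions_embeddings keyword argument can be threaded from TransformerBlock down to each TransformerLayer, so diffusion models no longer need to smuggle timestep embeddings through attention_mask. The class and attribute names below are simplified stand-ins.

```python
# Hypothetical sketch: threading an optional conditions_embeddings argument
# from a block down to its layers. Real Megatron-Core layers operate on
# tensors and use the embeddings for DiT-style adaLN modulation; here we
# only demonstrate the argument plumbing.

class TransformerLayer:
    def forward(self, hidden_states, attention_mask=None, conditions_embeddings=None):
        # A DiT-style layer would consume conditions_embeddings (e.g. timestep
        # embeddings) to modulate its layer norms; this stub just records it.
        self.last_conditions = conditions_embeddings
        return hidden_states


class TransformerBlock:
    def __init__(self, num_layers):
        self.layers = [TransformerLayer() for _ in range(num_layers)]

    def forward(self, hidden_states, attention_mask=None, conditions_embeddings=None):
        # The block forwards the new argument unchanged to every layer,
        # removing the need for attention_mask placeholders or layer overrides.
        for layer in self.layers:
            hidden_states = layer.forward(
                hidden_states,
                attention_mask=attention_mask,
                conditions_embeddings=conditions_embeddings,
            )
        return hidden_states
```

Because the argument defaults to None, existing (non-diffusion) callers of TransformerBlock and TransformerLayer are unaffected.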

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code Typing guidelines
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

Feel free to message or comment @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark as ready once merge-conflicts are resolved and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch: the proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

@huvunvidia huvunvidia requested review from a team as code owners April 2, 2026 19:19
@copy-pr-bot

copy-pr-bot bot commented Apr 2, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@huvunvidia
Contributor Author

/ok to test fe55000

@huvunvidia
Contributor Author

/ok to test 79e241d

…onTransformerLayer

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@huvunvidia
Contributor Author

/ok to test becbc3d
