A few questions about the final training setup #87

@RuiqingYoung

Description

Hi, thanks for your great work on LlamaGen!

I'm currently trying to reproduce your training strategy for a similar image modeling task. I have read through the paper and code, but I still have a few questions about the final training setup:

  1. How many GPUs (and what type) were used for training the full model?
  2. What was the global batch size and sequence length during training?
  3. Did you use any specific memory optimization techniques such as gradient checkpointing or ZeRO? (I've sketched what I'm planning to try below.)

Any insight you could share would be extremely helpful for reproducing and scaling the training process. Thanks again!
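
For context on question 3, here is a minimal sketch of what I'm planning to try on my side, in case it helps clarify what I'm asking. It assumes DeepSpeed ZeRO stage 2 plus per-block activation (gradient) checkpointing; the toy model, batch sizes, and GPU count below are my own placeholders, not values taken from LlamaGen:

```python
# Sketch of the memory-saving setup I'm considering (question 3).
# Everything here is a placeholder of mine, not LlamaGen's actual config:
# a toy decoder-only transformer, DeepSpeed ZeRO stage 2, and per-block
# activation (gradient) checkpointing via torch.utils.checkpoint.

import torch.nn as nn
from torch.utils.checkpoint import checkpoint
import deepspeed


class Block(nn.Module):
    """Stand-in for one transformer block (hypothetical)."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.mlp(self.norm2(x))


class ToyGPT(nn.Module):
    """Toy autoregressive model; checkpointing is applied per block."""

    def __init__(self, vocab=16384, dim=512, depth=12):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.blocks = nn.ModuleList(Block(dim) for _ in range(depth))
        self.head = nn.Linear(dim, vocab)
        self.grad_checkpoint = True  # the knob question 3 is about

    def forward(self, idx):
        x = self.embed(idx)
        for blk in self.blocks:
            if self.grad_checkpoint and self.training:
                # Recompute this block's activations in the backward pass
                # instead of storing them, trading compute for memory.
                x = checkpoint(blk, x, use_reentrant=False)
            else:
                x = blk(x)
        return self.head(x)


# DeepSpeed ZeRO-2 config; the numbers are placeholders and assume 8 GPUs
# (train_batch_size = micro_batch * grad_accum * world_size = 8 * 4 * 8).
ds_config = {
    "train_batch_size": 256,
    "train_micro_batch_size_per_gpu": 8,
    "gradient_accumulation_steps": 4,
    "bf16": {"enabled": True},
    "zero_optimization": {"stage": 2},  # shard optimizer states and gradients
}

model = ToyGPT()
engine, _, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
```

Mostly I want to confirm whether something along these lines matches your setup, or whether the model fit in memory without any of this. If you used FSDP or plain DDP instead, that would be just as useful to know.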
