Foundation models trained on vast amounts of data have demonstrated remarkable reasoning and generation capabilities in the domains of text, images, audio and video. Our goal is to build such a foundation model for 3D intelligence, a model that can support developers in producing all aspects of a Roblox experience, from generating 3D objects and scenes to rigging characters for animation to producing programmatic scripts describing object behaviors. As we start open-sourcing a family of models towards this vision, we hope to engage others in the research community to address these goals with us.
Cube 3D is our first step towards 3D intelligence: it comprises a shape tokenizer and a text-to-shape generation model. We are unlocking the power of generating 3D assets and enhancing creativity for all artists. Our latest version of Cube 3D is now accessible to individuals, creators, researchers, and businesses of all sizes so that they can experiment, innovate, and scale their ideas responsibly. This release includes model weights and starter code for using our text-to-shape model to create 3D assets.
Clone and install this repo in a virtual environment, via:
git clone https://github.com/Roblox/cube.git
cd cube
pip install -e .[meshlab]
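The commands above assume an activated virtual environment. One way to create and activate one before running `pip install` (a sketch using Python's built-in `venv`; any environment manager works):

```bash
# create and activate an isolated environment for the cube dependencies
python -m venv .venv
source .venv/bin/activate   # on Windows: .venv\Scripts\activate
```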
CUDA: If you are using a Windows machine, you may need to install the CUDA toolkit as well as `torch` with CUDA support via `pip install torch --index-url https://download.pytorch.org/whl/cu124 --force-reinstall`.
Note: `[meshlab]` is an optional dependency and can be removed by simply running `pip install -e .` for better compatibility, but mesh simplification will be disabled.
Download the model weights from Hugging Face or use the `huggingface-cli`:
huggingface-cli download Roblox/cube3d-v0.1 --local-dir ./model_weights
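Alternatively, a minimal sketch of the same download using the `huggingface_hub` Python package (assuming it is installed in your environment):

```python
from huggingface_hub import snapshot_download

# Download the Cube 3D checkpoints into ./model_weights
snapshot_download(repo_id="Roblox/cube3d-v0.1", local_dir="./model_weights")
```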
To generate 3D models using the downloaded weights, simply run:
python -m cube3d.generate \
--gpt-ckpt-path model_weights/shape_gpt.safetensors \
--shape-ckpt-path model_weights/shape_tokenizer.safetensors \
--fast-inference \
--prompt "Broad-winged flying red dragon, elongated, folded legs."
Note: `--fast-inference` is optional and may not be available on GPUs with limited VRAM. This flag will also not work on macOS.
The output will be an `.obj` file saved in the specified output directory.
If you want to render a turntable GIF of the mesh, you can use the `--render-gif` flag, which saves a `turntable.gif` in the specified output directory.
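For example, a sketch combining the flags described above (the prompt is illustrative):

```bash
python -m cube3d.generate \
    --gpt-ckpt-path model_weights/shape_gpt.safetensors \
    --shape-ckpt-path model_weights/shape_tokenizer.safetensors \
    --render-gif \
    --prompt "A small stone pagoda"
```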
We provide several example output objects and their corresponding text prompts in the `examples` folder.
Note: To render the turntable GIF, you must have Blender (version >= 4.3) installed and available on your system's PATH so that the Blender executable can be invoked from the command line. You can download it from Blender's official website.
Note: If shape decoding is slow, you can specify a lower resolution using the `--resolution-base` flag. A lower resolution produces a coarser, lower-quality output mesh but decodes faster. Values between 4.0 and 9.0 are recommended.
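For example, a sketch of the generation command with a lower decoding resolution (6.0 is just an illustrative value within the recommended range):

```bash
python -m cube3d.generate \
    --gpt-ckpt-path model_weights/shape_gpt.safetensors \
    --shape-ckpt-path model_weights/shape_tokenizer.safetensors \
    --resolution-base 6.0 \
    --prompt "Broad-winged flying red dragon, elongated, folded legs."
```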
To tokenize a 3D shape into token indices and reconstruct it back, you can use the following command:
python -m cube3d.vq_vae_encode_decode \
--shape-ckpt-path model_weights/shape_tokenizer.safetensors \
--mesh-path ./outputs/output.obj
This will process the `.obj` file located at `./outputs/output.obj`, print the tokenized representation, and export the mesh reconstructed from the token indices.
We have tested our model on:
- Nvidia H100 GPU
- Nvidia A100 GPU
- Nvidia GeForce RTX 3080
- Apple Silicon M2-M4 chips
We recommend using a GPU with at least 24GB of VRAM when using `--fast-inference` (or `EngineFast`), and 16GB otherwise.
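As a rough guide, a minimal sketch (not part of the library) for checking available VRAM before choosing between `EngineFast` and `Engine`:

```python
import torch

# Report total VRAM on the first CUDA device; EngineFast is recommended with >= 24GB,
# the plain Engine with >= 16GB (and is the only option on non-CUDA devices).
if torch.cuda.is_available():
    total_vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    print(f"Detected {total_vram_gb:.1f} GB of VRAM; "
          f"fast inference {'is' if total_vram_gb >= 24 else 'is not'} recommended.")
else:
    print("No CUDA device detected; use Engine instead of EngineFast.")
```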
We have designed a minimalist API that allows this repo to be used as a Python library:
import torch
import trimesh
from cube3d.inference.engine import Engine, EngineFast
# load ckpt
config_path = "cube3d/configs/open_model.yaml"
gpt_ckpt_path = "model_weights/shape_gpt.safetensors"
shape_ckpt_path = "model_weights/shape_tokenizer.safetensors"
engine_fast = EngineFast( # only supported on CUDA devices, replace with Engine otherwise
config_path,
gpt_ckpt_path,
shape_ckpt_path,
device=torch.device("cuda"),
)
# inference
input_prompt = "A pair of noise-canceling headphones"
# NOTE: Reduce `resolution_base` for faster inference and lower VRAM usage
# The `top_p` parameter controls randomness between inferences:
# Float < 1: Keep smallest set of tokens with cumulative probability ≥ top_p. Default None: deterministic generation.
mesh_v_f = engine_fast.t2s([input_prompt], use_kv_cache=True, resolution_base=8.0, top_p=0.9)
# save output
vertices, faces = mesh_v_f[0][0], mesh_v_f[0][1]
_ = trimesh.Trimesh(vertices=vertices, faces=faces).export("output.obj")
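On machines without CUDA (for example, Apple Silicon), a sketch of the same workflow with the slower `Engine` class (constructor arguments mirror the `EngineFast` call above; the device choice and resolution here are illustrative assumptions):

```python
import torch
import trimesh
from cube3d.inference.engine import Engine

config_path = "cube3d/configs/open_model.yaml"
gpt_ckpt_path = "model_weights/shape_gpt.safetensors"
shape_ckpt_path = "model_weights/shape_tokenizer.safetensors"

# Engine works on non-CUDA devices; prefer MPS on Apple Silicon, otherwise fall back to CPU.
device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu")
engine = Engine(config_path, gpt_ckpt_path, shape_ckpt_path, device=device)

# A lower resolution_base trades mesh quality for faster decoding on weaker hardware.
mesh_v_f = engine.t2s(["A pair of noise-canceling headphones"], use_kv_cache=True, resolution_base=6.0)
vertices, faces = mesh_v_f[0][0], mesh_v_f[0][1]
trimesh.Trimesh(vertices=vertices, faces=faces).export("output.obj")
```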
If you find this work helpful, please consider citing our technical report:
@article{roblox2025cube,
title = {Cube: A Roblox View of 3D Intelligence},
author = {Roblox, Foundation AI Team},
journal = {arXiv preprint arXiv:2503.15475},
year = {2025}
}
We would like to thank the contributors of the TRELLIS, CraftsMan3D, threestudio, Hunyuan3D-2, minGPT, dinov2, OptVQ, and 1d-tokenizer repositories for their open-source contributions.