
sail-sg/TeamHOI



👥 TeamHOI: Learning a Unified Policy for Cooperative
Human-Object Interactions with Any Team Size

Stefan Lionar, Gim Hee Lee

Garena, Sea AI Lab, National University of Singapore

CVPR 2026


✨ Overview

TeamHOI is a novel framework for learning a unified decentralized policy for cooperative human-object interactions (HOI) that works seamlessly across varying team sizes and object configurations.

We evaluate our framework on a cooperative table-transport task, in which multiple agents must coordinate to form a stable lifting formation and then carry the object.
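The core idea of a unified decentralized policy can be sketched as follows: one set of policy weights is applied independently to each agent's observation, so the same controller runs unchanged for any team size. This is an illustrative stand-in, not the repository's actual network; `shared_policy`, the observation width of 16, and the action width of 4 are all hypothetical.

```python
import math
from typing import List

Observation = List[float]
Action = List[float]

def shared_policy(obs: Observation) -> Action:
    # Hypothetical stand-in for the learned policy network: maps one
    # agent's observation to that agent's action.
    return [math.tanh(x) for x in obs[:4]]

def decentralized_step(observations: List[Observation]) -> List[Action]:
    # The same weights are applied per agent, so nothing in the
    # controller depends on how many agents are present.
    return [shared_policy(obs) for obs in observations]

for team_size in (2, 4, 8):
    obs = [[0.0] * 16 for _ in range(team_size)]
    print(team_size, len(decentralized_step(obs)))
```

Because the policy is shared and applied per agent, scaling from 2 to 8 agents requires no architectural change, only different inputs.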

Teaser: demonstrations with 2, 4, and 8 agents.

🚀 Getting Started

To set up TeamHOI, follow the steps below:

1. Clone the repository and create conda environment

git clone https://github.com/sail-sg/TeamHOI.git
cd TeamHOI

conda create -n teamhoi python=3.8.20
conda activate teamhoi

2. Download and install IsaacGym

wget https://developer.nvidia.com/isaac-gym-preview-4
tar -xvzf isaac-gym-preview-4
pip install -e isaacgym/python

If you encounter the error ImportError: libpython3.8m.so.1.0: cannot open shared object file: No such file or directory, add the environment's lib directory to the loader path:

export LD_LIBRARY_PATH="/path/to/conda/envs/teamhoi/lib:$LD_LIBRARY_PATH"
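If the teamhoi environment is currently activated, its root is available as $CONDA_PREFIX, so the lib path can be derived instead of hard-coded. A convenience sketch; the /path/to/conda placeholder is the same one used above and must otherwise be substituted with your actual conda install path.

```shell
# Derive the env's lib directory (it holds libpython3.8m.so.1.0).
# Falls back to the placeholder path if no env is active.
CONDA_ENV_LIB="${CONDA_PREFIX:-/path/to/conda/envs/teamhoi}/lib"
export LD_LIBRARY_PATH="${CONDA_ENV_LIB}:${LD_LIBRARY_PATH:-}"
echo "$LD_LIBRARY_PATH"
```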

3. Install other dependencies

pip install -r requirements.txt

🤖 Inference


A sample inference command:
python teamhoi/run.py --system_max_humanoids 8 --num_envs 3 --fix_num_humanoids 8 \
--checkpoint checkpoints/8agents.pth \
--assets default_round,default_square,default_rectangle \
--episode_length 600 --test

Key arguments:

  • --system_max_humanoids: maximum team size; must match the value used during training.
  • --fix_num_humanoids: number of agents simulated at inference. If not specified, the team size is sampled between 2 and --system_max_humanoids.
  • --num_envs: number of parallel simulation environments.
  • --checkpoint: path to pretrained policy.
  • --assets: comma-separated object names defined in the assets folder.
  • --episode_length: episode horizon (simulation steps).
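The flags above can be mirrored with a small argparse sketch. This parser is illustrative only (the repository's actual parser in teamhoi/run.py may differ); it also shows the documented fallback of sampling a team size from 2 to --system_max_humanoids when --fix_num_humanoids is omitted.

```python
import argparse
import random

# Illustrative parser covering the documented inference flags.
parser = argparse.ArgumentParser(description="TeamHOI inference flags (sketch)")
parser.add_argument("--system_max_humanoids", type=int, required=True)
parser.add_argument("--fix_num_humanoids", type=int, default=None)
parser.add_argument("--num_envs", type=int, default=1)
parser.add_argument("--checkpoint", type=str)
parser.add_argument("--assets", type=lambda s: s.split(","))
parser.add_argument("--episode_length", type=int, default=600)
parser.add_argument("--test", action="store_true")

args = parser.parse_args(
    "--system_max_humanoids 8 --num_envs 3 --fix_num_humanoids 8 "
    "--checkpoint checkpoints/8agents.pth "
    "--assets default_round,default_square,default_rectangle "
    "--episode_length 600 --test".split()
)

# When --fix_num_humanoids is omitted, the docs say the team size is
# sampled between 2 and --system_max_humanoids.
team_size = args.fix_num_humanoids or random.randint(2, args.system_max_humanoids)
print(team_size, args.assets)
```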

🦾 Training

1. Stage 1: Walk + lift (4 agents)

python teamhoi/run.py --system_max_humanoids 4 --num_envs 1024 --min_humanoids 1 \
--cfg_train cfg/teamhoi_stage1.yaml --motion_file2 cfg_motions/near_table_stage1.yaml \
--exp_name MyExperiment --wandb_name "MyExperiment_4Ag_stage1" \
--assets default_round,default_square,default_rectangle \
--goal_multiplier 0 --episode_length 400 --headless

2. Stage 1 + 2: Walk + lift + transport (4 agents)

python teamhoi/run.py --system_max_humanoids 4 --num_envs 1024 --min_humanoids 2 \
--cfg_train cfg/teamhoi.yaml --motion_file2 cfg_motions/near_table.yaml \
--exp_name MyExperiment --wandb_name "MyExperiment_4Ag" \
--assets default_round,default_square,default_rectangle \
--goal_multiplier 1 --episode_length 600 --headless

3. Stage 1 + 2: Walk + lift + transport (8 agents finetune)

python teamhoi/run.py --system_max_humanoids 8 --num_envs 1024 --min_humanoids 2 \
--cfg_train cfg/teamhoi.yaml --motion_file2 cfg_motions/near_table.yaml \
--exp_name MyExperiment --wandb_name "MyExperiment_8Ag" \
--assets default_round,default_square,default_rectangle \
--goal_multiplier 1 --episode_length 600 --headless

Additional key arguments:

  • --min_humanoids: Minimum team size when sampling team sizes (range: --min_humanoids to --system_max_humanoids).
  • --motion_file2: Reference motions used when agents are near objects.
  • --exp_name: Experiment name. Checkpoints will be saved to output/[experiment name].
  • --goal_multiplier: multiplier for the transport goal state; set to 0 to disable it during stage 1 training.
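The team-size curriculum implied by --min_humanoids and --system_max_humanoids can be sketched as below. The uniform distribution is an assumption; the repository may sample team sizes differently.

```python
import random

def sample_team_size(min_humanoids: int, system_max_humanoids: int) -> int:
    # Each training episode draws a team size between --min_humanoids and
    # --system_max_humanoids (uniform sampling assumed here).
    return random.randint(min_humanoids, system_max_humanoids)

# e.g. the stage 1+2 command above uses --min_humanoids 2 with
# --system_max_humanoids 4, so episodes run with 2 to 4 agents.
sizes = {sample_team_size(2, 4) for _ in range(200)}
print(sorted(sizes))
```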

Training always continues from output/[experiment name]/ckpt.pth.
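The resume rule can be expressed as a small helper. `resolve_checkpoint` is a hypothetical name, not a function in the repository; it only mirrors the documented behavior of continuing from output/[experiment name]/ckpt.pth when that file exists.

```python
import os
from typing import Optional

def resolve_checkpoint(exp_name: str, output_root: str = "output") -> Optional[str]:
    # Training resumes from output/[experiment name]/ckpt.pth when present;
    # otherwise it starts from scratch (returns None).
    path = os.path.join(output_root, exp_name, "ckpt.pth")
    return path if os.path.isfile(path) else None

print(resolve_checkpoint("MyExperiment"))
```

A practical consequence: reusing an --exp_name silently resumes from the old checkpoint, so start a fresh run under a new experiment name (or remove the stale ckpt.pth).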

🙏 Acknowledgment

This work builds upon ideas and open-source implementations from the following projects:

We sincerely thank the authors for making their research and code publicly available.