Ziyin Wang1 Sirui Xu1 Chuan Guo2 Bing Zhou2 Jiangshan Gong1 Jian Wang2 Yu-Xiong Wang1 Liang-Yan Gui1
1University of Illinois Urbana-Champaign
2Snap Inc.
ICLR 2026
- [2026-03-27] Initial release of LIGHT.
- [2026-03-27] Release the inference pipeline.
- Release the data processing code
- Release the checkpoints on all datasets
- Release the evaluation pipeline
- Release the training pipeline
- Release the augmented data
We introduce LIGHT, a pipeline that generates realistic human-object interaction animations by denoising different components of the motion at different speeds, so cleaner components naturally guide noisier ones - producing contact-aware guidance without any external classifiers or hand-crafted priors.
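The core idea — keeping one component of the motion on an earlier (cleaner) point of the noise schedule so it can steer the noisier component — can be illustrated with a toy example. Everything below (the schedule, the lag, the guidance weight, the stand-in denoiser) is purely illustrative and is not the paper's actual model or implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50                                # number of denoising steps
alphas = np.linspace(0.99, 0.8, T)    # toy per-step noise-retention schedule

# Illustrative clean targets standing in for human and object trajectories.
human_clean = np.sin(np.linspace(0, np.pi, 64))
object_clean = np.cos(np.linspace(0, np.pi, 64))

def toy_denoise(x, clean, alpha):
    """Toy denoiser: pulls the sample toward its clean signal (stand-in for a learned model)."""
    return alpha * x + (1.0 - alpha) * clean

# Both components start from pure noise, but the object lags `lag` steps
# behind the human on the schedule, so the human is always cleaner.
human = rng.standard_normal(64)
obj = rng.standard_normal(64)
lag = 10        # schedule offset between components (assumption)
guide_w = 0.1   # strength with which the cleaner component guides the noisier one

for t in range(T):
    human = toy_denoise(human, human_clean, alphas[t])
    if t >= lag:
        obj = toy_denoise(obj, object_clean, alphas[t - lag])
        # Guidance stand-in: nudge the noisier object toward the cleaner
        # human estimate instead of querying an external classifier.
        obj = obj + guide_w * (human - obj)

print("human error:", float(np.abs(human - human_clean).mean()))
print("object error:", float(np.abs(obj - object_clean).mean()))
```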
Please follow these steps to get started:
- Download SMPL+H and SMPL-X.

  Download the SMPL+H model from SMPL+H (choose the Extended SMPL+H model used in the AMASS project), the DMPL model from DMPL (choose DMPLs compatible with SMPL), and the SMPL-X model from SMPL-X. Then place all the models under `./models/`. The `./models/` folder tree should be:

  ```
  models
  ├── smplh
  │   ├── female
  │   │   └── model.npz
  │   ├── male
  │   │   └── model.npz
  │   ├── neutral
  │   │   └── model.npz
  │   ├── SMPLH_FEMALE.pkl
  │   ├── SMPLH_MALE.pkl
  │   └── SMPLH_NEUTRAL.pkl
  └── smplx
      ├── SMPLX_FEMALE.npz
      ├── SMPLX_FEMALE.pkl
      ├── SMPLX_MALE.npz
      ├── SMPLX_MALE.pkl
      ├── SMPLX_NEUTRAL.npz
      └── SMPLX_NEUTRAL.pkl
  ```

  Please follow the smplx tools to merge SMPL-H and MANO parameters.
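A quick way to confirm the layout above is in place. The `missing_files` helper and the path list are our own sanity-check sketch, not part of the repo:

```python
from pathlib import Path

# Expected files under ./models/, per the folder tree above.
EXPECTED = [
    "smplh/female/model.npz",
    "smplh/male/model.npz",
    "smplh/neutral/model.npz",
    "smplh/SMPLH_FEMALE.pkl",
    "smplh/SMPLH_MALE.pkl",
    "smplh/SMPLH_NEUTRAL.pkl",
    "smplx/SMPLX_FEMALE.npz",
    "smplx/SMPLX_FEMALE.pkl",
    "smplx/SMPLX_MALE.npz",
    "smplx/SMPLX_MALE.pkl",
    "smplx/SMPLX_NEUTRAL.npz",
    "smplx/SMPLX_NEUTRAL.pkl",
]

def missing_files(root):
    """Return the expected model files that are absent under `root`."""
    root = Path(root)
    return [rel for rel in EXPECTED if not (root / rel).is_file()]

if __name__ == "__main__":
    missing = missing_files("./models")
    if missing:
        print("Missing model files:", *missing, sep="\n  ")
    else:
        print("All SMPL model files found.")
```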
- Prepare the environment.

  Create and activate a fresh environment:

  ```
  conda create -n light python=3.10
  conda activate light
  pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu118
  ```

  To install PyTorch3D, please follow the official instructions: PyTorch3D.

  Install the remaining packages:

  ```
  pip install -r requirements.txt
  ```
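After installation, a short check can confirm the key packages resolve. The list of import names below is our assumption about what the pipeline needs, not an official manifest:

```python
import importlib.util

# Import names we expect the pipeline to need (assumption).
REQUIRED = ["torch", "torchvision", "torchaudio", "pytorch3d"]

def missing_modules(names):
    """Return the modules from `names` that cannot be found in this environment."""
    return [n for n in names if importlib.util.find_spec(n) is None]

if __name__ == "__main__":
    missing = missing_modules(REQUIRED)
    if missing:
        print("Missing packages:", ", ".join(missing))
    else:
        import torch
        print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
```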
- Prepare data.

  OMOMO

  Download the processed dataset from this link.

  Expected file structure:

  ```
  InterAct/omomo
  ├── objects
  │   └── object_name
  │       └── object_name.obj
  └── sequences_canonical
      └── id
          └── data.npz
  ```
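To peek at a downloaded sequence, you can list whatever arrays a `data.npz` contains (the key names inside vary by dataset, so this makes no assumption about them):

```python
import numpy as np

def summarize_npz(path):
    """Return {array_name: (shape, dtype)} for every array in an .npz archive."""
    with np.load(path, allow_pickle=True) as data:
        return {k: (data[k].shape, data[k].dtype) for k in data.files}

if __name__ == "__main__":
    # Replace `id` with an actual sequence directory name.
    summary = summarize_npz("InterAct/omomo/sequences_canonical/id/data.npz")
    for name, (shape, dtype) in summary.items():
        print(f"{name}: shape={shape}, dtype={dtype}")
```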
- Download pretrained checkpoints.

  Download the pretrained checkpoints from this link, and put them in `./save/`.
To run inference with the trained models, execute the following steps:
- Generate without guidance:

  ```
  bash ./scripts/generate.sh
  ```

- Generate with our guidance:

  ```
  bash ./scripts/generate_guide.sh
  ```
If you find this repository useful for your work, please cite:
```
@inproceedings{wang2026unleashing,
  title     = {Unleashing Guidance Without Classifiers for Human-Object Interaction Animation},
  author    = {Wang, Ziyin and Xu, Sirui and Guo, Chuan and Zhou, Bing and Gong, Jiangshan and Wang, Jian and Wang, Yu-Xiong and Gui, Liang-Yan},
  booktitle = {ICLR},
  year      = {2026}
}
```

Please also consider citing the InterAct benchmark that we built our model upon:
```
@inproceedings{xu2025interact,
  title     = {{InterAct}: Advancing Large-Scale Versatile 3D Human-Object Interaction Generation},
  author    = {Xu, Sirui and Li, Dongting and Zhang, Yucheng and Xu, Xiyan and Long, Qi and Wang, Ziyin and Lu, Yunzhi and Dong, Shuchang and Jiang, Hezi and Gupta, Akshat and Wang, Yu-Xiong and Gui, Liang-Yan},
  booktitle = {CVPR},
  year      = {2025}
}
```
