Unleashing Guidance Without Classifiers for Human-Object Interaction Animation

Ziyin Wang¹, Sirui Xu¹, Chuan Guo², Bing Zhou², Jiangshan Gong¹, Jian Wang², Yu-Xiong Wang¹, Liang-Yan Gui¹

¹University of Illinois Urbana-Champaign
²Snap Inc.

ICLR 2026

News

  • [2026-03-27] Initial release of LIGHT.
  • [2026-03-27] Released the inference pipeline.

TODO

  • Release the data processing code
  • Release the checkpoints on all datasets
  • Release the evaluation pipeline
  • Release the training pipeline
  • Release the augmented data

General Description

We introduce LIGHT, a pipeline that generates realistic human-object interaction animations by denoising different components of the motion at different speeds, so cleaner components naturally guide noisier ones - producing contact-aware guidance without any external classifiers or hand-crafted priors.
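The staggered-denoising idea can be illustrated with a toy sketch (this is not the paper's implementation; the two-component split, the `LAG` offset, and the coupling inside `toy_denoise_step` are all hypothetical, chosen only to show how a cleaner component can steer a noisier one without an external classifier):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50        # total denoising steps
LAG = 10      # the object component trails the human component by LAG steps

def toy_denoise_step(x, t, context):
    """Stand-in for a learned denoiser: nudges x toward a context-dependent
    target, more aggressively at low noise (small t)."""
    target = 0.5 * context                      # hypothetical cross-component coupling
    return x + 0.1 * ((T - t) / T) * (target - x)

human = rng.normal(size=3)   # noisy human-motion latent
obj = rng.normal(size=3)     # noisy object-motion latent

for t in reversed(range(T)):
    # The human component runs at timestep t; the object component sits at a
    # noisier timestep, so the cleaner human latent guides its updates.
    human = toy_denoise_step(human, t, context=obj)
    obj = toy_denoise_step(obj, min(t + LAG, T - 1), context=human)
```

Because each update conditions on the other component's current (cleaner) estimate, the guidance signal emerges from the denoising schedule itself rather than from any hand-crafted prior.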

Preparation

Please follow these steps to get started:
  1. Download SMPL+H and SMPL-X.

    Download the SMPL+H model from SMPL+H (choose the Extended SMPL+H model used in the AMASS project), the DMPL model from DMPL (choose DMPLs compatible with SMPL), and the SMPL-X model from SMPL-X. Then place all the models under ./models/. The ./models/ folder tree should be:

    models
    ├── smplh
    │   ├── female
    │   │   ├── model.npz
    │   ├── male
    │   │   ├── model.npz
    │   ├── neutral
    │   │   ├── model.npz
    │   ├── SMPLH_FEMALE.pkl
    │   ├── SMPLH_MALE.pkl
    │   └── SMPLH_NEUTRAL.pkl    
    └── smplx
        ├── SMPLX_FEMALE.npz
        ├── SMPLX_FEMALE.pkl
        ├── SMPLX_MALE.npz
        ├── SMPLX_MALE.pkl
        ├── SMPLX_NEUTRAL.npz
        └── SMPLX_NEUTRAL.pkl
    

    Please follow the smplx tools to merge the SMPL-H and MANO parameters.

  2. Prepare Environment

  • Create and activate a fresh environment:

    conda create -n light python=3.10
    conda activate light
    pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu118

    To install PyTorch3D, please follow the official instructions: PyTorch3D.

    Install remaining packages:

    pip install -r requirements.txt
    
  3. Prepare data
  • OMOMO

    Download the processed dataset from this link

    Expected File Structure:

    InterAct/omomo
    ├── objects
    │   └── object_name
    │       └── object_name.obj
    └── sequences_canonical
        └── id
            └── data.npz
  4. Download pretrained checkpoints

    Download the pretrained checkpoints from this link, and put them in ./save/.
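Before running inference, it can be handy to verify that your checkout matches the layout described above. A minimal sketch (the paths below mirror the folder trees in this README; adjust `EXPECTED` if your setup differs, e.g. if you use a different gender of SMPL model):

```python
from pathlib import Path

# Representative paths taken from the folder trees above.
EXPECTED = [
    "models/smplh/SMPLH_NEUTRAL.pkl",
    "models/smplx/SMPLX_NEUTRAL.npz",
    "InterAct/omomo/objects",
    "InterAct/omomo/sequences_canonical",
    "save",
]

def missing_paths(root="."):
    """Return every expected path that does not exist under `root`."""
    return [p for p in EXPECTED if not (Path(root) / p).exists()]

if __name__ == "__main__":
    missing = missing_paths()
    if missing:
        print("Missing:")
        for p in missing:
            print("  " + p)
    else:
        print("All expected paths found.")
```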

Inference

To run inference with the trained models, execute the following steps:

  • Generate without guidance:

    bash ./scripts/generate.sh
    
  • Generate with our guidance:

    bash ./scripts/generate_guide.sh
    

Citation

If you find this repository useful for your work, please cite:

@inproceedings{wang2026unleashing,
      title = {Unleashing Guidance Without Classifiers for Human-Object Interaction Animation},
      author = {Wang, Ziyin and Xu, Sirui and Guo, Chuan and Zhou, Bing and Gong, Jiangshan and Wang, Jian and Wang, Yu-Xiong and Gui, Liang-Yan},
      booktitle = {ICLR},
      year = {2026}
    }

Please also consider citing the InterAct benchmark that we built our model upon:

@inproceedings{xu2025interact,
    title     = {{InterAct}: Advancing Large-Scale Versatile 3D Human-Object Interaction Generation},
    author    = {Xu, Sirui and Li, Dongting and Zhang, Yucheng and Xu, Xiyan and Long, Qi and Wang, Ziyin and Lu, Yunzhi and Dong, Shuchang and Jiang, Hezi and Gupta, Akshat and Wang, Yu-Xiong and Gui, Liang-Yan},
    booktitle = {CVPR},
    year      = {2025}}
