Repository for 3DV2022 paper "Domain Adaptive 3D Pose Augmentation for In-the-wild Human Mesh Recovery"


Domain Adaptive 3D Pose Augmentation for In-the-wild Human Mesh Recovery (3DV 2022)

Project Page | Paper

Domain Adaptive 3D Pose Augmentation (DAPA) is a data augmentation method that improves the generalization of HMR models to in-the-wild scenarios. DAPA combines the strengths of synthetic-data methods, which obtain direct supervision from synthesized meshes, and domain adaptation methods, which require only 2D keypoints from the target dataset.

Examples of challenging sports poses in the wild.

Installation instructions

Tested on Ubuntu 16.

conda create -n dapa python==3.6.9
conda activate dapa
pip install -r requirements.txt
git clone https://github.com/nghorbani/human_body_prior.git
cd human_body_prior; python setup.py install; cd ../
  1. Fetch the dependency data using the script from SPIN.
  2. Put the smpl_uv.obj file, the SMPL model files, and the VPoser prior checkpoint in the data folder. Double-check that the paths SMPL_MODEL_DIR, VPOSER_PATH, and UV_MESH_FILE are set correctly in config.py.
  3. Install the external dependency for the texture model using this script.
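Before training, it can help to sanity-check the data paths from step 2. The sketch below uses the variable names described above; the values are illustrative placeholders, not the repo's actual defaults.

```python
import os

# Paths as set in config.py (values here are illustrative assumptions).
paths = {
    "SMPL_MODEL_DIR": "data/smpl",
    "VPOSER_PATH": "data/vposer_v1_0",
    "UV_MESH_FILE": "data/smpl_uv.obj",
}

def missing_paths(paths):
    """Return the names of any configured paths that do not exist on disk."""
    return [name for name, p in paths.items() if not os.path.exists(p)]

print("Missing:", missing_paths(paths))
```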

Training/Evaluation

AGORA experiments

Data Preparation

Download the train/valid/test images and the train/valid ground truths from the AGORA website. Run OpenPose on the valid/test images. The final data folder has the following structure:

- data/agora
    - train_images_3840x2160
    - Cam  # ground truth for the train split
    - validation_images_3840x2160
        - validation  # images
        - keypoints  # OpenPose json files
    - validation_SMPL  # ground truth for the val split
    - test_images_3840x2160
        - test   # images
        - keypoints  # OpenPose json files
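The keypoints folders hold one OpenPose JSON file per image, each with a flat `[x1, y1, c1, x2, y2, c2, ...]` list per detected person (25 joints in BODY_25 format). A minimal reader sketch (the helper name is ours, not from this repo):

```python
import json

def load_openpose_keypoints(path):
    """Return one flat [x, y, confidence] keypoint list per detected person."""
    with open(path) as f:
        data = json.load(f)
    return [p["pose_keypoints_2d"] for p in data.get("people", [])]
```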

Run the data preprocessing script

python -m datasets.agora.preprocess_agora_train
python -m datasets.agora.preprocess_validset_from_openpose valid
python -m datasets.agora.preprocess_validset_from_openpose test

These will generate .npz files in the path specified by config.DATASET_NPZ_PATH.
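The generated archives can be inspected with NumPy; the key names inside vary by dataset, so none are assumed here.

```python
import numpy as np

def summarize_npz(path):
    """Map each key in an .npz archive to the shape of its array."""
    with np.load(path, allow_pickle=True) as data:
        return {k: data[k].shape for k in data.files}
```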

Run training code

We provide the pretrained model checkpoint as part of the supplementary material for ease of reproducing the finetuning results. The pretrained checkpoint and the finetuned checkpoints can be accessed via this anonymous link.

checkpoint=2021_10_23-02_02_06.pt

# our finetuning command
python train.py --name ours \
	--checkpoint ${checkpoint} \
	--resume \
	--checkpoint_steps 500 \
	--log_dir logs \
	--agora \
	--test_steps 1200 \
	--rot_factor 0 \
	--ignore_3d \
	--add_background \
	--use_texture \
	--g_input_noise_scale 0.5 \
	--g_input_noise_type mul \
	--vposer
	
# baseline (SPIN-ft-AGORA-2D) finetuning command
python train.py --name spin \
	--checkpoint ${checkpoint} \
	--resume \
	--checkpoint_steps 500 \
	--log_dir logs \
	--agora \
	--test_steps 1200 \
	--rot_factor 0 \
	--ignore_3d \
	--adapt_baseline \
	--run_smplify

The finetuned checkpoints' performance on the validation set is:

| Model | Filename | MPJPE | NMJE | MVE | NMVE | F1 | Precision | Recall |
|---|---|---|---|---|---|---|---|---|
| SPIN-ft-AGORA-2D | agora_ft_spin_2d.pt | 166 | 218.4 | 165.1 | 217.2 | 0.76 | 0.9 | 0.65 |
| DAPA (Ours) | agora_dapa.pt | 159.4 | 209.7 | 158.6 | 208.7 | 0.76 | 0.9 | 0.65 |
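For reference, AGORA's normalized metrics divide the mean errors by the detection F1 score (NMJE = MPJPE / F1, NMVE = MVE / F1), which reproduces the numbers in the table:

```python
def normalized_error(mean_error, f1):
    """AGORA normalization: divide a mean error by the detection F1 score."""
    return mean_error / f1

# DAPA row: NMJE = 159.4 / 0.76
print(round(normalized_error(159.4, 0.76), 1))  # → 209.7
```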

Run evaluation code

  1. First, clone and build agora_evaluation to get the evaluate_agora CLI.
  2. Prepare prediction files.
name=spin_ft
basePath=path_to_store_pred_files
mkdir ${basePath}/${name}
mkdir ${basePath}/${name}/valid_predictions
python prepare_agora_prediction.py --out_folder ${basePath}/${name}/valid_predictions --checkpoint spin_agora_ft.pt --split validation
  3. Run evaluation on the valid set
debugPath=${basePath}/${name}/debug
resultPath=${basePath}/${name}/results
pred_path=${basePath}/${name}/valid_predictions

imgFolder=data/validation_images_3840x2160/validation
gtFolder=data/validation_SMPL/SMPL/
utilsPath=agora_evaluation/utils
smplPath=data/body_models/smpl

evaluate_agora --pred_path $pred_path --result_savePath $resultPath --imgFolder $imgFolder --loadPrecomputed $gtFolder --modeltype SMPL --indices_path $utilsPath --kid_template_path $utilsPath/smpl_kid_template.npy  --modelFolder $smplPath --baseline demo_model --debug --debug_path $debugPath
  4. Submit test predictions to the test server
mkdir ${basePath}/${name}/predictions
python prepare_agora_prediction.py --out_folder ${basePath}/${name}/predictions --checkpoint spin_agora_ft.pt --split test
zip preds.zip ${basePath}/${name}/predictions/*
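The packaging step can also be done from Python. This sketch is an equivalent of the zip command above that places the prediction files at the archive root (whether the test server requires flat paths is an assumption; directory names are illustrative):

```python
import os
import zipfile

def zip_predictions(pred_dir, out_zip):
    """Bundle every file in pred_dir into out_zip at the archive root."""
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for fname in sorted(os.listdir(pred_dir)):
            zf.write(os.path.join(pred_dir, fname), arcname=fname)
```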

Run on an arbitrary video

  1. Download a YouTube video and extract frames
pip install youtube-dl
youtube-dl PSBOjqCtpEU

mv PSBOjqCtpEU.mkv ./demo

python demo/preprocess.py --path ./demo --fps 5

  2. Run OpenPose
singularity exec --pwd openpose --nv --bind ./demo/PSBOjqCtpEU:/mnt /oak/stanford/groups/syyeung/containers/openpose.sif ./build/examples/openpose/openpose.bin --image_dir /mnt/images/ --face --hand --display 0 --render_pose 1 --write_images /mnt/keypoints --write_json /mnt/keypoints

Now the demo folder will have the following structure:

- demo
	- images
	- keypoints
  3. Run the preprocessing script to store the keypoints and bounding boxes in an npz file.
python demo/preprocess_from_keypoints_video.py --input_path ./demo --out_path DATASET_NPZ_PATH

DATASET_NPZ_PATH is specified in config.py.

  4. Add the new dataset to DATASET_FILES and DATASET_FOLDERS in config.py, and run the training script.
./scripts/finetune_gym.sh
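The `--fps 5` preprocessing step subsamples frames from the source video. A minimal sketch of that selection logic (the script's actual internals are an assumption):

```python
def frames_to_keep(num_frames, src_fps, target_fps):
    """Indices of frames kept when downsampling src_fps footage to target_fps."""
    step = max(1, round(src_fps / target_fps))
    return list(range(0, num_frames, step))

print(frames_to_keep(10, 30, 5))  # → [0, 6]
```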

Run on SEEDLingS

Coming soon.

Acknowledgement

The implementation builds on SPIN and CMR. We thank the authors for generously releasing their code.

Citation

If you find our work useful, please consider citing:

@inproceedings{weng2022domain,
  title={Domain Adaptive 3D Pose Augmentation for In-the-wild Human Mesh Recovery},
  author={Weng, Zhenzhen and Wang, Kuan-Chieh and Kanazawa, Angjoo and Yeung, Serena},
  booktitle={International Conference on 3D Vision},
  year={2022}
}
