Code for the paper "Frame-Level Label Refinement for Skeleton-Based Weakly-Supervised Action Recognition" (AAAI 2023).
Architecture of the Network
conda create -n stal python=3.7
conda activate stal
conda install pytorch==1.10.1 torchvision==0.11.2 torchaudio==0.10.1 -c pytorch
pip install -r requirements.txt
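After installing the requirements, a quick sanity check (not part of the original instructions, just a convenience) confirms that the pinned PyTorch build is in place and that a GPU is visible:

# Optional environment check: prints the installed versions and CUDA status.
import torch
import torchvision
print("torch:", torch.__version__)              # expected: 1.10.1
print("torchvision:", torchvision.__version__)  # expected: 0.11.2
print("CUDA available:", torch.cuda.is_available())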
Due to the distribution policy of the AMASS dataset, we are not allowed to distribute the data directly. Instead, we provide a series of scripts that reproduce our motion segmentation dataset from the BABEL dataset.

Download the AMASS dataset and the BABEL dataset, then unzip and place them in the dataset folder.
Prepare the SMPLH model following this and put the merged model SMPLH_male.pkl into the human_model folder.
The whole directory should look like this (a small layout-check sketch follows the tree):
Skeleton-Temporal-Action-Localization
│   README.md
│   train.py
│   ...
│
└───config
└───prepare
└───...
│
└───human_model
│   └───SMPLH_male.pkl
│
└───dataset
    └───amass
    │   └───ACCAD
    │   └───BMLmovi
    │   └───...
    │
    └───babel_v1.0_release
        └───train.json
        └───val.json
        └───...
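Before running the preparation scripts, the layout can be sanity-checked with a short script. This is only a sketch based on the tree above; adjust the paths if your layout differs:

# Minimal layout check: confirms the folders/files from the tree above exist.
# Run it from the repository root.
import os

expected = [
    "human_model/SMPLH_male.pkl",
    "dataset/amass",
    "dataset/babel_v1.0_release/train.json",
    "dataset/babel_v1.0_release/val.json",
]
missing = [p for p in expected if not os.path.exists(p)]
if missing:
    print("Missing paths:", *missing, sep="\n  - ")
else:
    print("All expected dataset/model paths are present.")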
Also clone the official BABEL code into the dataset folder:

git clone https://github.com/abhinanda-punnakkal/BABEL.git dataset/BABEL
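Optionally, you can verify that the BABEL annotation files load correctly before generating the dataset. This is a minimal sketch that only assumes the train/val JSON files are placed as shown in the directory tree above:

# Loads the BABEL annotation files and reports how many sequences each contains.
import json

for split in ("train", "val"):
    path = f"dataset/babel_v1.0_release/{split}.json"
    with open(path) as f:
        anns = json.load(f)
    print(f"{split}: {len(anns)} annotated sequences")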
Finally, the motion segmentation dataset can be generated by:

bash prepare/generate_dataset.sh
To train and evaluate the model on subset-1 of BABEL, run:

python train.py --config config/train_split1.yaml
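To inspect or adjust the training options before launching, the YAML config can be loaded directly. The snippet below is a sketch that assumes PyYAML is available and makes no assumption about the specific keys inside the file:

# Prints the top-level options of the chosen training config.
import yaml

with open("config/train_split1.yaml") as f:
    cfg = yaml.safe_load(f)
for key, value in cfg.items():
    print(f"{key}: {value}")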
Our code is based on BABEL, 2s-AGCN and FAC-Net.

If you find this work useful, please cite:

@InProceedings{yu2023frame,
title={Frame-Level Label Refinement for Skeleton-Based Weakly-Supervised Action Recognition},
author={Yu, Qing and Fujiwara, Kent},
booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
volume={37},
number={3},
pages={3322--3330},
year={2023}
}
Additionally, this repository contains third-party software. Refer to NOTICE.txt for more details and follow the terms and conditions of their use.
