Official PyTorch implementation of the journal article:
```
@article{mo2022rganet,
  title     = {Realtime Global Attention Network for Semantic Segmentation},
  author    = {Mo, Xi and Chen, Xiangyu},
  journal   = {IEEE Robotics and Automation Letters with ICRA 2022 Presentation},
  year      = {2022},
  month     = jan,
  volume    = {7},
  number    = {2},
  pages     = {1574-1580},
  publisher = {IEEE},
  doi       = {10.1109/LRA.2022.3140443}
}
```
- python >= 3.5
- pytorch >= 1.0.0
- (optional) thop, apex, tqdm
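If you manage packages with pip, the requirements above can be installed roughly as follows (a minimal sketch; apex is typically built from NVIDIA's repository rather than installed from PyPI):

```
# install PyTorch (>= 1.0.0); see pytorch.org for the command matching your CUDA setup
pip install torch

# optional extras
pip install thop tqdm
# apex (optional) is usually installed from https://github.com/NVIDIA/apex
```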
- Create the folder `checkpoint` in the root directory, download our pretrained checkpoint (53.1 MB) to the folder, then run the demo:

  ```
  python demo.py
  ```

  Suction area predictions are saved to the folder `sample`; you can also specify `path/to/checkpoint` and `path/to/samples` with the `-c` and `-d` args respectively, as shown below.
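  For example, a run with explicit (placeholder) paths might look like this:

  ```
  # hypothetical paths; point -c at the downloaded checkpoint and -d at your samples folder
  python demo.py -c path/to/checkpoint -d path/to/samples
  ```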
- Prepare the dataset and train from scratch

  Please consult `utils/configuraion.py` if you want to customize the training setup, then download `suction-based-grasping-dataset.zip` (1.6 GB) and create the folder `dataset` in the root directory. There are two ways to train RGANet-NB (see the sketch after this list):

  1. (default) Extract the main folder `suction-based-grasping-dataset` to `dataset`, then run:

     ```
     python RGANet.py -train
     ```

  2. (customized) Extract the main folder somewhere else and specify the paths:

     ```
     python RGANet.py -train -i path/to/color-input -l path/to/label
     ```
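  For the default layout, the full setup could look like this sketch, assuming the archive was downloaded into the repository root:

  ```
  # extract the dataset so that ./dataset/suction-based-grasping-dataset exists
  mkdir -p dataset
  unzip suction-based-grasping-dataset.zip -d dataset

  # train RGANet-NB with the default configuration
  python RGANet.py -train
  ```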
- Restore training from a checkpoint

  By default, RGANet reads the latest checkpoint from the folder `checkpoint`; you can also specify a checkpoint with the `-c` arg:

  ```
  python RGANet.py -train -r -c path/to/checkpoint
  ```
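  For example, to resume from the most recent checkpoint in `checkpoint`, omit `-c` (assuming `-r` alone picks up the latest checkpoint, as described above):

  ```
  # resume from the latest checkpoint in ./checkpoint
  python RGANet.py -train -r
  ```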
- Test

  A checkpoint is required before any test, and we only provide the RGANet-NB architecture. Please consult `utils/configuraion.py` if you want to customize the testing, then run:

  ```
  python RGANet.py -test
  ```

  Refer to the `-d` arg to specify the path where predictions are saved (see the example below).
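  For instance, writing predictions to a placeholder location:

  ```
  # save test predictions to a chosen folder
  python RGANet.py -test -d path/to/predictions
  ```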
- Validation

  We provide both online and offline validation. Online validation runs tests on all checkpoints and estimates which checkpoint may have the best performance. Offline validation requires predictions saved to disk beforehand, i.e., run a test with predictions written to disk before any offline validation. Please make sure you've set the desired options in `utils/configuraion.py`, then run:

  ```
  python RGANet.py -v
  ```

  or

  ```
  python RGANet.py -test -v
  ```

  Refer to the `-d` arg to specify the path to the predictions; a typical offline sequence is sketched below.
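  Assuming the same predictions folder is passed to `-d` in both steps, offline validation could look like:

  ```
  # 1) run the test and write predictions to disk
  python RGANet.py -test -d path/to/predictions

  # 2) evaluate the saved predictions offline
  python RGANet.py -v -d path/to/predictions
  ```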
- In addition to the functions above, we also provide several useful tools:
| name | description |
|---|---|
| calculator.py | evaluation statistics, calculate model parameters |
| eval_adaptor.py | split items of validation result to separate files |
| models.py | standalone training, validation of segmentation models |
| pred_transform.py | convert other predictions to processable images |
| proportion.py | compute adaptive weights for CE loss and focal loss |
| seg_models.py | standalone runtime test for segmentation models |
License: Apache 2.0