My custom deep learning experimentation platform. This template supports automated training of custom deep learning models and records the results.
I assume that you have at least three datasets (CIFAR10, CIFAR100, ImageNet) in the '/dataset' folder. You can change the default dataset location in train.py, and you can freely install more datasets.
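As an illustration, here is a minimal sketch of how the dataset root might be wired up in train.py. The variable and loader names are assumptions for this example (not the template's actual code), and it presumes torchvision-style dataset classes:

```python
# Minimal sketch, assuming torchvision datasets -- DATASET_ROOT and the
# loader below are illustrative, not the template's actual code.
import torchvision
import torchvision.transforms as transforms

DATASET_ROOT = "/dataset"  # edit this to relocate the datasets

train_set = torchvision.datasets.CIFAR10(
    root=DATASET_ROOT,
    train=True,
    download=False,  # assumes the data is already present under /dataset
    transform=transforms.ToTensor(),
)
```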
- architectures: by default, Baseline (a naive model), Quantization-Aware Training (QAT), and Squeeze-and-Excitation (SE) are included. You can delete the existing architectures and freely add your own experimental ones (see the first sketch below this list).
- materials: sample batches from the real datasets (CIFAR and ImageNet) for visualizing your architecture's logic, outputs, and so on. You can also put images/figures here to explain your algorithm in more detail.
- Q: a queue holding the series of experiments to run. By default, sample configurations for Baseline/QAT/SE are included (the second sketch below this list shows a hypothetical example).
- analyze.py: tools and helper functions for analyzing results.
- train.py: training-related components and functions; it is also the training script itself.
- unittest.ipynb: visual unit tests that check, using the samples in the 'materials' folder, that your models (built from the architectures) and components work correctly.
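For example, a new experimental architecture could live in a single module file under architectures/. The sketch below is a standard Squeeze-and-Excitation block in PyTorch; the file layout and any base classes the template expects are assumptions here:

```python
# Hypothetical module under architectures/ -- the template's actual file
# layout and base classes may differ.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation block: channel-wise feature recalibration."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global spatial context
        self.fc = nn.Sequential(             # excitation: per-channel gates
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # reweight channels by the learned gates
```

In unittest.ipynb you would then run a sample batch from materials/ through the module and inspect the output shapes and channel weights.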
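The actual config schema is whatever train.py parses; purely as a hypothetical illustration, a queued experiment entry in Q/ might look like this (every key and file name below is invented for the example):

```python
# Purely hypothetical config for Q/ -- the real schema is defined by train.py.
import json

config = {
    "architecture": "SE",       # one of the modules in architectures/
    "dataset": "CIFAR10",       # expected under /dataset
    "epochs": 100,
    "batch_size": 128,
    "lr": 0.1,
    "out_dir": "experiments/se_cifar10",  # where results would be archived
}

with open("Q/se_cifar10.json", "w") as f:
    json.dump(config, f, indent=2)
```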
After you generate a repository from this template, you have to add an 'experiments' folder, which will archive your experiment results (including models, config files, logs, ...). You should also unstage materials/ and unittest.ipynb, because they are heavy and non-essential components.
- Write your experimental architectures in the architectures/ folder.
- Check that they work correctly with unittest.ipynb.
- Push the configs for your experiments to Q/.
- Run the script:
nohup python train.py Q > log.out 2>&1 & jobs; tail -f log.out
- Analyze the results with analyze.py (you may want to create a new notebook to visualize them); see the sketch below.
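As a hypothetical example of the analysis step (the metrics file path and log format below are invented for illustration; adapt them to whatever train.py actually writes):

```python
# Hypothetical analysis snippet -- the metrics path and format are invented.
import json
import matplotlib.pyplot as plt

with open("experiments/se_cifar10/metrics.json") as f:
    history = json.load(f)  # e.g. {"epoch": [...], "val_acc": [...]}

plt.plot(history["epoch"], history["val_acc"])
plt.xlabel("epoch")
plt.ylabel("validation accuracy")
plt.title("SE on CIFAR10")
plt.show()
```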
Training result logs from the sample configurations, visualized with TensorBoard.
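If train.py logs with torch.utils.tensorboard.SummaryWriter (an assumption; check the script), metrics would be written roughly as below, after which `tensorboard --logdir experiments` opens the dashboard:

```python
# Sketch of TensorBoard logging, assuming train.py uses SummaryWriter.
# The log directory and metric values below are placeholders.
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="experiments/se_cifar10/tb")
for epoch, val_acc in enumerate([0.52, 0.61, 0.67]):  # dummy values
    writer.add_scalar("val/accuracy", val_acc, epoch)
writer.close()
```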