The purpose of this project is to improve on current state-of-the-art object detection models by creating a vision-based perception model that can identify and draw bounding boxes around common road obstacles (cars, pedestrians, vans, cyclists, etc.) in visual data.
- Training a custom model:
  - Adjust the hyperparameters and the model type selection in `config.yaml`.
  - Run `train.py` (a minimal sketch of how the config and dataset check might work is shown after this list).
    - Note: training currently runs on the KITTI dataset and will need to be generalized for other datasets.
    - The script will check for all of the files and directories necessary to perform training.
      - If they do not exist, the dataset will automatically be downloaded to `DL_project/dataset`.
- Running inference:
  - Select a trained model and copy its path into `inference.py` to use that specific model.
  - Run `inference.py`; be sure to adjust the input images to run inference on (see the sketch after this list).
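
The repository does not show the contents of `config.yaml`, so the following is only a minimal sketch of how `train.py` might read the hyperparameters and model type and trigger the automatic dataset download. The key names (`model_type`, `epochs`, `learning_rate`, `batch_size`) and the `download_kitti` helper in `data_downloader.py` are assumptions for illustration, not the project's actual API.

```python
# Minimal sketch (not the actual train.py): read hyperparameters from config.yaml
# and make sure the KITTI data is present before training starts.
import os
import yaml  # PyYAML

CONFIG_PATH = "config.yaml"
DATASET_DIR = os.path.join("DL_project", "dataset")

with open(CONFIG_PATH, "r") as f:
    cfg = yaml.safe_load(f)

# Hypothetical hyperparameter keys; check config.yaml for the real names.
model_type = cfg.get("model_type", "SimpleYOLO")
epochs = cfg.get("epochs", 50)
learning_rate = cfg.get("learning_rate", 1e-3)
batch_size = cfg.get("batch_size", 16)

# train.py checks for the required files/directories and downloads KITTI if missing.
if not os.path.isdir(DATASET_DIR):
    from data_downloader import download_kitti  # assumed entry point in data_downloader.py
    download_kitti(DATASET_DIR)

print(f"Training {model_type} for {epochs} epochs "
      f"(lr={learning_rate}, batch_size={batch_size})")
```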
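Similarly, a hedged sketch of the inference step is below. The checkpoint path, test image path, model class, and the assumption that the saved file is a plain `state_dict` are all placeholders; the actual `inference.py` may load the model and pre-process images differently.

```python
# Minimal inference sketch (not the actual inference.py). Paths and the model
# class are placeholders; point them at your own trained model and images.
import torch
from torchvision.io import read_image

from SimpleYOLO import SimpleYOLO  # assumed import; use whichever model you trained

MODEL_PATH = "models/simple_yolo_kitti.pt"        # path copied from the models directory (hypothetical name)
IMAGE_PATH = "dataset/images/testing/000001.png"  # example test image (hypothetical name)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Assumes the checkpoint is a state_dict saved with torch.save(model.state_dict(), ...).
model = SimpleYOLO()
model.load_state_dict(torch.load(MODEL_PATH, map_location=device))
model.to(device).eval()

# Load one image, scale to [0, 1], and add a batch dimension.
image = read_image(IMAGE_PATH).float() / 255.0
image = image.unsqueeze(0).to(device)

with torch.no_grad():
    predictions = model(image)  # raw network output; decoding into boxes is model-specific

print(type(predictions))
```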
- File structure:
  - `dataset`: All image files and corresponding labels for training/testing
    - `images`: Training and testing image data
      - `training`
      - `testing`
    - `labels`: Labels for training data (no labels for testing data)
  - `figs`: Custom training output figures
  - `models`: Saved models
  - `src`: Source code
    - `custom`: Custom model training and inference
      - `data_downloader.py`: Download KITTI data
      - `data_processing_kitti.py`: Process the KITTI data for training, validation, and testing
      - `data_processing.py`: Extracts data from the chosen dataset and sets it up for training
      - `inference.py`: Run inference on the trained model
      - `models.py`: NN model definitions; make models here and choose which to utilize (see the selection sketch after this listing)
      - `SimpleYOLO.py`: SimpleYOLO model implementation
      - `TinyYOLO.py`: TinyYOLO model implementation
      - `MidYOLO.py`: AttentionYOLO model implementation
      - `EncoderDecoderYOLO.py`: EncodeYOLO model implementation
      - `solver_kitti.py`: Core function for training the model, called from `run.py`
      - `train.py`: Loads hyperparameters for the model and runs the training
  - `config.yaml`: Model training hyperparameters
  - `environment.yaml`: Conda environment
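
Because `models.py` is described as the place where models are defined and chosen, here is a hedged sketch of one way that selection could be wired to the model type string in `config.yaml`. The class names are taken from the listing above, but the imports, constructor signatures, and registry are assumptions rather than the repository's actual code.

```python
# Sketch of a model-selection helper in the spirit of models.py (assumed, not the
# repository's actual code): map the model-type string from config.yaml to a class.
from SimpleYOLO import SimpleYOLO
from TinyYOLO import TinyYOLO
from MidYOLO import AttentionYOLO
from EncoderDecoderYOLO import EncodeYOLO

# Assumed constructor signatures; the real classes may take different arguments.
_MODEL_REGISTRY = {
    "SimpleYOLO": SimpleYOLO,
    "TinyYOLO": TinyYOLO,
    "AttentionYOLO": AttentionYOLO,
    "EncodeYOLO": EncodeYOLO,
}


def build_model(model_type: str, **kwargs):
    """Instantiate the model named by model_type (e.g. read from config.yaml)."""
    try:
        model_cls = _MODEL_REGISTRY[model_type]
    except KeyError:
        raise ValueError(
            f"Unknown model type '{model_type}'; expected one of {sorted(_MODEL_REGISTRY)}"
        )
    return model_cls(**kwargs)


# Example usage: model = build_model("TinyYOLO")
```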
Note: some other files are created that are not yet stored in their appropriate location in the file structure presented above.
- Environment setup:
  - Create the conda environment from `environment.yaml`.
  - You still need to run `pip install torcheval` after activating the conda environment; the package cannot be installed through conda.