`torch`, `numpy`, `cv2`, `argparse`, `PIL`, `skimage.metrics`, `matplotlib`
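If these are not already installed, something like the following should pull them in. Note that the PyPI package names for `cv2`, `PIL` and `skimage` differ from the import names (`opencv-python`, `Pillow` and `scikit-image` respectively), `argparse` ships with the Python standard library, and no specific versions are pinned here:

```
pip install torch numpy opencv-python Pillow scikit-image matplotlib
```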
- Create dataloader: `dataloader.py`
- Add validation split
- Add discriminator to state-of-the-art model RRIN and mark changes: `model.py`
- Argparse for custom settings
- Evaluation metrics (PSNR and SSIM): `evaluate.py`
- Training/validation loops and saving statistics: `train.py`
- Plotting statistics: `plot_stats.py`
- Generate interpolated frame, optical flow estimates and weight maps: `generate.py`
- Video converter: `convert_vid.py`
The pretrained weights for our 4 experiments can be found in the `weights` folder, with the format `model_{learning_rate}_{batch_size}.pt`.
The experimental data folders for our 4 experiments can be found in the `exps` folder, with the format `exps_{learning_rate}_{batch_size}`. Refer to the "Plot Statistics" section to see how to visualise the data.
Some of our generated samples can be found in the `results` folder:
- `happiness_facial`: results for a facial expression of happiness from the Human ID Project at The University of Texas at Dallas.
- `vimeo_90k_test`: results for the triplet `00001/0830` in Vimeo-90k, which is part of the test set.
```
python3 train.py --vimeo_90k_path /path/to/vimeo-90k/ --save_stats_path /path/to/folder/to/save/experiment/details/ --save_model_path /path/to/save/model/weights.pt
```
- This trains a new model on the Vimeo-90k train set.
- You can specify `--num_epochs`, `--batch_size` and `--lr` as hyperparameters (an example invocation combining these flags is shown after this list).
- Use `--eval_every` to specify how often the model is evaluated on the validation set and the losses are saved.
- Use `--max_num_images` if you do not want to train on the whole dataset.
- Specify `--timeit` if you want timing estimates.
- Use `--time_check_every` to decide how often you want timing estimates, based on the number of batches per interval.
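For instance, a full training call might look like the following. The values are purely illustrative (they are not the settings from our experiments), and `--timeit` is passed as a bare flag:

```
python3 train.py --vimeo_90k_path /path/to/vimeo-90k/ \
    --save_stats_path /path/to/experiment/folder/ \
    --save_model_path /path/to/save/model/weights.pt \
    --num_epochs 10 --batch_size 8 --lr 1e-4 \
    --eval_every 100 --timeit --time_check_every 50
```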
```
python3 plot_stats.py --exp_dir path/to/experiment/folder/
```
- This plots the loss graphs for an experiment.
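To visualise one of the provided experiments, point `--exp_dir` at its folder. The folder name below only illustrates the `exps_{learning_rate}_{batch_size}` naming; substitute one of the folders that actually exists in `exps`:

```
python3 plot_stats.py --exp_dir exps/exps_1e-4_8/
```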
```
python3 evaluate.py --vimeo_90k_path /path/to/vimeo-90k/ --saved_model_path /path/to/model/weights.pt
```
- This evaluates a model on the Vimeo-90k test set for PSNR and SSIM.
```
python3 generate.py --frames_path /path/to/folder/with/target/frames/ --saved_model_path /path/to/model/weights.pt
```
- This generates the frame interpolated between two input frames, along with the corresponding optical flow estimates and weight maps.
- The frames in your `--frames_path` will be sorted; only the first and second frames will be used.
- A folder containing the outputs will be created in your `--frames_path`.
- Optionally, you can set the interpolation timestep with `--t`. It can range from 0 to 1, with 0.5 (the midpoint) being the default value (see the example below).
- NOTE: The two input frames must have the same size as each other, but that shared size can be anything.
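For instance, to generate a frame a quarter of the way from the first frame to the second (the paths are placeholders):

```
python3 generate.py --frames_path /path/to/folder/with/target/frames/ \
    --saved_model_path /path/to/model/weights.pt --t 0.25
```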
```
python3 convert_vid.py --vid_path /path/to/input/video.mp4 --save_vid_path /path/to/save/video.mp4 --saved_model_path /path/to/model/weights.pt
```
- You can use `--print_every` to specify the frame interval for printing progress (see the example below).
- NOTE: The two input frames must have the same size as each other, but that shared size can be anything.
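For example, to print progress every 50 frames during conversion (the paths are placeholders):

```
python3 convert_vid.py --vid_path /path/to/input/video.mp4 \
    --save_vid_path /path/to/save/video.mp4 \
    --saved_model_path /path/to/model/weights.pt --print_every 50
```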