implemented my own from-scratch interpretation of CycleGAN (https://arxiv.org/abs/1703.10593, "Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks"); the only simplification was using 6 residual blocks instead of the paper's 9, for training speed
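for reference, here's a rough sketch (not my exact code) of the resnet-style generator the paper describes: a 7x7 input conv, two stride-2 downsampling convs, the residual blocks (6 here rather than the paper's 9), two upsampling convs, and a 7x7 output conv with tanh. layer ordering follows the standard CycleGAN recipe; the names are mine:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """3x3 conv -> InstanceNorm -> ReLU -> 3x3 conv -> InstanceNorm, plus skip."""
    def __init__(self, channels):
        super().__init__()
        self.block = nn.Sequential(
            nn.ReflectionPad2d(1),
            nn.Conv2d(channels, channels, 3),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.ReflectionPad2d(1),
            nn.Conv2d(channels, channels, 3),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x):
        return x + self.block(x)

class Generator(nn.Module):
    """Roughly c7s1-64, d128, d256, n_res x R256, u128, u64, c7s1-3
    in the paper's notation (the paper uses n_res=9 for 256x256 images)."""
    def __init__(self, n_res=6):
        super().__init__()
        layers = [
            nn.ReflectionPad2d(3),
            nn.Conv2d(3, 64, 7),
            nn.InstanceNorm2d(64),
            nn.ReLU(inplace=True),
        ]
        ch = 64
        for _ in range(2):  # downsampling: 64 -> 128 -> 256 channels
            layers += [nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1),
                       nn.InstanceNorm2d(ch * 2),
                       nn.ReLU(inplace=True)]
            ch *= 2
        layers += [ResidualBlock(ch) for _ in range(n_res)]
        for _ in range(2):  # upsampling back to input resolution
            layers += [nn.ConvTranspose2d(ch, ch // 2, 3, stride=2,
                                          padding=1, output_padding=1),
                       nn.InstanceNorm2d(ch // 2),
                       nn.ReLU(inplace=True)]
            ch //= 2
        layers += [nn.ReflectionPad2d(3), nn.Conv2d(ch, 3, 7), nn.Tanh()]
        self.model = nn.Sequential(*layers)

    def forward(self, x):
        return self.model(x)
```

the generator is fully convolutional, so it preserves spatial size: a `(1, 3, 128, 128)` input comes out as `(1, 3, 128, 128)`.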
used the Monet2Photo dataset on Kaggle to train two generators: one that turns Monet paintings into photos and one that turns photos into Monet paintings (unpaired)
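the unpaired setup works because of the cycle-consistency term: the generator objective is the two adversarial (least-squares) losses plus λ times the L1 cycle losses, with λ = 10 in the paper. a hedged sketch of that combined objective — the function and argument names are mine, not from my actual code:

```python
import torch
import torch.nn.functional as F_nn

def generator_loss(real_x, real_y, G, F, D_X, D_Y, lam=10.0):
    """Generator-side CycleGAN objective (sketch):
    LSGAN adversarial terms + lambda-weighted L1 cycle consistency.
    G: X -> Y generator, F: Y -> X generator, D_X / D_Y: discriminators."""
    fake_y = G(real_x)
    fake_x = F(real_y)
    # least-squares adversarial loss: generators try to push D's output to 1
    adv = (F_nn.mse_loss(D_Y(fake_y), torch.ones_like(D_Y(fake_y))) +
           F_nn.mse_loss(D_X(fake_x), torch.ones_like(D_X(fake_x))))
    # cycle consistency: F(G(x)) should reconstruct x, and G(F(y)) reconstruct y
    cyc = (F_nn.l1_loss(F(fake_y), real_x) +
           F_nn.l1_loss(G(fake_x), real_y))
    return adv + lam * cyc
```

with identity generators and matching inputs the cycle term vanishes, which is a quick sanity check that the weighting is wired up right.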
was only able to train for 5 epochs because each epoch took roughly 30 minutes
here are some sample results; i'd say the photo->monet generator turned out pretty good, but the monet->photo generator needs some extra work
technically this could be addressed by modifying the loss function to penalize that failure mode more harshly, but i decided to stick as closely as possible to the paper's implementation, even including the image buffer class
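for context, the image buffer (the paper follows Shrivastava et al. 2017 here) keeps a pool of 50 previously generated images and, half the time, feeds the discriminator an old image from the pool instead of the newest one, which helps stabilize training. a minimal sketch of that idea — the class and method names are my own, not the paper's:

```python
import random

class ImageBuffer:
    """History buffer of up to `max_size` previously generated images.
    While the buffer is filling, every new image is stored and returned as-is.
    Once full, with probability 0.5 a random stored image is returned (and
    replaced by the new one); otherwise the new image is returned directly."""
    def __init__(self, max_size=50):
        self.max_size = max_size
        self.images = []

    def push_and_pop(self, image):
        if len(self.images) < self.max_size:
            self.images.append(image)
            return image
        if random.random() < 0.5:
            idx = random.randrange(self.max_size)
            old = self.images[idx]
            self.images[idx] = image  # swap new image into the pool
            return old
        return image
```

in training, the discriminator update would use `buffer.push_and_pop(fake_image.detach())` rather than the raw generator output.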
