
Commit a9a1320

Update README.md
1 parent 9ff2c67 commit a9a1320

3 files changed, +37 −0 lines changed

README.md

Lines changed: 30 additions & 0 deletions
@@ -1,3 +1,33 @@
# Image Colorization using U-Net and GAN

## Overview

This project implements an image colorization system that transforms grayscale images into naturally colorized outputs using deep learning. By combining a U-Net architecture with a Generative Adversarial Network (GAN), the system learns to predict plausible colors for black-and-white images.
## Setup

To install all required dependencies:

```
pip install -r requirements.txt
```
## Data

The model trains on the Flowers102 dataset, which provides high-quality flower images with diverse colors and patterns. While the dataset includes class labels, we don't use them, since our colorization process is independent of flower species.
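A minimal loading sketch using `torchvision.datasets.Flowers102` is shown below; the 256×256 crop size and batch size are illustrative choices, not values taken from this repository.

```
import torch
from torchvision import datasets, transforms

# Assumed preprocessing: resize/crop to a fixed size and convert to a tensor in [0, 1].
transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(256),
    transforms.ToTensor(),
])

# Class labels are returned by the dataset but ignored for colorization.
train_set = datasets.Flowers102(root="data", split="train",
                                transform=transform, download=True)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)
```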
We chose to work in the CIELAB color space over RGB for three key advantages: it separates brightness (L\*) from color (a\* and b\*), reduces the prediction space from three channels to two, and provides perceptually uniform color representation. This makes the training process more efficient and produces better results.
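The conversion itself can be done with `kornia` (already listed in `requirements.txt`). The sketch below is illustrative; the scaling of L\* and a\*/b\* to roughly [-1, 1] is an assumed convention rather than this project's exact preprocessing.

```
import torch
from kornia.color import rgb_to_lab

def rgb_to_l_ab(rgb: torch.Tensor):
    """Split an RGB batch (B, 3, H, W) in [0, 1] into L and ab tensors.

    kornia returns L in [0, 100] and a/b roughly in [-128, 127];
    the scaling below to ~[-1, 1] is an assumed convention.
    """
    lab = rgb_to_lab(rgb)                # (B, 3, H, W)
    L = lab[:, :1] / 50.0 - 1.0          # grayscale input for the generator
    ab = lab[:, 1:] / 110.0              # two-channel color target
    return L, ab
```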
## Model Architecture

### Generator (U-Net)

The generator implements a U-Net architecture with skip connections between encoder and decoder paths. These connections preserve spatial details by combining low-level features with high-level abstractions. The network receives the grayscale input (L channel) and outputs color predictions (a\* and b\* channels).
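The network code is not part of this commit, so the following is only a schematic sketch of the idea: a small encoder-decoder with skip connections that maps the L channel to two a\*/b\* channels (layer counts and widths are illustrative).

```
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Illustrative U-Net generator: L channel in, ab channels out."""

    def __init__(self):
        super().__init__()
        # Encoder: progressively downsample while increasing channels.
        self.down1 = nn.Sequential(nn.Conv2d(1, 64, 4, 2, 1), nn.LeakyReLU(0.2))
        self.down2 = nn.Sequential(nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2))
        self.down3 = nn.Sequential(nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2))
        # Decoder: upsample and concatenate the matching encoder features (skip connections).
        self.up1 = nn.Sequential(nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU())
        self.up2 = nn.Sequential(nn.ConvTranspose2d(256, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU())
        self.up3 = nn.Sequential(nn.ConvTranspose2d(128, 2, 4, 2, 1), nn.Tanh())

    def forward(self, L):
        d1 = self.down1(L)                     # (B, 64, H/2, W/2)
        d2 = self.down2(d1)                    # (B, 128, H/4, W/4)
        d3 = self.down3(d2)                    # (B, 256, H/8, W/8)
        u1 = self.up1(d3)                      # (B, 128, H/4, W/4)
        u2 = self.up2(torch.cat([u1, d2], 1))  # skip connection from d2
        ab = self.up3(torch.cat([u2, d1], 1))  # skip connection from d1, output in [-1, 1]
        return ab
```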
### Discriminator (PatchGAN)

The discriminator uses a PatchGAN design that classifies N×N patches as real or fake, rather than evaluating the entire image at once. This patch-based approach improves the quality of local textures and color transitions. The discriminator compares the generated colorization against real color images during training.
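As with the generator, the discriminator code is not included in this commit; a typical PatchGAN sketch looks like the following (layer widths assumed). The input is the L channel concatenated with the a\*/b\* channels, and the output is a grid of per-patch real/fake logits rather than a single score.

```
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Illustrative PatchGAN: input is L + ab (3 channels), output is a patch-wise logit map."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2),
            # Final 1-channel map: each output "pixel" judges one N×N patch of the input.
            nn.Conv2d(256, 1, 4, 1, 1),
        )

    def forward(self, l_ab):
        return self.net(l_ab)  # (B, 1, h, w) patch logits; pair with BCEWithLogitsLoss
```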
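For context, one adversarial training step could be organized as below. The pix2pix-style objective (adversarial loss plus an L1 term weighted by `lambda_l1`) is an assumption; the project's actual losses and weights may differ.

```
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()
lambda_l1 = 100.0  # assumed weight on the reconstruction term

def train_step(G, D, opt_G, opt_D, L, ab_real):
    # --- Discriminator: real (L, ab) pairs vs. generated pairs ---
    ab_fake = G(L).detach()
    logits_real = D(torch.cat([L, ab_real], dim=1))
    logits_fake = D(torch.cat([L, ab_fake], dim=1))
    loss_D = 0.5 * (bce(logits_real, torch.ones_like(logits_real)) +
                    bce(logits_fake, torch.zeros_like(logits_fake)))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # --- Generator: fool the discriminator while staying close to the real colors ---
    ab_fake = G(L)
    logits_fake = D(torch.cat([L, ab_fake], dim=1))
    loss_G = bce(logits_fake, torch.ones_like(logits_fake)) + lambda_l1 * l1(ab_fake, ab_real)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```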
## Results

<p align="center">
  <img src="images/results.png">
</p>
## References

I. J. Goodfellow et al. (2014). [*Generative Adversarial Networks*](https://arxiv.org/pdf/1406.2661)

images/results.png

390 KB

requirements.txt

Lines changed: 7 additions & 0 deletions
@@ -0,0 +1,7 @@
kornia
matplotlib
numpy
pyyaml
torch
torchvision
tqdm
