Institution: Aristotle University of Thessaloniki (AUTH) — School of Electrical and Computer Engineering (ECE)
Course: Neural Networks & Deep Learning
This repository contains the code and reports for the three assignments of the Neural Networks & Deep Learning university course.
The first assignment focuses on building and evaluating Convolutional Neural Network (CNN) architectures for multiclass image classification on the CIFAR-10 dataset, with comparative analysis against classical baseline classifiers.
The task is to build and compare classification models on the CIFAR-10 dataset, which spans 10 image categories. The notebook covers data loading, preprocessing, CNN architecture design, training, and performance evaluation. Results are compared against K-Nearest Neighbors (KNN) and Nearest Centroid (NC) classifiers.
The general workflow is as follows:
- Load Data: The CIFAR-10 dataset is loaded via `keras.datasets`.
- Preprocess Data: Images are normalized and labels are one-hot encoded.
- Define Model: CNN architectures are defined using the Keras Sequential API, with convolutional, pooling, and dense layers.
- Compile and Train: Models are compiled using the Adam optimizer and categorical cross-entropy loss.
- Evaluate: Trained models are evaluated on the test set for accuracy and loss.
- Visualize Results: Training and validation curves are plotted to assess learning progress.
- Baseline Comparison: KNN and Nearest Centroid classifiers are applied on flattened image vectors for comparison.
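The workflow above can be sketched as follows. This is a minimal illustration, not the notebook's exact architecture: the subset sizes, layer widths, and epoch count are chosen here only to keep the demonstration quick.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Load CIFAR-10; a small subset keeps this demonstration quick
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()
x_train, y_train = x_train[:5000], y_train[:5000]
x_test, y_test = x_test[:1000], y_test[:1000]

# Preprocess: scale pixels to [0, 1] and one-hot encode the labels
x_train, x_test = x_train / 255.0, x_test / 255.0
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)

# A small Sequential CNN: conv/pool blocks followed by dense layers
model = keras.Sequential([
    keras.Input(shape=(32, 32, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# Train, then evaluate on the held-out test set
history = model.fit(x_train, y_train, epochs=3, validation_split=0.1, verbose=0)
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=0)
```

The `history` object holds the per-epoch training and validation curves that the notebook plots.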
- Keras / TensorFlow: Used for building, compiling, and training the CNN models.
- NumPy: Numerical operations and array manipulation.
- Scikit-learn: Implementation of KNN and Nearest Centroid baseline classifiers.
- Matplotlib: Visualization of training curves and results.
| File | Description |
|---|---|
| `train_cnn_cifar10.ipynb` | Full pipeline: CNN architecture, training, evaluation, and comparison with KNN and NC baselines. |
| `report_cnn_cifar10.pdf` | Technical report documenting methodology, experiments, and results. |
The second assignment shifts focus from deep learning to classical machine learning methods. It implements and evaluates Support Vector Machines (SVMs) for binary and multiclass classification on CIFAR-10, with comparisons against a Multilayer Perceptron (MLP), KNN, and Nearest Centroid.
The workflow includes intensive preprocessing, such as PCA for dimensionality reduction, as well as hyperparameter tuning via grid search.
- Load Data: The CIFAR-10 dataset is loaded via `keras.datasets`.
- Preprocess Data: Images are flattened, normalized, standardized using `StandardScaler`, and reduced in dimensionality using `PCA`.
- Binary Classification: An SVM is trained on two selected CIFAR-10 classes.
- Multiclass Classification: The SVM is extended to the full 10-class problem using a one-vs-one strategy.
- Hyperparameter Tuning: `GridSearchCV` is used to find the optimal kernel type and regularization parameters.
- MLP Comparison: A simple Multilayer Perceptron is built using `tensorflow.keras` as an additional comparison model.
- Baseline Comparison: KNN and Nearest Centroid are applied as additional baselines.
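A condensed sketch of the SVM side of this workflow is shown below. The subset sizes, the `n_components=50` PCA setting, and the parameter grid are illustrative placeholders, not the values tuned in the notebook; `SVC` handles the 10-class case with its built-in one-vs-one strategy.

```python
from tensorflow import keras
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import accuracy_score

# Load CIFAR-10 and flatten images into vectors (small subset for speed)
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()
x_train = x_train[:2000].reshape(2000, -1) / 255.0
y_train = y_train[:2000].ravel()
x_test = x_test[:500].reshape(500, -1) / 255.0
y_test = y_test[:500].ravel()

# Standardize, reduce dimensionality with PCA, then fit the SVM
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=50)),
    ("svm", SVC()),
])

# Grid search over kernel type and regularization strength
param_grid = {"svm__kernel": ["linear", "rbf"], "svm__C": [0.1, 1, 10]}
grid = GridSearchCV(pipe, param_grid, cv=3)
grid.fit(x_train, y_train)

acc = accuracy_score(y_test, grid.predict(x_test))
```

The `svm__` prefixes route each grid parameter to the `SVC` step inside the pipeline, so scaling and PCA are refit on every cross-validation fold.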
- Keras / TensorFlow: Used for loading the CIFAR-10 dataset and building the MLP comparison model.
- Scikit-learn: SVM (`SVC`), `GridSearchCV`, `StandardScaler`, `PCA`, `accuracy_score`, `hinge_loss`, KNN, Nearest Centroid.
- NumPy: Numerical operations.
- Matplotlib: Visualization of results.
- Time: Tracking training and inference time.
| File | Description |
|---|---|
| `svm_classifier.ipynb` | Full pipeline: binary SVM, multiclass SVM, MLP, KNN, and NC classifiers with hyperparameter tuning and performance comparison. |
| `report_svm_cifar10.pdf` | Technical report documenting methodology, experiments, and results. |
The third assignment implements a Radial Basis Function Neural Network (RBFNN) from the ground up for classification tasks on the CIFAR-10 dataset, covering both binary and multiclass scenarios, and comparing against KNN and Nearest Centroid baselines.
Unlike the previous assignments that rely on Keras layers for the full model, this assignment involves a more hands-on implementation of the RBF layer itself.
- Load Data: CIFAR-10 is loaded via `tensorflow.keras.datasets`. Classes can be selected for binary or multiclass experiments.
- Preprocess Data: Images are converted to grayscale, flattened, standardized, and their dimensionality is reduced using PCA.
- RBF Center Selection: K-Means clustering is applied to the training data to determine the RBF network centers.
- Model Training: Hidden layer activations are computed using Gaussian radial basis functions, and output layer weights are trained.
- Evaluation: Model performance is assessed using accuracy score and confusion matrix on the test set.
- Parameter Analysis: The effect of the number of hidden centers and spread parameter is analyzed.
- Baseline Comparison: KNN and Nearest Centroid classifiers are applied for direct comparison.
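The core RBFNN steps can be sketched as below. This is a simplified stand-in for the notebook's implementation: the subset sizes, `n_components=50`, the 100 centers, the mean-inter-center-distance heuristic for the spread σ, and the least-squares fit of the output weights are all assumptions made here for illustration.

```python
import numpy as np
from tensorflow import keras
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.metrics import accuracy_score

# Load CIFAR-10, convert to grayscale, flatten (small subset for speed)
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()
x_train, y_train = x_train[:3000].mean(axis=-1).reshape(3000, -1), y_train[:3000].ravel()
x_test, y_test = x_test[:500].mean(axis=-1).reshape(500, -1), y_test[:500].ravel()

# Standardize, then reduce dimensionality with PCA
scaler = StandardScaler().fit(x_train)
pca = PCA(n_components=50).fit(scaler.transform(x_train))
x_train = pca.transform(scaler.transform(x_train))
x_test = pca.transform(scaler.transform(x_test))

# K-Means on the training data determines the RBF centers
centers = KMeans(n_clusters=100, n_init=10, random_state=0).fit(x_train).cluster_centers_
sigma = cdist(centers, centers).mean()  # heuristic spread

def rbf_activations(x, centers, sigma):
    """Gaussian radial basis activations for each sample/center pair."""
    return np.exp(-cdist(x, centers) ** 2 / (2 * sigma ** 2))

# Train the output layer: least-squares weights onto one-hot targets
phi = rbf_activations(x_train, centers, sigma)
targets = np.eye(10)[y_train]
weights, *_ = np.linalg.lstsq(phi, targets, rcond=None)

# Evaluate on the test set
pred = np.argmax(rbf_activations(x_test, centers, sigma) @ weights, axis=1)
acc = accuracy_score(y_test, pred)
```

Increasing the number of centers or adjusting σ changes the width and overlap of the Gaussian basis functions, which is exactly what the parameter analysis step investigates.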
- TensorFlow / Keras: Used for loading the CIFAR-10 dataset.
- Scikit-learn: `KMeans` for center selection, `PCA`, `StandardScaler`, KNN, Nearest Centroid, `accuracy_score`, confusion matrix.
- NumPy: Core numerical operations for RBF activation computation.
- Matplotlib: Visualization of results, confusion matrices, and performance curves.
| File | Description |
|---|---|
| `train_rbfnn_classifier.ipynb` | Full pipeline: data preprocessing, K-Means center selection, RBF network training, evaluation, and baseline comparisons. |
| `report_rbfnn_classification.pdf` | Technical report documenting methodology, experiments, and results. |
Clone the repository and navigate to the desired project folder:

```bash
git clone https://github.com/eleanazeri/Neural-Networks-Deep-Learning.git
cd Neural-Networks-Deep-Learning
```

Open the notebook for the assignment of interest:

```bash
# Assignment 1 — CNN
cd 01-CNN-Image-Classification
jupyter notebook train_cnn_cifar10.ipynb

# Assignment 2 — SVM
cd 02-SVM-Image-Classification
jupyter notebook svm_classifier.ipynb

# Assignment 3 — RBFNN
cd 03-RBFNN-Classification
jupyter notebook train_rbfnn_classifier.ipynb
```

Install required dependencies via:

```bash
pip install numpy scikit-learn matplotlib tensorflow
```
This repository is submitted as academic coursework for the Neural Networks & Deep Learning course at the Aristotle University of Thessaloniki (ECE AUTH). All code and reports are the original work of the author. Redistribution or reuse for academic submission purposes is not permitted.