Start Training a State of the Art Image Classifier within Minutes with Zero Coding Knowledge - Now with TensorFlow and PyTorch Support!
Demo Video
·
Docker Image
·
Report Bug
·
Request Feature
- Table of Contents
- About The Project
- Demo
- Features
- Hardware Requirements
- Getting Started
- Setup and Usage
- Framework Guide
- Changelog
- Roadmap
- Contributing
- License
- Contact
Containerized image classification training utility (TensorFlow and PyTorch) with a Streamlit-based interface. Choose between common architectures and optimizers for quick hyperparameter tuning, drastically lowering experimentation time.
YouTube Video Link: https://youtu.be/gbuweKMOucc
- No Coding Required - I have said this enough, and I will repeat it one last time: no need to touch any programming language. Just a few clicks and you can start training!
- Easy-to-use UI - Built with Streamlit, the interface is user-friendly and straightforward, so anybody can use it with ease. A few selects, a few sliders, and you can start training. Simple!
- Live and Interactive Plots - Want to know how your training is progressing? Easy! Visualize and compare the results live on your dashboard, and watch the exponentially decaying loss curve build up from scratch!
- Multi-Framework Support - Now supports both TensorFlow and PyTorch! Choose the framework that works best for you.
- Multiple Docker Images - Three optimized Docker images available:
- TensorFlow-only: Lightweight image with only TensorFlow
- PyTorch-only: Lightweight image with only PyTorch
- Both Frameworks: Complete image with both TensorFlow and PyTorch
- Best Practices - Implements best practices for both frameworks including:
- Mixed Precision Training (AMP for PyTorch, mixed_float16/bfloat16 for TensorFlow)
- Learning Rate Scheduling
- Early Stopping
- Model Checkpointing
- TensorBoard Logging
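Of the practices above, early stopping is the easiest to illustrate outside either framework: track the best validation loss and halt once it stops improving for a set number of epochs. Below is a minimal, framework-agnostic sketch for illustration only, not the toolkit's actual callback code (both TensorFlow's `tf.keras.callbacks.EarlyStopping` and a hand-rolled PyTorch loop follow the same logic):

```python
class EarlyStopping:
    """Stop training when validation loss stops improving.

    Framework-agnostic illustration of the early-stopping pattern.
    """

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience      # epochs to wait after the last improvement
        self.min_delta = min_delta    # minimum change that counts as an improvement
        self.best_loss = float('inf')
        self.counter = 0

    def step(self, val_loss):
        """Return True when training should stop."""
        if val_loss < self.best_loss - self.min_delta:
            self.best_loss = val_loss  # new best: reset the wait counter
            self.counter = 0
        else:
            self.counter += 1          # no improvement this epoch
        return self.counter >= self.patience


# Example: validation loss plateaus after epoch 2, so training
# stops three (patience) epochs later.
stopper = EarlyStopping(patience=3)
for epoch, loss in enumerate([0.9, 0.7, 0.65, 0.66, 0.66, 0.67]):
    if stopper.step(loss):
        print(f'early stop at epoch {epoch}')  # prints: early stop at epoch 5
        break
```

The `min_delta` knob prevents tiny fluctuations from resetting the counter, which matters for noisy validation curves.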
If you want to go in-depth into the technical details, there are too many to list here. I would invite you to check out the Changelog, where every feature is described in detail.
We recommend an Nvidia GPU for training; however, the application can also run on a CPU (not recommended).
Google Cloud TPUs are supported in the code, but this has not been tested.
- CPU: AMD Ryzen 7 3700X - 8 Cores 16 Threads
- GPU: Nvidia GeForce RTX 2080 Ti 11 GB
- RAM: 32 GB DDR4 @ 3200 MHz
- Storage: 1 TB NVMe SSD
- OS: Ubuntu 20.10
The above is just the development machine and is by no means necessary to run this application. The minimum hardware requirements are given in the next section.
- CPU: AMD/Intel 4 Core CPU (Intel Core i3 4th Gen or better)
- GPU: Nvidia GeForce GTX 1650 4 GB (You can go lower, but I would not recommend it)
- RAM: 8 GB (Recommended 16 GB)
- Storage: Whatever is required for Dataset Storage + 10 GB for Docker Image
- OS: Any Linux Distribution
- Install Docker Engine
- Install Nvidia Docker Engine (required only for systems with an Nvidia GPU)
- Set up the Dataset Structure:
.
├── Training
│ ├── class_name_1
│ │ └── *.jpg
│ ├── class_name_2
│ │ └── *.jpg
│ ├── class_name_3
│ │ └── *.jpg
│ └── class_name_4
│ └── *.jpg
└── Validation
├── class_name_1
│ └── *.jpg
├── class_name_2
│ └── *.jpg
├── class_name_3
│ └── *.jpg
└── class_name_4
    └── *.jpg

If you don't have your own dataset ready, the toolkit supports downloading common image classification datasets (CIFAR10, CIFAR100, MNIST, FashionMNIST, STL10) and preparing them in the required folder-per-class layout.
Example (Streamlit UI progress integration):
```python
import streamlit as st

from core.data_loader_pytorch import ImageClassificationDataLoaderPyTorch
from utils.add_ons_pytorch import make_streamlit_progress_callback

st.title('Preset Dataset Download')
cb = make_streamlit_progress_callback(prefix='Downloading dataset')

# This will download CIFAR10 into ./data/CIFAR10 (if not present)
# and show progress in Streamlit
dl = ImageClassificationDataLoaderPyTorch(
    data_dir='./data/CIFAR10',
    image_dims=(224, 224),
    preset_name='CIFAR10',
    preset_target_dir='./data/CIFAR10',
    progress_callback=cb,
)
st.write('Dataset ready at:', dl.data_dir)
```

Or use from Python (no Streamlit callback):
```python
from core.data_loader_pytorch import ImageClassificationDataLoaderPyTorch

# Download into ./data/MNIST and prepare the folder layout automatically
dl = ImageClassificationDataLoaderPyTorch(
    data_dir='./data/MNIST',
    preset_name='MNIST',
    preset_target_dir='./data/MNIST',
)
dataloader, dataset = dl.create_dataloader(batch_size=32, augment=False)
```
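Before pointing the trainer at your own data, it can help to sanity-check that the directory follows the folder-per-class layout shown above. Here is a small stdlib-only sketch (`check_dataset` is a hypothetical helper written for this README, not part of the toolkit):

```python
from pathlib import Path

IMAGE_EXTS = {'.jpg', '.jpeg', '.png'}

def check_dataset(root):
    """Verify the Training/Validation folder-per-class layout.

    Returns a dict mapping split name -> {class_name: image_count}.
    Raises ValueError if a split is missing or a class folder is empty.
    """
    root = Path(root)
    report = {}
    for split in ('Training', 'Validation'):
        split_dir = root / split
        if not split_dir.is_dir():
            raise ValueError(f'missing split directory: {split_dir}')
        counts = {}
        for class_dir in sorted(p for p in split_dir.iterdir() if p.is_dir()):
            # Count only files with a recognized image extension
            n = sum(1 for f in class_dir.iterdir()
                    if f.suffix.lower() in IMAGE_EXTS)
            if n == 0:
                raise ValueError(f'no images found in {class_dir}')
            counts[class_dir.name] = n
        report[split] = counts
    return report
```

Run it as `check_dataset('/path/to/dataset')` before mounting the directory into the container; it fails fast on a missing split or an empty class folder.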
Choose your Docker image based on your needs:
Option A: Pull from Docker Hub (when available)
```shell
# For TensorFlow only
docker pull animikhaich/zero-code-classifier:tensorflow

# For PyTorch only
docker pull animikhaich/zero-code-classifier:pytorch

# For both frameworks
docker pull animikhaich/zero-code-classifier:both
```
Option B: Build locally
```shell
# Clone the repository
git clone https://github.com/animikhaich/No-Code-Classification-Toolkit.git
cd No-Code-Classification-Toolkit

# Build all images
bash build-all.sh

# Or build individual images:
# TensorFlow only
docker build -f Dockerfile.tensorflow -t animikhaich/zero-code-classifier:tensorflow .

# PyTorch only
docker build -f Dockerfile.pytorch -t animikhaich/zero-code-classifier:pytorch .

# Both frameworks
docker build -f Dockerfile.both -t animikhaich/zero-code-classifier:both .
```
Run the Docker container:
```shell
# For TensorFlow
docker run -it --gpus all --net host -v /path/to/dataset:/data animikhaich/zero-code-classifier:tensorflow

# For PyTorch
docker run -it --gpus all --net host -v /path/to/dataset:/data animikhaich/zero-code-classifier:pytorch

# For both frameworks
docker run -it --gpus all --net host -v /path/to/dataset:/data animikhaich/zero-code-classifier:both
```
Note: Use `--gpus all` for newer Docker versions, or `--runtime nvidia` for older versions with nvidia-docker.
After training, the trained weights can be found at `/app/model/weights` inside the container.
After training, the TensorBoard logs can be found at `/app/logs/tensorboard` inside the container.
You can use `docker cp <container-name/id>:<path-inside-container> <path-on-host-machine>` to get the weights and logs out. Further details can be found here: Docker cp Docs
For detailed information about choosing between TensorFlow and PyTorch, available models, optimizers, and best practices, see the Framework Guide.
See the Changelog.
See the Open Issues for a list of proposed features (and known issues).
See the Changelog for a list of changes currently in development.
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
Distributed under the GNU AGPL V3 License. See LICENSE for more information.
- Website: Animikh Aich - Website
- LinkedIn: animikh-aich
- Email: animikhaich@gmail.com
- Twitter: @AichAnimikh

