MED-CXX is a C++ project built with an object-oriented design, aiming to solve key medical imaging problems through deep learning. The project focuses on implementing and training several convolutional neural network (CNN) architectures — including U-Net, ResNet, and DenseNet — entirely in C++ using LibTorch (the PyTorch C++ API).
These models are applied to core medical tasks such as blood vessel segmentation, disease detection from medical scans, and image-based classification of pathological conditions.
MED-CXX was developed as part of the Object-Oriented Programming (OOP) lab course project at the Faculty of Mathematics and Computer Science, University of Bucharest, with a focus on clarity, reusability, and clean class structure.
- UNet, ResNet, and DenseNet implemented using LibTorch
- Blood vessel segmentation on DRIVE/STARE datasets with visualization
- Weighted BCE + Dice loss for handling class imbalance
- Model saving/loading utilities (`saveModel`, `loadModel`)
- GIF/video demo generator showing side-by-side input / ground truth / prediction
- `ImageLoader` with caching, preprocessing, and grayscale/threshold conversion
- Modular class structure (`BaseModel`, `ImageLoader`, `Benchmark`, etc.)
- Metrics: Accuracy, Precision, Recall, F1 Score, IoU, MAE, Hausdorff
- Auto device selection (CUDA or CPU)
- Unified `med-cxx` runner CLI via `common::ArgParser`
- Config flags:
  - `--model`: UNet / DenseNet / ResNet
  - `--train-dir`: path to training images (and masks for segmentation)
  - `--test-dir`: path to test images (and masks)
  - `--cls-train-dir`: list of class folders for classification
  - `--model-name`: human-friendly name (prefix for weight files & demos)
  - `--weights`: load an existing `.pt` model file
  - `--skip-train`: skip training if weights are provided
  - `--epochs`, `-e`: number of training epochs
  - `--lr`: learning rate
  - `--bce-weight`: positive-class weight for BCE loss
  - `--cuda`: request GPU (falls back to CPU if unavailable)
  - `--resnet-ver`: select ResNet 18/34/50/101/152
  - `--video`: generate demo video (segmentation)
  - `--fps`: frames per second for the demo video
  - `--hold`: how many frames to hold each sample
  - `--bar-width`: width of the training progress bar
- Extracted `BaseTrainer`, `SegmentationTrainer`, `ClassificationTrainer`
- `printProgressBar` utility in `common::Utils`
- Clean separation: `runners/`, `trainer/`, `common/`, `models/`
- Design Patterns:
  - Template Method: `BaseModel::predict` defines the skeleton of inference; subclasses (UNet, ResNet, DenseNet) fill in the details.
  - Composite: all layer modules (`DoubleConv`, `Down`, `Up`, `BasicBlock`, etc.) inherit from a common base and can be nested arbitrarily inside `Sequential` containers.
  - Factory: `ArgParser` plus the runner's `switch (cfg.modelType)` cleanly instantiates the different `BaseModel` subtypes.
  - Singleton (planned): a single global `Config` object to hold all CLI flags and share them across trainers/visualizers.
  - Facade (planned): `BaseTrainer` + `SegmentationTrainer`/`ClassificationTrainer` hide the details of data loading, optimization, loss, metrics, and video generation behind a simple `train()`/`evaluate()` interface.
```bash
# Show help
./med-cxx --help

# Segmentation: train & evaluate UNet
./med-cxx \
  --model UNet \
  --train-dir data/train \
  --test-dir data/test \
  --model-name vessels \
  --epochs 200 \
  --lr 1e-4 \
  --bce-weight 3.0 \
  --cuda \
  --video \
  --fps 2 \
  --hold 3
```
```bash
# Classification: train & evaluate DenseNet on folder-per-class data
./med-cxx \
  --model DenseNet \
  --cls-train-dir data/cls/train \
  --cls-test-dir data/cls/test \
  --model-name classification_run \
  --epochs 50 \
  --lr 5e-4 \
  --skip-train \
  --weights classification_run.pt
```
```bash
# Run ResNet50 inference only on CPU
./med-cxx \
  --model ResNet \
  --resnet-ver 50 \
  --cls-test-dir data/cls/test \
  --skip-train \
  --weights resnet50_run.pt
```

```bash
# Install dependencies
sudo apt update
sudo apt install cmake libopencv-dev build-essential

# Clone the repo
git clone https://github.com/IAMSebyi/med-cxx.git
cd med-cxx

# Download LibTorch (replace <COMPUTE PLATFORM> with cpu, cu118, cu126, or cu128)
wget https://download.pytorch.org/libtorch/<COMPUTE PLATFORM>/libtorch-cxx11-abi-shared-with-deps-2.7.0%2B<COMPUTE PLATFORM>.zip
unzip libtorch-cxx11-abi-shared-with-deps-2.7.0%2B<COMPUTE PLATFORM>.zip

# Build the project
mkdir build && cd build
cmake ..
make
```

```bash
# Clone the repo
git clone https://github.com/IAMSebyi/med-cxx.git
cd med-cxx

# Build the project
mkdir build && cd build
cmake ..
make
```

Ensure the *.dll files from libtorch/lib and opencv/build/x64/vcXX/bin are either in your PATH or next to your executable.
```bash
# Install dependencies
brew install cmake opencv

# Clone the repo
git clone https://github.com/IAMSebyi/med-cxx.git
cd med-cxx

# Download LibTorch (only CPU is supported on macOS)
wget https://download.pytorch.org/libtorch/cpu/libtorch-macos-arm64-2.7.0.zip
unzip libtorch-macos-arm64-2.7.0.zip

# Build the project
mkdir build && cd build
cmake ..
make
```

- PyTorch Documentation
- OpenCV Documentation
- U-Net: Convolutional Networks for Biomedical Image Segmentation - Olaf Ronneberger, Philipp Fischer, Thomas Brox
- Deep Residual Learning for Image Recognition - Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
- Densely Connected Convolutional Networks - Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger
- Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response - Adam Hoover, V. Kouznetsova, Michael Goldbaum
