# brainseg

brainseg is a PyQt6 desktop application for brain MRI abnormality segmentation. It pairs a friendly multi-view interface with a PyTorch UNet backend, making it easy to inspect images, run inference with custom `.pth` checkpoints, compare results against ground truth, and review statistics in one place.
Note: this README is still being updated.
- Highlights
- Screenshots
- Getting Started
- Using the App
- Model Management
- Customization & Settings
- Troubleshooting & Notes
- Development Workflow
- Roadmap
- Contributing
- License
## Highlights

- Three synchronized canvases for original, mask, and highlighted overlay views with zoom/pan controls.
- Brightness and contrast tuning that preserves zoom state for precise inspection.
- One-click segmentation using your own `.pth` checkpoint; a progress bar shows inference status.
- Ground-truth loader with thumbnail preview and automatic resizing for metric comparison.
- Research-style statistics window (Dice, Jaccard, Hausdorff, latency, memory, trend plots); a minimal metric sketch follows this list.
- Theme, accent, and custom color palette so users can adapt the UI to ambient lighting.
- Robust error handling that surfaces detailed model-loading issues via dialogs.
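The overlap metrics in the statistics window follow their standard definitions. Below is a minimal sketch of Dice, Jaccard, and a symmetric Hausdorff distance, assuming binary NumPy masks of equal shape and that SciPy is available; the function names are illustrative, not brainseg's internal API.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2*|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps))

def jaccard_index(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Jaccard (IoU) = |A ∩ B| / |A ∪ B| for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float((intersection + eps) / (union + eps))

def hausdorff_distance(pred: np.ndarray, truth: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the foreground pixel coordinates."""
    p, t = np.argwhere(pred.astype(bool)), np.argwhere(truth.astype(bool))
    if len(p) == 0 or len(t) == 0:
        return float("inf")
    return max(directed_hausdorff(p, t)[0], directed_hausdorff(t, p)[0])
```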
## Getting Started

- Windows (tested) with Python 3.10
- PyTorch build that matches your hardware (CPU-only works; CUDA recommended for speed)
- Qt runtime provided by PyQt6 (installed automatically through `requirements.txt`)
```powershell
git clone https://github.com/raselmandol/brainseg.git
cd brainseg
python -m venv .venv
.\.venv\Scripts\Activate.ps1
pip install -r requirements.txt
pip install -e .
python -m brainseg
# or run the legacy script if you prefer
python app.py
```

Tip: `pip install -e .` adds the `brainseg` console entry point so you can launch the app from anywhere.
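Because the right PyTorch wheel depends on your hardware, it helps to confirm what the environment ended up with after installation. This quick check uses only standard PyTorch calls:

```python
import torch

# Report the installed PyTorch version and whether a CUDA device is visible;
# brainseg still runs on CPU when no GPU is available, just more slowly.
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```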
## Using the App

- Open Image – load an MRI slice (`.png`, `.jpg`, `.tif`, `.bmp`). The original view updates immediately, honoring current brightness/contrast settings.
- Adjust Brightness/Contrast – use the sliders on the left dock; zoom/pan state stays intact.
- Load Ground Truth (optional) – import a mask (`.png`, `.tif`). A thumbnail appears under the filename label.
- Select Model File – pick a segmentation checkpoint (`.pth`). The app validates the file right away and reports issues in a modal dialog if the architecture is incompatible.
- Run Segmentation – click Run Segmentation or press `Ctrl+Alt+R`. The progress bar indicates inference status, and statistics are recorded automatically (a rough sketch of this flow follows the list).
- Save Results – export the mask or highlighted overlay using the corresponding buttons or File menu actions.
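At a high level, the Run Segmentation step amounts to loading the checkpoint, normalizing the slice, and thresholding the network output. The sketch below is illustrative only, assuming a UNet-style model that maps a single-channel slice to a single-channel probability map; the function and preprocessing details are assumptions, not brainseg's actual code.

```python
import numpy as np
import torch
from PIL import Image

def segment_slice(model: torch.nn.Module, image_path: str, threshold: float = 0.5) -> np.ndarray:
    """Illustrative inference flow: load, normalize, forward pass, threshold."""
    image = Image.open(image_path).convert("L")            # grayscale slice
    x = torch.from_numpy(np.asarray(image, dtype=np.float32) / 255.0)
    x = x.unsqueeze(0).unsqueeze(0)                        # (1, 1, H, W)
    # Assumes H and W are already multiples of 32 (see Troubleshooting & Notes).
    model.eval()
    with torch.no_grad():
        probs = torch.sigmoid(model(x))                    # assumes raw logits output
    return (probs.squeeze().numpy() > threshold).astype(np.uint8) * 255
```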
## Model Management

- Default checkpoint: `brain_segmentation_model.pth`.
- Validation: selecting a model triggers a quick load test; clear dialogs explain missing keys or shape mismatches (a simplified sketch follows this list).
- Swapping models: re-open Select Model File at any time; the new model is cached for subsequent runs.
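The quick load test can be imagined along these lines: a simplified sketch, assuming the checkpoint is a plain (or wrapped) `state_dict` and using `load_state_dict(strict=False)` to surface missing and unexpected keys. It is not the app's actual validator.

```python
import torch

def validate_checkpoint(model: torch.nn.Module, path: str) -> None:
    """Report missing keys, unexpected keys, and shape mismatches for a .pth file."""
    state_dict = torch.load(path, map_location="cpu")
    if isinstance(state_dict, dict) and "state_dict" in state_dict:
        state_dict = state_dict["state_dict"]      # unwrap common checkpoint wrappers
    try:
        result = model.load_state_dict(state_dict, strict=False)
    except RuntimeError as exc:                    # raised on tensor shape mismatches
        print("Shape mismatch:", exc)
        return
    if result.missing_keys:
        print("Missing keys:", result.missing_keys)
    if result.unexpected_keys:
        print("Unexpected keys:", result.unexpected_keys)
```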
## Customization & Settings

- Theme toggle (light/dark) plus accent selector (Azure, Emerald, Amber, Rose).
- Custom palette picker applies a brand color across toolbars, docks, and controls in real time.
- Image adjustments – the Settings dialog mirrors the dock sliders, so you can tweak values from either location.
- Preferences currently apply per session; wiring them to `QSettings` is on the roadmap (a sketch of what that could look like follows the list).
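For reference, persisting those preferences with PyQt6's `QSettings` could look roughly like this. The organization/application names and keys below are hypothetical, since this wiring is not implemented yet.

```python
from PyQt6.QtCore import QSettings

# Hypothetical keys: not current brainseg behavior, just a sketch of the
# planned QSettings wiring for theme, accent, and last model path.
settings = QSettings("brainseg", "brainseg")
settings.setValue("ui/theme", "dark")
settings.setValue("ui/accent", "Azure")
settings.setValue("model/last_path", "brain_segmentation_model.pth")

# On the next launch, read the values back with sensible defaults.
theme = settings.value("ui/theme", "light")
accent = settings.value("ui/accent", "Azure")
last_model = settings.value("model/last_path", "")
```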
## Troubleshooting & Notes

- Invalid model: you'll see a "Model Load Error" dialog with the traceback if the `.pth` doesn't match the expected UNet shape.
- Inference failure: "Segmentation Error" dialogs include the captured stack trace. Re-select a compatible model or verify PyTorch/torchvision versions.
- Large images: internally resized to the nearest multiple of 32 to satisfy encoder constraints (a sketch of the rounding follows this list).
- Performance tips: Use the CUDA-enabled PyTorch wheel when a GPU is available; CPU runs are slower but supported.
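The "nearest multiple of 32" constraint comes from the UNet encoder's repeated 2× downsampling. Below is a minimal sketch of the rounding, assuming Pillow handles the resize; the app's actual resizing code may differ.

```python
from PIL import Image

def resize_to_multiple_of_32(image: Image.Image) -> Image.Image:
    """Round each dimension to the nearest multiple of 32 (minimum 32)."""
    w, h = image.size
    new_w = max(32, round(w / 32) * 32)
    new_h = max(32, round(h / 32) * 32)
    return image.resize((new_w, new_h), Image.Resampling.BILINEAR)
```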
## Development Workflow

```powershell
# Install in editable mode
pip install -e .

# Run the GUI from source
python -m brainseg

# Quick sanity check (syntax)
python -m py_compile brainseg\model.py brainseg\main_window.py
```

## Roadmap

- Persist user preferences (theme, accent, last model path) via `QSettings`.
- Allow background model validation to keep the UI responsive with >1 GB checkpoints.
- Add CLI hooks or REST mode for batch inference.
- Extend statistics window with export-to-CSV and per-run notes.
- ONNX Runtime support
- Segmentation list
## Contributing

Pull requests and feature suggestions are welcome. Please file an issue describing the change, branch from `main`, and keep PRs focused. For major UI adjustments, sharing mockups or screenshots helps align expectations.
## License

This project is released under the MIT License.
Maintainer: Md. Rasel Mandol (Smart Systems & Connectivity Lab, NIT Meghalaya)

