This repository contains the complete computational pipeline for investigating the relationship between individual anatomical features and electric field distribution in transcranial direct current stimulation (tDCS). The code supports the associated publication and enables reproduction of all analyses and figures.
The pipeline performs comprehensive analysis of how anatomical variability affects tDCS outcomes across different electrode montages and configurations. Key analyses include:
- Head modeling and segmentation: Automated processing of structural MRI data to create subject-specific head models
- Electric field simulation: Forward modeling of current flow and field distribution using finite element methods
- Anatomical feature extraction: Quantification of cortical morphology, tissue thickness, and gyrification metrics
- Statistical modeling: Generalized linear mixed models (GLMMs) examining anatomy-field relationships
- Network-based analysis: Parcellation-based analysis using Schaefer and Yeo atlases
- Multivariate analysis: Principal component analysis of anatomical determinants
While developed for brain stimulation research, the statistical modules in src/3-statistics/ are domain-agnostic and JSON-configured. The GLM pipeline builds mixed-effects formulas from config keys, the network module aggregates spatial units into regions, and the PCA module (including automated biplot label clustering) operates on generic multivariate data. Any workflow involving spatially distributed multivariate data -- whether in neuroscience or any other field -- can adapt these tools by replacing the data loading layer.
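The GLM module itself is written in R, but the idea of assembling a mixed-effects formula from configuration keys is easy to sketch. The snippet below is a Python illustration of that concept, not the pipeline's actual code: the function name `build_formula` and the random-effects string are assumptions, while the `regressors`/`interactions` structure mirrors the example JSON configuration shown later in this README.

```python
def build_formula(response, regressors, interactions, random_effects):
    """Assemble an lme4-style mixed-effects formula string from config keys."""
    terms = list(regressors)                            # main effects
    terms += [f"{a}:{b}" for a, b in interactions]      # interaction terms
    return f"{response} ~ {' + '.join(terms)} + {random_effects}"

# Hypothetical config fragment, echoing the structure of the JSON files in config/
config = {
    "regressors": ["thickness", "area", "gyrification"],
    "interactions": [("thickness", "area")],
}

formula = build_formula("E.magn", config["regressors"],
                        config["interactions"], "(1 | network/parcel)")
print(formula)
# E.magn ~ thickness + area + gyrification + thickness:area + (1 | network/parcel)
```

Swapping the data loading layer while keeping this config-to-model mapping is what makes the statistics modules reusable outside neuroimaging.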
For comprehensive documentation, installation guides, and step-by-step reproduction instructions, visit the GitHub Wiki.
The wiki includes:
- Getting started and installation guides
- Detailed pipeline architecture with diagrams
- Complete reproduction instructions
- Code reuse guidelines for researchers
- Statistical analysis documentation
- Configuration reference
If you use this code or build upon this work, please cite it appropriately. Giving credit helps the scientific community and enables continued open research.
If you use this pipeline, its functions, or any of its software components:

```bibtex
@software{dimitrios_stoupis_2026_17701802,
  author    = {Dimitrios Stoupis},
  title     = {dimst23/tdcs-anatomy-montage-optimization: tDCS Anatomy-Based Montage Analysis Pipeline v1.1.0},
  month     = jan,
  year      = 2026,
  publisher = {Zenodo},
  version   = {v1.1.0},
  doi       = {10.5281/zenodo.17701802},
  url       = {https://doi.org/10.5281/zenodo.17701802},
}
```

If your work relates to the anatomical determinants of tDCS or electric field modeling:
```bibtex
@article{Stoupis2026,
  title     = {Anatomy-informed recommendations for electrode montage and shape in electrical stimulation methods: A tDCS case study},
  ISSN      = {1741-2552},
  url       = {http://dx.doi.org/10.1088/1741-2552/ae4a4e},
  DOI       = {10.1088/1741-2552/ae4a4e},
  journal   = {Journal of Neural Engineering},
  publisher = {IOP Publishing},
  author    = {Stoupis, Dimitrios and Assecondi, Sara},
  year      = {2026},
  month     = feb
}
```

Note: You can cite just the code if you only used the software, or both if the research findings informed your work. The goal is simply to give credit where it's due and help others discover useful resources.
```
├── config/                          # Configuration files
│   ├── 3-simulation/                # SimNIBS simulation settings
│   │   └── sim_settings_publication.json
│   ├── 3-statistics/                # Statistical analysis configurations
│   │   ├── data_analysis/           # GLM and network analysis settings
│   │   └── network_mappings/        # Parcellation scheme mappings
│   └── dwi_config.sh                # Diffusion imaging configuration
│
├── data/                            # Supplementary data (manuscript support)
│   ├── parcel_stats/                # Parcellated anatomical statistics by demographics
│   ├── parcel_stats_total.*.csv     # Aggregate statistics across all subjects
│   ├── pca_results_pc1-3.csv        # PCA loadings for new subject projection
│   ├── sample_input.csv             # Example input data format
│   └── subject_ids.txt              # Subject identifiers
│
├── src/
│   ├── 0-segmentation/              # Head tissue segmentation (see src/0-segmentation/README.md)
│   │   ├── charm_run.sh             # CHARM segmentation wrapper
│   │   └── settings/                # CHARM configuration files
│   │
│   ├── 2-analysis/                  # Core analysis pipeline (see src/2-analysis/README.md)
│   │   ├── 0-fitting/               # Surface reconstruction and atlas fitting
│   │   ├── 1-extraction/            # Feature extraction from surfaces
│   │   ├── 2-conversion/            # Data format conversion and standardization
│   │   └── 3-simulation/            # tDCS electric field simulation
│   │
│   ├── 3-statistics/                # Statistical analysis modules (see src/3-statistics/README.md)
│   │   ├── common/                  # Shared utilities and parsers
│   │   ├── glm/                     # Generalized linear modeling
│   │   ├── networks/                # Network-based analysis
│   │   ├── pca/                     # Principal component analysis
│   │   └── other/                   # Additional analyses
│   │
│   └── hpc/                         # High-performance computing scripts (see src/hpc/instructions.md)
│       ├── charm_array.sh           # Batch segmentation
│       ├── lgi_array.sh             # Batch gyrification computation
│       ├── simulation_array.sh      # Batch field simulation
│       └── instructions.md          # HPC usage guide
│
├── assets/                          # Visualization resources
│   └── networks/                    # Network parcellation images
│
├── subject_correlations.r           # Standalone correlation analysis tool (see below)
├── pyproject.toml                   # Python dependencies
└── poetry.lock                      # Locked Python environment
```
Core Requirements:
- SimNIBS 4.0+ (head modeling and field simulation)
- FreeSurfer 6.0+ (cortical reconstruction)
- Python 3.11+ with Poetry for dependency management
- R 4.0+ with RStudio (recommended for statistical analyses)
Python Packages (managed via Poetry):
- `numpy`, `scipy`, `pandas`: numerical computing
- `nibabel`, `meshio`, `h5py`: neuroimaging data I/O
- `matplotlib`, `seaborn`, `pyvista`: visualization
- `neuromaps`: brain map transformations
R Packages (managed via renv):
- Core: `tidyverse`, `dplyr`, `ggplot2`, `jsonlite`
- Statistical modeling: `lme4`, `car`, `emmeans`, `performance`
- Visualization: `ggh4x`, `pheatmap`, `ggpubr`
- Neuroimaging: `freesurferformats`, `fsbrain`
This pipeline requires preprocessed structural MRI data:
- T1-weighted images (preferably ACPC-aligned)
- T2-weighted images (optional, improves segmentation)
- Existing FreeSurfer reconstructions (for cortical surface analysis)
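Before launching the pipeline it can be worth verifying that the expected inputs are in place. This is a minimal illustrative sketch: the helper `check_inputs` and the file paths are invented here, not part of the repository.

```python
from pathlib import Path

def check_inputs(t1, t2=None, fs_dir=None):
    """Report which expected pipeline inputs are present.

    T1 is required; T2 and the FreeSurfer reconstruction directory
    are optional but improve/enable parts of the pipeline.
    """
    status = {"T1": Path(t1).exists()}
    if t2 is not None:
        status["T2"] = Path(t2).exists()
    if fs_dir is not None:
        status["FreeSurfer"] = Path(fs_dir).is_dir()
    return status

# Illustrative paths only
print(check_inputs("sub-001_T1w.nii.gz", t2="sub-001_T2w.nii.gz"))
```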
The data/ directory contains supplementary anatomical statistics used in the manuscript analyses.
```bash
# Clone the repository
git clone https://github.com/dimst23/tdcs-anatomy-montage-optimization.git
cd tdcs-anatomy-montage-optimization

# Install Python dependencies using Poetry
poetry install

# Activate the environment
poetry shell
```

For the R environment:

```bash
cd src/3-statistics

# Open in RStudio
open 3-statistics.Rproj
```

```r
# In R console, initialize package management
install.packages("renv")
renv::init()
renv::restore()  # Install all required packages
```

See the Getting Started wiki page for detailed environment setup.
The pipeline follows a sequential workflow across four main stages:
Head tissue segmentation using SimNIBS CHARM (Complete Head Anatomy Reconstruction Method).
```bash
cd src/0-segmentation
./charm_run.sh \
  --subject <SUBJECT_ID> \
  --charm-executable <PATH_TO_CHARM> \
  --charm-settings-path settings/settings_base.ini \
  --fs-subj-dir <FREESURFER_SUBJECTS_DIR> \
  --run-dir <OUTPUT_DIR> \
  --t1-path <T1_IMAGE> \
  --t2-path <T2_IMAGE>
```

Output: SimNIBS head models (`m2m_<subject>/`) with tissue segmentations and tetrahedral meshes.

See src/0-segmentation/README.md for detailed documentation, configuration options, and troubleshooting.
Multi-step processing of anatomical features and field simulations. See src/2-analysis/README.md for comprehensive documentation of all analysis stages.
Surface reconstruction and atlas registration:

```bash
cd src/2-analysis/0-fitting
./0-0_fit_fs_atlas.sh <SUBJECT_ID> <FREESURFER_DIR>
./0-1_local_gi_index.sh <SUBJECT_ID> <FREESURFER_DIR>
```

Feature extraction from surfaces:

```bash
cd src/2-analysis/1-extraction
./1-0_fs_stats.sh <SUBJECT_ID>
./3-0_simnibs_stats.sh <SUBJECT_ID> <M2M_DIR>
```

Data standardization to FSAverage space:

```bash
cd src/2-analysis/2-conversion
./2-1_fs_to_fsaverage.sh <SUBJECT_ID>
./2-2_simnibs_to_fsaverage.sh <SUBJECT_ID>
```

tDCS electric field simulation:

```bash
cd src/2-analysis/3-simulation

# Run simulation using configuration
python main_sim.py <M2M_MODEL_PATH> ../../config/3-simulation/sim_settings_publication.json
```

The simulation configuration (sim_settings_publication.json) defines:
- Electrode montages (e.g., F3-Fp2, P3-P4)
- Electrode types (rectangular, circular, various sizes)
- Current parameters (typically 1 mA)
- Output fields (electric field magnitude, normal component, current density)
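The `simulation_types` keys in the configuration encode montage descriptions such as `rect_5x7-F3-Fp2`: an electrode shape and size followed by the anode and cathode positions. As a rough sketch of how such a key could be decomposed (the naming scheme is inferred from the example configuration in this README; this is an illustration, not the pipeline's own parser):

```python
def parse_montage_key(key):
    """Split a montage key like 'rect_5x7-F3-Fp2' into its parts.

    Assumes the inferred scheme <shape>_<size>-<anode>-<cathode>.
    """
    shape_size, anode, cathode = key.split("-")
    shape, size = shape_size.split("_")
    return {"shape": shape, "size": size, "anode": anode, "cathode": cathode}

print(parse_montage_key("rect_5x7-F3-Fp2"))
# {'shape': 'rect', 'size': '5x7', 'anode': 'F3', 'cathode': 'Fp2'}
```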
Statistical analyses are performed in R, driven by JSON configuration files for reproducibility. See src/3-statistics/README.md for complete analysis documentation and usage examples.
Examine relationships between anatomical features and field distribution:

```bash
cd src/3-statistics
Rscript glm/main.r \
  ../../config/3-statistics/data_analysis/glm_within/glm_analysis_within_Emagn.json \
  <BASE_DATA_DIR> \
  <HEMISPHERE> \
  <SIMULATION_TYPE> \
  [within|across]
```

Key features:
- Mixed-effects models with random effects for network and parcel
- Anatomical predictors: cortical thickness, surface area, gyrification, sulcal depth, tissue thicknesses
- Interaction terms between correlated features
- Z-statistic brain maps with FDR correction
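The thresholding behind such maps is standard: convert each z-statistic to a two-sided p-value, then apply Benjamini-Hochberg FDR correction. The pipeline does this in R; the pure-Python sketch below only illustrates the logic, and the function names are invented.

```python
import math

def z_to_p(z):
    """Two-sided p-value for a z-statistic under a standard normal."""
    return math.erfc(abs(z) / math.sqrt(2))

def bh_fdr(pvals):
    """Benjamini-Hochberg adjusted p-values (step-up procedure)."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adjusted = [0.0] * n
    running_min = 1.0
    for rank_from_end, i in enumerate(reversed(order)):
        rank = n - rank_from_end               # 1-based rank of this p-value
        running_min = min(running_min, pvals[i] * n / rank)
        adjusted[i] = running_min
    return adjusted

z_map = [2.8, 1.1, 3.5, 0.4]                   # toy vertex-wise z-statistics
p_adj = bh_fdr([z_to_p(z) for z in z_map])
significant = [p < 0.05 for p in p_adj]        # FDR-corrected threshold
```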
Parcellation-based analysis using Schaefer/Yeo atlases:

```bash
Rscript networks/main.r \
  ../../config/3-statistics/data_analysis/F3_network_analysis_schaefer_7_magnitude.json \
  <BASE_DATA_DIR> \
  <SUBJECT_FILE>
```

Analyzes field distribution patterns across functional networks.
Dimensionality reduction of anatomical determinants:

```bash
Rscript pca/main.r \
  <JSON_CONFIG> \
  <BASE_DATA_DIR>
```

Additional analyses:

```bash
# Feature importance analysis
Rscript other/feature_importance.r <JSON_CONFIG> <BASE_DIR>

# Population statistics
Rscript other/population_stats.r

# Spatial location analysis
Rscript other/spatial_location.r
```

All analyses are controlled via JSON configuration files in config/3-statistics/. These files specify:
- Paths: Data locations, output directories, atlas files
- Parameters: Statistical thresholds, scaling factors, hemisphere selection
- Model specification: GLM equations, interaction terms, random effects
- Visualization: Figure dimensions, color scales, annotation settings
Example configuration structure:
```json
{
  "general": {
    "paths": { "fsaverage_template": "..." },
    "cortex_only": true,
    "statistic": "z_fdr"
  },
  "glm": {
    "regressors": { "thickness": "Cortical Thickness", ... },
    "interactions": [["thickness", "area"], ...],
    "z_threshold": 1.96
  },
  "simulation": {
    "field": { "type": { "E.magn": "Magnitude" } },
    "simulation_types": { "rect_5x7-F3-Fp2": "rF3-Fp2", ... }
  }
}
```

Scripts in src/hpc/ are designed for SLURM-based computing clusters. They implement array jobs for parallel processing across subjects.
```bash
# Submit segmentation array job
sbatch --array=1-100%10 hpc/charm_array.sh

# Submit simulation array job
sbatch --array=1-100%10 hpc/simulation_array.sh
```

The `%10` parameter limits concurrent jobs. Adjust array indices based on your subject count.
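Each array task typically maps its `SLURM_ARRAY_TASK_ID` to one line of a subject list such as data/subject_ids.txt. A minimal sketch of that mapping; the helper name and the 1-based indexing convention are assumptions (the repository's own array scripts are shell), and the subject IDs below are invented:

```python
import os

def subject_for_task(subjects, task_id):
    """Return the subject for a 1-based SLURM array index."""
    return subjects[task_id - 1]

# In a real job, subjects would come from the subject list file:
#   with open("data/subject_ids.txt") as f:
#       subjects = [line.strip() for line in f if line.strip()]
subjects = ["sub-001", "sub-002", "sub-003"]      # illustrative IDs

task_id = int(os.environ.get("SLURM_ARRAY_TASK_ID", "1"))
print(subject_for_task(subjects, task_id))
```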
See src/hpc/instructions.md for detailed HPC usage.
The data/ directory contains supplementary anatomical statistics referenced in the manuscript:
- `parcel_stats/`: Parcellated anatomical measures stratified by:
  - Age bins (35-40, 40-45, ..., 75-80 years)
  - Sex (M/F)
  - Using the Schaefer 500-parcel 17-network atlas
  - Smoothed with a 10 mm FWHM kernel on fsaverage
- `parcel_stats_total.*.csv`: Aggregate statistics across all N=590 subjects
- `sample_input.csv`: Example data format for pipeline inputs
These files enable reproduction of population-level analyses without requiring access to raw imaging data.
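For example, projecting a new subject onto the published components amounts to a dot product between the subject's standardized anatomical features and the loadings in pca_results_pc1-3.csv. The feature names and loading values below are invented for illustration; check the actual column layout against the CSV itself.

```python
def project_subject(features, loadings):
    """Project standardized (z-scored) feature values onto PCA loadings."""
    return {
        pc: sum(features[name] * weight for name, weight in pc_loadings.items())
        for pc, pc_loadings in loadings.items()
    }

# Invented loadings and feature values, purely for illustration
loadings = {
    "PC1": {"thickness": 0.5, "area": 0.25, "gyrification": 0.25},
    "PC2": {"thickness": -0.25, "area": 0.75, "gyrification": 0.25},
}
features = {"thickness": 1.0, "area": -0.5, "gyrification": 0.0}  # z-scored

print(project_subject(features, loadings))
# {'PC1': 0.375, 'PC2': -0.625}
```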
The repository includes subject_correlations.r, a standalone R script for researchers to quantify relationships between PCA component loadings and morphological variables. This tool enables reproduction of the anatomical-PCA correlation analyses described in the manuscript.
```bash
# Basic usage with population data
Rscript subject_correlations.r \
  --pca pca_loadings_per_parcel.csv \
  --input data/parcel_stats_total.fsaverage.fwhm10.Schaefer2018_500Parcels_17Networks_order.n_590.csv \
  --type 1 \
  --output results.csv
```

Key features:

- Two input formats: population-level data (long format with `variable` and `mean` columns) or individual subject data (wide format with one column per feature)
- Spearman correlations: robust to non-normality, captures monotonic relationships
- Multiple testing correction: optional FDR correction using the Benjamini-Hochberg procedure
- Self-contained: requires only the `tidyverse` R package, no other dependencies

Requirements:

- R 4.0+
- `tidyverse` package: `install.packages("tidyverse")`
Complete usage instructions, examples, and interpretation guidance are provided in the supplementary materials (Section: PCA Component-Morphology Correlation Analysis). The script includes built-in help:

```bash
Rscript subject_correlations.r  # Display usage information
```

The pipeline generates:
- Head models: Subject-specific finite element meshes
- Field maps: Electric field distributions in subject space and fsaverage
- Anatomical features: Vertex-wise and parcel-averaged morphometric data
- Statistical maps: Z-statistic brain maps showing anatomy-field associations
- Figures: Publication-ready visualizations (PNG, PDF, EPS formats)
- Model summaries: GLM coefficients, significance tests, performance metrics
Authors: Dimitrios Stoupis (DS) & Sara Assecondi (SA)
Results presented in this work have been partially produced using the Aristotle University of Thessaloniki (AUTh) High Performance Computing Infrastructure (https://it.auth.gr/en/hpc).
Research reported in this publication was supported by the National Institute On Aging of the National Institutes of Health under Award Number U01AG052564 and by funds provided by the McDonnell Center for Systems Neuroscience at Washington University in St. Louis. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
SA is named inventor on a patent application (publication number WO/2022/106850) jointly submitted by the University of Birmingham and Dalhousie University, titled "Improving cognitive function," currently in the PCT phase (international application No. PCT/GB2021/053019).

SA is supported by Fondazione Cassa di Risparmio di Trento e Rovereto (CARITRO) [grant number 40105185].
This project is licensed under the MIT License - see the LICENSE file for details.
We welcome contributions from the research community! Whether you're fixing bugs, improving documentation, or extending the pipeline with new features, your input helps advance open neuroscience.
- Report bugs or issues: Open an issue describing the problem, including error messages and steps to reproduce
- Suggest enhancements: Propose new features, analysis methods, or improvements to existing code
- Improve documentation: Help clarify installation steps, add examples, or expand the wiki
- Submit code: Bug fixes, performance optimizations, or new analytical capabilities
- Share feedback: Let us know how you're using the pipeline and what could make it more useful
- Fork the repository and create a new branch for your contribution
- Follow existing code style:
  - Python: use `black` for formatting, `flake8` for linting
  - R: follow the `tidyverse` style guide
  - Add docstrings/comments for non-obvious logic
- Test your changes: Ensure existing functionality isn't broken
- Update documentation: Modify relevant README or wiki pages if adding features
- Submit a pull request: Describe what changes you made and why
- Configuration-driven design: New analyses should use JSON config files (see the `config/` directory)
- Reproducibility: Document dependencies, random seeds, and data transformations
- Citation: If building upon this work, include appropriate citations (see the Citation & Usage section)
Before starting major contributions, consider opening an issue to discuss your approach. This helps ensure your effort aligns with the project's goals and avoids duplicate work.
For questions or issues:
- Primary documentation: Visit the GitHub Wiki for comprehensive guides and tutorials
- Review the detailed documentation in module-specific README files:
  - Segmentation: `src/0-segmentation/README.md`
  - Analysis Pipeline: `src/2-analysis/README.md`
  - Statistical Analysis: `src/3-statistics/README.md`
  - HPC Usage: `src/hpc/instructions.md`
- Check configuration examples in `config/`
- Refer to individual script help messages (e.g., `./charm_run.sh --help`)
- SimNIBS: 4.1
- FreeSurfer: 6.0
- Python: 3.11
- R: 4.5.2
Specific package versions are locked in poetry.lock (Python) and managed via renv (R).