This project explores the use of 3D Convolutional Neural Networks (3D CNNs) for early detection of Alzheimer's Disease (AD) from structural magnetic resonance imaging (sMRI). Inspired by and building upon the work of Liu et al. (2022), we aim not only to replicate their deep learning architecture but to enhance it using high-performance computing (HPC) resources, in particular the MeluXina supercomputer.
The model classifies subjects into three categories:
- Cognitively Normal (CN)
- Mild Cognitive Impairment (MCI)
- Mild AD Dementia (AD)
Our work highlights the value of deep learning in automating and improving the diagnostic process for Alzheimer's Disease, enabling scalable and efficient MRI-based screening.
- Re-implemented and validated Liu et al.'s 3D CNN model on the ADNI dataset
- Used the Clinica software suite for standardized MRI preprocessing in BIDS format
- Integrated data augmentation techniques (Gaussian blurring, random cropping)
- Leveraged MeluXina HPC for full-scale GPU-based training and evaluation
- Achieved promising classification results, with improved performance over the original paper
Note: Figure shows a placeholder representation of the deep learning architecture.
The model architecture consists of:
- Multiple 3D convolutional blocks with normalization steps and ReLU activations
- Fully connected layers for classification
- Cross-entropy loss optimized with Adam
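For orientation, here is a minimal PyTorch sketch of this kind of architecture. The number of blocks, channel widths, and normalization choices are illustrative assumptions; the actual definitions live in python/model.py and cpp/model.h.

```python
# Minimal sketch of a 3D CNN classifier (assumed layout; see python/model.py for the real one).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """One 3D convolutional block: Conv3d -> InstanceNorm3d -> ReLU -> MaxPool3d."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool3d(kernel_size=2),
    )

class AD3DCNN(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(1, 16),
            conv_block(16, 32),
            conv_block(32, 64),
            conv_block(64, 128),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),   # collapse spatial dims regardless of input size
            nn.Flatten(),
            nn.Linear(128, 64),
            nn.ReLU(inplace=True),
            nn.Linear(64, n_classes),  # logits for CN / MCI / AD
        )

    def forward(self, x):              # x: (batch, 1, D, H, W)
        return self.classifier(self.features(x))

# Training objective: cross-entropy optimized with Adam.
model = AD3DCNN()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```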
Dataset: ADNI
| session_id | participant_id | sex | original_study | diagnosis | ... |
|---|---|---|---|---|---|
| ses-M006 | sub-ADNI052S0671 | F | ADNI1 | LMCI | ... |
| ses-M000 | sub-ADNI109S0967 | M | ADNI1 | CN | ... |
| ses-M000 | sub-ADNI027S0850 | M | ADNI1 | AD | ... |
To collect the MRI scans and prepare them correctly for training the model, please refer to INSTALL.md.
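As an illustrative sketch, the diagnosis column of these TSV files can be mapped to the three training classes roughly as follows. The LMCI/EMCI-to-MCI grouping and the column handling are assumptions; see python/dataset.py for the actual logic.

```python
# Sketch: read a participants TSV and map ADNI diagnoses to the three classes.
import pandas as pd

LABEL_MAP = {"CN": 0, "MCI": 1, "AD": 2}
# Assumption: early/late MCI subtypes are grouped into a single MCI class.
DIAGNOSIS_TO_CLASS = {"CN": "CN", "EMCI": "MCI", "LMCI": "MCI", "MCI": "MCI", "AD": "AD"}

df = pd.read_csv("data/participants_Train.tsv", sep="\t")
df["class"] = df["diagnosis"].map(DIAGNOSIS_TO_CLASS)
df["label"] = df["class"].map(LABEL_MAP)
print(df[["participant_id", "session_id", "diagnosis", "label"]].head())
```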
MRI scans were processed using the Clinica software suite:
- Convert to BIDS format
- Generate a template from the training set
- Apply spatial normalization using the template
- Apply intensity normalization to reduce scanner bias
This pipeline ensures data consistency across training, validation, and testing sets.
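Conceptually, the scripts in preprocess/ drive the Clinica command line roughly as sketched below. The paths and the group label are placeholders, and the exact options depend on the installed Clinica version; see run_convert.sh, run_adni_preproc.sh, and run_adni_valtest.sh for the real invocations.

```python
# Sketch: the preprocessing steps, driven through the Clinica CLI from Python.
# Paths and the group label are placeholders; see preprocess/*.sh for the real commands.
import subprocess

ADNI_DIR = "/path/to/ADNI"           # raw ADNI download (placeholder)
CLINICAL_DIR = "/path/to/clinical"   # ADNI clinical data (placeholder)
BIDS_DIR = "/path/to/BIDS"           # BIDS-converted dataset
CAPS_DIR = "/path/to/CAPS"           # Clinica outputs (template, normalized images)

# 1) Convert the raw ADNI data to BIDS.
subprocess.run(["clinica", "convert", "adni-to-bids",
                ADNI_DIR, CLINICAL_DIR, BIDS_DIR], check=True)

# 2) T1-volume pipeline on the training set: segmentation, template creation,
#    spatial normalization (intensity normalization follows, as described above).
subprocess.run(["clinica", "run", "t1-volume",
                BIDS_DIR, CAPS_DIR, "ADNItrain"], check=True)

# 3) Validation/test scans are normalized with the template built on the training set.
subprocess.run(["clinica", "run", "t1-volume-existing-template",
                BIDS_DIR, CAPS_DIR, "ADNItrain"], check=True)
```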
To improve generalization and model robustness, we applied:
- Gaussian Blurring
- Random Cropping
Augmentation is performed on-the-fly during training.
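A minimal sketch of this kind of on-the-fly augmentation is shown below; the sigma range and crop size are illustrative assumptions (the values actually used are set in the training code and config.yaml).

```python
# Sketch: random Gaussian blurring and random cropping applied per training sample.
import numpy as np
from scipy.ndimage import gaussian_filter

def augment(volume, crop_size=(96, 96, 96), max_sigma=1.5, rng=np.random.default_rng()):
    """volume: 3D numpy array (D, H, W). Returns an augmented copy."""
    # Random Gaussian blurring (sigma drawn uniformly; 0 means no blur).
    sigma = rng.uniform(0.0, max_sigma)
    if sigma > 0:
        volume = gaussian_filter(volume, sigma=sigma)
    # Random cropping to a fixed sub-volume.
    starts = [rng.integers(0, dim - c + 1) for dim, c in zip(volume.shape, crop_size)]
    slices = tuple(slice(s, s + c) for s, c in zip(starts, crop_size))
    return volume[slices]
```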
We worked on the GPU-enabled MeluXina system provided by EuroHPC.
```
# ~/.ssh/config
Host meluxina
    Hostname login.lxp.lu
    User <user_id>
    Port 8822
    IdentityFile ~/.ssh/id_ed25519_mel
    IdentitiesOnly yes
    ForwardAgent no
```

To connect, simply type on the command line:

```
ssh meluxina
```

- Large amount of GPU hours available
- Support for large batch sizes
- GPU parallelization capabilities
- Extended memory for ~1TB datasets
- Loss Function: Cross-Entropy
- Optimizer: Adam
- Normalization: InstanceNorm / BatchNorm
Most of the model parameters can be tuned by modifying the config.yaml file.
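For example, a training script can pick these values up with a few lines of PyYAML; the key names below are hypothetical, so check config.yaml for the ones actually defined.

```python
# Sketch: load hyperparameters from config.yaml (key names are illustrative only).
import yaml

with open("config.yaml") as f:
    cfg = yaml.safe_load(f)

# Hypothetical keys shown for illustration; defaults are fallbacks, not project values.
learning_rate = cfg.get("learning_rate", 1e-4)
batch_size = cfg.get("batch_size", 8)
num_epochs = cfg.get("epochs", 100)
norm_layer = cfg.get("normalization", "instance")  # "instance" or "batch"
```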
NOTE: Placeholder section. Insert metrics when available.
Expected outcomes based on Liu et al.:
- AUC > 89.21% for AD classification
- Improved performance over ROI-volume/thickness-based models
- Demonstrated progression prediction capabilities
We implemented Grad-CAM to interpret model decisions and highlight the most discriminative brain regions contributing to each prediction (CN / MCI / AD). The visualization pipeline generates GIF overlays for axial, coronal, and sagittal slices, color-coded by predicted diagnosis:
- 🟩 Green = CN
- 🟨 Yellow = MCI
- 🟥 Red = AD
Additionally, an optional hippocampal crop (based on the AAL atlas) is used to focus on clinically relevant areas, automatically handled via nilearn.
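A rough sketch of how such a hippocampal mask can be derived from the AAL atlas with nilearn is shown below. The reference image path is a placeholder and the resampling details are assumptions; the actual implementation is in gradcam/visualize.py.

```python
# Sketch: build a hippocampal mask from the AAL atlas with nilearn.
import numpy as np
import nibabel as nib
from nilearn import datasets, image

aal = datasets.fetch_atlas_aal()                 # downloads the AAL atlas on first use
atlas_img = nib.load(aal.maps)

# Indices of the left/right hippocampus regions in the AAL label list.
hippo_idx = [int(aal.indices[aal.labels.index(name)])
             for name in ("Hippocampus_L", "Hippocampus_R")]

# Resample the atlas onto the subject/heatmap grid, then build a binary mask.
atlas_on_grid = image.resample_to_img(atlas_img, nib.load("subject_T1.nii.gz"),
                                      interpolation="nearest")
mask = np.isin(atlas_on_grid.get_fdata(), hippo_idx)
```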
To define which part of the Grad-CAM heatmap is considered "active", the script supports multiple thresholding strategies, selectable via --threshold_mode:
- pct: retains the top-N% activations (configurable via --pct, e.g., --pct 60 for 60%)
- otsu: uses Otsu's adaptive method
- std: keeps voxels with activation > mean + k × std (--std_k 0.5, etc.)
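In simplified form, the three modes boil down to something like the following sketch (assumed logic, using skimage's Otsu implementation for illustration):

```python
# Sketch: turn a Grad-CAM heatmap into a binary "active" mask under the three modes.
import numpy as np
from skimage.filters import threshold_otsu

def activation_mask(heatmap, mode="pct", pct=60, std_k=0.5):
    """heatmap: 3D array of Grad-CAM activations, higher = more important."""
    if mode == "pct":
        # Keep the top-N% of activations (e.g. pct=60 keeps the top 60%).
        thr = np.percentile(heatmap, 100 - pct)
    elif mode == "otsu":
        # Otsu's adaptive threshold on the activation histogram.
        thr = threshold_otsu(heatmap)
    elif mode == "std":
        # Keep voxels with activation above mean + k * std.
        thr = heatmap.mean() + std_k * heatmap.std()
    else:
        raise ValueError(f"unknown threshold mode: {mode}")
    return heatmap >= thr
```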
This interpretability module offers visual insight into model behavior, improves clinical trust, and highlights class-specific decision regions.
```
├── python/                     # Python Model
│   ├── model.py                # CNN architecture
│   ├── dataset.py              # Dataset preparation
│   ├── test.py                 # Test performance script
│   ├── test.sh                 # Shell script for test.py
│   ├── train.py                # Main training script
│   └── train.sh                # Shell script for train.py
├── cpp/                        # C++ Model
│   ├── CMakeLists.txt          # CMake config file
│   ├── model.h                 # CNN architecture
│   ├── dataset.h               # Dataset preparation
│   ├── config.h                # Parameters config
│   ├── test.cpp                # Test performance script
│   ├── test.sh                 # Shell script for test.cpp
│   ├── train.cpp               # Main training script
│   └── train.sh                # Shell script for train.cpp
├── utils/                      # Other code
│   ├── plot_metrics.py         # Plot loss, Plot accuracy
│   ├── plot_metrics.sh         # Shell script for plot_metrics.py
│   └── spm_get_doc.m           # MATLAB script for Nipype troubleshooting
├── preprocess/                 # Preprocessing scripts
│   ├── run_convert.sh          # ADNI -> BIDS conversion
│   ├── run_adni_preproc.sh     # T1-volume segmentation on training set
│   └── run_adni_valtest.sh     # T1-volume segmentation on val & test set
├── data/                       # Diagnosis datasets
│   ├── participants_Test.tsv   # Subjects in the test set
│   ├── participants_Train.tsv  # Subjects in the train set
│   └── participants_Val.tsv    # Subjects in the validation set
├── envs/                       # Conda Environments
│   ├── clinicaEnv.yml          # Conda Env for Clinica
│   ├── gradcamEnv.yml          # Conda Env for Grad-CAM
│   └── trainEnv.yml            # Conda Env for Training
├── gradcam/                    # Grad-CAM visualization
│   ├── gradcam_out/            # Folder containing Grad-CAM outputs
│   ├── visualize.py            # Script for Grad-CAM visualization
│   ├── gradcam_otsu.sh         # Shell script for visualize.py w/Otsu
│   ├── gradcam_std.sh          # Shell script for visualize.py w/std
│   └── gradcam_pct.sh          # Shell script for visualize.py w/pct
├── results/                    # Output files containing Test results
│   ├── cpp/                    # Folder containing C++ results
│   ├── python/                 # Folder containing Python results
│   └── Test_Results.ipynb      # Notebook for model evaluation metrics
├── media/                      # Images/GIFs/...
├── config.yaml                 # Model hyperparameters
├── INSTALL.md
├── README.md
└── report.pdf
```

- Liu et al. for their foundational model and research
- MeluXina Support Team for infrastructure and consultation
- Clinica Developers for powerful neuroimaging tools
- MOX Lab @ Politecnico di Milano for support and guidance
```
├── Vittorio Pio Remigio Cozzoli, Student, Politecnico di Milano
│   └── [email protected]
├── Tommaso Crippa, Student, Politecnico di Milano
│   └── [email protected]
└── Alberto Taddei, Student, Politecnico di Milano
    └── [email protected]
```

All rights reserved.
This repository contains research code developed for academic purposes only.
Images and medical data used in this project are derived from publicly available datasets (ADNI) and are not linked to any identifiable individuals.
Nevertheless, as the subject matter involves sensitive neuroimaging data, we kindly ask users to treat visual outputs with scientific respect.
Any clinical, diagnostic, or commercial usage of this code or its outputs is strictly prohibited without prior written permission.
If you are a researcher or instructor and wish to reuse this material for non-commercial academic use, please contact the authors.

