Explainable Boosting Machine for Predicting Alzheimer’s Disease from MRI Hippocampal Subfields
Sarica A.; Quattrone A.; Quattrone A.
2021-01-01
Abstract
Although automatic prediction of Alzheimer's disease (AD) from Magnetic Resonance Imaging (MRI) has shown excellent performance, Machine Learning (ML) algorithms often achieve high accuracy at the expense of interpretability of the findings. Building ML models that are understandable is of fundamental importance in the clinical context, especially for the early diagnosis of neurodegenerative diseases. Recently, a novel interpretable algorithm has been proposed, the Explainable Boosting Machine (EBM), a glassbox model based on Generalized Additive Models plus interactions (GA2Ms) designed to achieve high accuracy while providing intelligibility. The aim of the present study was therefore to assess, for the first time, the reliability of EBM in predicting conversion to AD and its ability to explain its predictions. In particular, two hundred brain MRIs from ADNI of Mild Cognitive Impairment (MCI) patients, equally divided into stable (sMCI) and progressive (pMCI), were processed with FreeSurfer to extract twelve hippocampal subfield volumes, which have already shown good AD prediction power. EBM models with and without pairwise interactions were built on the training set (80%) composed of these volumes, and their global explanations were investigated. Classifier performance was evaluated with AUC-ROC on the test set (20%), and local explanations were provided for four randomly selected test patients (sMCIs and pMCIs, correctly classified and misclassified). EBMs without and with pairwise interactions showed an AUC-ROC of 80.5% and 84.2%, respectively, thus demonstrating high prediction accuracy. Moreover, EBM provided practical clinical knowledge on why a patient was correctly or incorrectly predicted as converting to AD and which hippocampal subfields drove such a prediction.
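The abstract describes fitting EBMs with and without pairwise interactions on an 80/20 train/test split, scoring them with AUC-ROC, and inspecting global and local explanations. The sketch below illustrates this workflow with the interpret (InterpretML) library; it is not the authors' code, and the CSV file name, column names, interaction count, and random seed are hypothetical placeholders.

# Minimal sketch, assuming a table with twelve FreeSurfer hippocampal subfield
# volumes per MCI patient and a conversion label (0 = sMCI, 1 = pMCI).
# File name, column names, and hyperparameters are illustrative assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

data = pd.read_csv("hippocampal_subfields.csv")  # hypothetical input table
X = data.drop(columns=["label"])
y = data["label"]

# 80% training / 20% test split, as in the abstract.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# EBM without pairwise interactions (plain GAM) and with pairwise interactions (GA2M).
ebm_plain = ExplainableBoostingClassifier(interactions=0, random_state=42)
ebm_pairs = ExplainableBoostingClassifier(interactions=10, random_state=42)
ebm_plain.fit(X_train, y_train)
ebm_pairs.fit(X_train, y_train)

# Evaluate both classifiers with AUC-ROC on the held-out test set.
for name, model in [("no interactions", ebm_plain), ("pairwise interactions", ebm_pairs)]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"EBM ({name}): AUC-ROC = {auc:.3f}")

# Global explanation: overall contribution of each hippocampal subfield.
show(ebm_pairs.explain_global())

# Local explanation for one test patient: which subfields drove that prediction.
show(ebm_pairs.explain_local(X_test.iloc[:1], y_test.iloc[:1]))

Setting interactions=0 yields a pure additive model, while a positive value lets the EBM add the top-ranked pairwise terms, which is how the "with interactions" variant in the study differs from the plain one.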