
Benchmarking Multimodal Deep Fusion Strategies for Heterogeneous Neuroimaging and Cognitive Data Using a Controlled Sex Classification Task

Camastra, Chiara; Pelagi, Assunta; Quattrone, Andrea; Sarica, Alessia
2026-01-01

Abstract

Background/Objectives: Multimodal data fusion is increasingly applied in neuroinformatics to integrate heterogeneous sources of information. However, the optimal strategies for combining modalities with markedly different dimensionality, scale, and noise characteristics remain unclear. To our knowledge, this is among the first systematic and controlled benchmarks explicitly disentangling the effects of fusion strategy and feature scaling within a unified deep learning framework. Methods: Using data from 747 healthy participants from the Human Connectome Project, we evaluated multiple fusion paradigms—including early fusion, attention-based fusion, subspace-based fusion, and graph-based fusion—within a unified and reproducible framework. Importantly, we assessed how different feature scaling techniques (Standard, Min–Max, and Robust scaling) interact with fusion strategies and influence model performance. Biological sex was used as a controlled benchmark task to focus on methodological insights rather than task-specific optimization. Results: Early feature-level fusion consistently achieved the highest classification performance across all evaluated configurations. In particular, direct concatenation of cognitive and neuroimaging features combined with Standard Scaling yielded the best results (AUC–ROC = 0.96 (0.95–0.96)), outperforming unimodal baselines as well as intermediate and late fusion strategies. Conclusions: This systematic benchmark demonstrates that multimodal deep learning performance in neuroscience is driven primarily by the interaction between fusion strategy and feature scaling rather than by architectural complexity alone. By explicitly disentangling the effects of fusion level and preprocessing within a unified framework, this study provides practical methodological guidance for the design, evaluation, and reproducible deployment of multimodal deep learning models in neuroscience.
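The best-performing configuration reported in the abstract — early feature-level fusion by direct concatenation of cognitive and neuroimaging features after Standard scaling — can be sketched as follows. This is an illustrative sketch, not the authors' code: the feature dimensionalities and value ranges are hypothetical, chosen only to mimic two modalities with very different scales.

```python
# Illustrative sketch of early feature-level fusion with Standard scaling.
# Shapes and value ranges are hypothetical; only the cohort size (747) is
# taken from the abstract.
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_subjects = 747  # HCP cohort size reported in the abstract

# Two hypothetical modalities with markedly different dimensionality and scale
cognitive = rng.normal(loc=100.0, scale=15.0, size=(n_subjects, 10))     # e.g. test scores
imaging = rng.normal(loc=0.001, scale=0.0003, size=(n_subjects, 500))    # e.g. morphometrics

# Scale each modality independently, then concatenate (early fusion):
# after scaling, every feature contributes on a comparable scale, so the
# high-dimensional modality does not dominate purely by magnitude.
X_cog = StandardScaler().fit_transform(cognitive)
X_img = StandardScaler().fit_transform(imaging)
X_fused = np.concatenate([X_cog, X_img], axis=1)

print(X_fused.shape)  # (747, 510)
```

The fused matrix would then feed a single classifier (here, for the controlled sex-classification benchmark); Min–Max or Robust scaling could be swapped in by replacing `StandardScaler` with `MinMaxScaler` or `RobustScaler`.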
Keywords: early fusion; feature scaling; heterogeneous data; Human Connectome Project; intermediate fusion; late fusion; multimodal data fusion
Files in this item:
No files are associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12317/117820
Warning: the displayed data have not been validated by the university.
