Alzheimer’s disease classification based on multimodal consistent distribution and trusted fusion
Multimodal data fusion has the potential to improve Alzheimer's disease (AD) classification by capturing diverse disease manifestations. However, existing methods often fail to account for the heterogeneity and reliability differences across modalities, which limits their effectiveness. This paper proposes a novel AD classification framework based on multimodal consistent distribution and trusted fusion. Our approach projects structural magnetic resonance imaging (sMRI), fluorodeoxyglucose positron emission tomography (FDG-PET), and Mini-Mental State Examination (MMSE) scores into a unified latent feature space, aligning the heterogeneous multimodal features into a consistent distribution and thereby mitigating the interference that distributional inconsistency introduces during fusion. To address reliability differences, we estimate the belief and uncertainty inherent in each modality's class prediction probabilities using a Dirichlet distribution-based mechanism, lending interpretability to subsequent classification decisions. Furthermore, we design a novel fusion strategy that integrates belief and uncertainty across modalities, improving the reliability of the final classification. Evaluated on the ADNI and AIBL datasets across four AD-related classification tasks, our method achieves promising performance, demonstrating its effectiveness in handling multimodal heterogeneity and reliability for robust AD classification.
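To make the consistent-distribution step concrete, the sketch below shows one plausible realization: modality-specific encoders map sMRI, FDG-PET, and MMSE features into a shared latent space, with a simple moment-matching penalty encouraging the modalities' latent distributions to agree. The abstract does not specify the encoder architecture or the alignment objective, so the network shapes and the `moment_alignment_loss` surrogate here are assumptions for illustration, not the paper's method.

```python
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Projects one modality's features into a shared latent space.

    Hypothetical architecture: the paper only states that heterogeneous
    modalities are projected into a unified latent feature space; the
    depth and width here are illustrative.
    """
    def __init__(self, in_dim: int, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def moment_alignment_loss(z_a: torch.Tensor, z_b: torch.Tensor) -> torch.Tensor:
    """Assumed stand-in for the consistent-distribution objective:
    penalizes differences in per-dimension batch mean and variance
    between two modalities' latent codes."""
    mean_gap = (z_a.mean(dim=0) - z_b.mean(dim=0)).pow(2).sum()
    var_gap = (z_a.var(dim=0) - z_b.var(dim=0)).pow(2).sum()
    return mean_gap + var_gap

# Illustrative usage: align sMRI and FDG-PET latent codes.
enc_mri = ModalityEncoder(in_dim=512)   # hypothetical sMRI feature size
enc_pet = ModalityEncoder(in_dim=256)   # hypothetical FDG-PET feature size
z_mri = enc_mri(torch.randn(32, 512))
z_pet = enc_pet(torch.randn(32, 256))
loss_align = moment_alignment_loss(z_mri, z_pet)
```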
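For the trusted-fusion step, the following sketch implements the standard subjective-logic formulation that a "Dirichlet distribution-based mechanism" suggests: non-negative evidence is derived from each modality's logits, converted to per-class belief masses plus an explicit uncertainty mass, and two modalities' opinions are combined with a reduced Dempster-Shafer rule (as in the trusted multi-view classification literature). This is a sketch of that common formulation under stated assumptions; the paper's exact combination rule may differ, and the function names are illustrative.

```python
import torch
import torch.nn.functional as F

def dirichlet_belief(logits: torch.Tensor):
    """Map one modality's logits to belief masses and an uncertainty mass.

    evidence e_k >= 0, Dirichlet parameters alpha_k = e_k + 1,
    strength S = sum_k alpha_k, belief b_k = e_k / S,
    uncertainty u = K / S, so that sum_k b_k + u = 1.
    """
    evidence = F.softplus(logits)               # non-negative evidence
    alpha = evidence + 1.0                      # Dirichlet parameters
    S = alpha.sum(dim=-1, keepdim=True)         # Dirichlet strength
    belief = evidence / S
    uncertainty = logits.shape[-1] / S          # K / S
    return belief, uncertainty

def ds_combine(b1, u1, b2, u2):
    """Reduced Dempster-Shafer combination of two modalities' opinions;
    the conflicting mass C is removed by renormalization."""
    # C = sum over mismatched classes of b1_i * b2_j (i != j).
    C = (b1.unsqueeze(-1) * b2.unsqueeze(-2)).sum((-2, -1)) - (b1 * b2).sum(-1)
    C = C.unsqueeze(-1)
    b = (b1 * b2 + b1 * u2 + b2 * u1) / (1.0 - C)
    u = (u1 * u2) / (1.0 - C)
    return b, u

# Illustrative usage: fuse sMRI and FDG-PET opinions for a binary task.
logits_mri = torch.randn(4, 2)   # e.g., CN-vs-AD logits from the sMRI branch
logits_pet = torch.randn(4, 2)
b1, u1 = dirichlet_belief(logits_mri)
b2, u2 = dirichlet_belief(logits_pet)
b_fused, u_fused = ds_combine(b1, u1, b2, u2)
```

A useful property of this scheme is that a modality with weak evidence (large u) contributes little belief to the fused opinion, which is one way the reliability differences named in the abstract can be handled explicitly.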