Generative AI and Multimodal Neuroimaging: A Review of Research Progress in Auxiliary Diagnosis of Alzheimer's Disease
Chi ZHANG, Yifei TANG, Xudong LI, Shuqiang WANG
Chinese Journal of Alzheimer's Disease and Related Disorders, 2025, Vol. 8, Issue 6: 363-370.
Accurate early diagnosis of Alzheimer's disease (AD) represents a major challenge against the backdrop of global aging. This paper reviews research advances that combine generative artificial intelligence (AI) with multimodal neuroimaging (including MRI and PET) to achieve early and accurate diagnosis of AD. The review systematically analyzes the applications of generative models in AD diagnosis across three key areas: brain image data augmentation, pathological feature representation learning, and brain network modeling. We provide a detailed analysis of how these technologies overcome critical challenges such as neuroimaging data scarcity and class imbalance, enhancing models' classification performance on AD datasets and their ability to predict disease progression. In the analysis of functional and structural brain networks in particular, generative AI offers a new paradigm for understanding AD pathological mechanisms and enabling early prediction by constructing high-fidelity networks. Furthermore, this paper discusses the clinical translation prospects of these technologies in personalized prognosis and treatment monitoring, as well as the technical and ethical challenges to their implementation. This review provides a comprehensive framework for understanding the potential and development trends of generative AI, multimodal neuroimaging, and their derived functional brain network features in the auxiliary diagnosis of early AD.
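The brain network modeling discussed above typically starts from a functional connectivity matrix built from ROI time series. The following is a minimal illustrative sketch (not the review's own pipeline): the ROI count, time-series length, and correlation threshold are all assumed values chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical resting-state fMRI data: 90 ROIs x 200 time points
# (shapes are assumptions for illustration, not from any real dataset)
ts = rng.normal(size=(90, 200))

# Functional connectivity: pairwise Pearson correlation between ROI time series
fc = np.corrcoef(ts)

# Binarize into an adjacency matrix at an arbitrary threshold (0.2 assumed)
adj = (np.abs(fc) > 0.2).astype(int)
np.fill_diagonal(adj, 0)  # remove self-connections

print(fc.shape)  # (90, 90)
```

Generative approaches covered in the review operate on matrices like `fc`, e.g., to reconstruct high-fidelity networks or predict abnormal connections; this sketch only shows the conventional construction step they build on.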
Generative AI / Multimodal neuroimaging / Brain network / Alzheimer's disease / Data augmentation / Cross-modal reconstruction
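The data-augmentation idea the review surveys can be sketched in its simplest form: fit a class-conditional generative model to feature vectors of an under-represented class and sample synthetic examples to rebalance the training set. The sketch below uses a plain Gaussian fit as an illustrative stand-in for the deep generative models (GANs, diffusion models) discussed in the review; the feature dimensions and sample counts are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical minority-class data: 20 AD scans x 5 ROI volume features
# (a stand-in for real, class-imbalanced neuroimaging features)
X_ad = rng.normal(loc=2.0, scale=0.5, size=(20, 5))

def gaussian_augment(X, n_new, rng):
    """Sample synthetic feature vectors from a Gaussian fit to X.

    A minimal stand-in for deep generative augmentation: estimate the
    class-conditional mean and covariance, then draw new samples.
    """
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularize
    return rng.multivariate_normal(mu, cov, size=n_new)

X_synth = gaussian_augment(X_ad, n_new=80, rng=rng)
X_balanced = np.vstack([X_ad, X_synth])
print(X_balanced.shape)  # (100, 5)
```

The deep generative models in the literature replace the Gaussian with learned, high-capacity distributions over whole 3D images, but the role in the pipeline (synthesizing plausible minority-class samples to combat scarcity and imbalance) is the same.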