Generative AI and Multimodal Neuroimaging: A Review of Research Progress in Auxiliary Diagnosis of Alzheimer's Disease

Chi ZHANG, Yifei TANG, Xudong LI, Shuqiang WANG


Chinese Journal of Alzheimer's Disease and Related Disorders ›› 2025, Vol. 8 ›› Issue (6) : 363-370. DOI: 10.3969/j.issn.2096-5516.2025.06.001
Commentary



Abstract

Accurate early diagnosis of Alzheimer’s disease (AD) is a major challenge against the backdrop of global aging. This paper reviews research advances that combine generative artificial intelligence (AI) with multimodal neuroimaging (including MRI and PET) to achieve early and accurate diagnosis of AD. The review systematically analyzes the applications of generative models in AD diagnosis across three key areas: brain image data augmentation, pathological feature representation learning, and brain network modeling. We provide a detailed analysis of how these technologies overcome critical obstacles such as neuroimaging data scarcity and class imbalance, improving both classification performance on AD datasets and the ability to predict disease progression. In the analysis of functional and structural brain networks in particular, generative AI offers a new paradigm for understanding AD pathological mechanisms and enabling early prediction by constructing high-fidelity networks. Furthermore, the paper discusses the clinical translation prospects of these technologies in personalized prognosis and treatment monitoring, as well as the technical and ethical challenges of implementation. This review provides a comprehensive framework for understanding the potential and development trends of generative AI, multimodal neuroimaging, and their derived functional brain network features in the auxiliary diagnosis of early AD.
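The augmentation principle surveyed above can be illustrated in miniature. The sketch below is not one of the reviewed deep generative models (GANs, diffusion models); it simply fits a class-conditional Gaussian to the minority-class feature vectors and samples synthetic examples until the classes are balanced, showing how generatively sampled data can address class imbalance. The function name and parameters are illustrative, not taken from any cited work.

```python
import numpy as np

def augment_minority(features, labels, target_label, rng=None):
    """Balance a dataset by sampling synthetic minority-class feature
    vectors from a Gaussian fitted to the real minority examples.

    A toy stand-in for the deep generative augmenters reviewed above:
    fit a simple generative model of the minority class, then sample
    from it until class counts are equal.
    """
    rng = np.random.default_rng(rng)
    minority = features[labels == target_label]
    n_needed = (labels != target_label).sum() - len(minority)
    if n_needed <= 0:
        return features, labels  # already balanced
    mean = minority.mean(axis=0)
    # Small ridge on the covariance keeps sampling stable when the
    # minority class is tiny (a common situation in AD cohorts).
    cov = np.cov(minority, rowvar=False) + 1e-6 * np.eye(features.shape[1])
    synthetic = rng.multivariate_normal(mean, cov, size=n_needed)
    new_features = np.vstack([features, synthetic])
    new_labels = np.concatenate([labels, np.full(n_needed, target_label)])
    return new_features, new_labels
```

In the reviewed literature the Gaussian sampler is replaced by a conditional GAN or diffusion model, which can capture the far richer structure of whole brain images, but the role in the training pipeline is the same: synthesize extra minority-class examples before classifier training.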

Key words

generative AI / multimodal neuroimaging / brain network / Alzheimer’s disease / data augmentation / cross-modal reconstruction

Cite this article

Chi ZHANG, Yifei TANG, Xudong LI, et al. Generative AI and Multimodal Neuroimaging: A Review of Research Progress in Auxiliary Diagnosis of Alzheimer's Disease[J]. Chinese Journal of Alzheimer's Disease and Related Disorders, 2025, 8(6): 363-370. https://doi.org/10.3969/j.issn.2096-5516.2025.06.001

References

[1]
Gustavsson A, Norton N, Fast T, et al. Global estimates on the number of persons across the Alzheimer's disease continuum[J]. Alzheimer's & Dementia, 2023, 19(2): 658-670.
[2]
Nandi A, Counts N, Chen S, et al. Global and regional projections of the economic burden of Alzheimer's disease and related dementias from 2019 to 2050: a value of statistical life approach[J]. eClinicalMedicine, 2022, 51.
[3]
Tang Z, Chuang KV, Decarli C, et al. Interpretable classification of Alzheimer’s disease pathologies with a convolutional neural network pipeline[J]. Nature Communications, 2019, 10(1): 2173.
Neuropathologists assess vast brain areas to identify diverse and subtly-differentiated morphologies. Standard semi-quantitative scoring approaches, however, are coarse-grained and lack precise neuroanatomic localization. We report a proof-of-concept deep learning pipeline that identifies specific neuropathologies—amyloid plaques and cerebral amyloid angiopathy—in immunohistochemically-stained archival slides. Using automated segmentation of stained objects and a cloud-based interface, we annotate > 70,000 plaque candidates from 43 whole slide images (WSIs) to train and evaluate convolutional neural networks. Networks achieve strong plaque classification on a 10-WSI hold-out set (0.993 and 0.743 areas under the receiver operating characteristic and precision recall curve, respectively). Prediction confidence maps visualize morphology distributions at high resolution. Resulting network-derived amyloid beta (Aβ)-burden scores correlate well with established semi-quantitative scores on a 30-WSI blinded hold-out. Finally, saliency mapping demonstrates that networks learn patterns agreeing with accepted pathologic features. This scalable means to augment a neuropathologist’s ability suggests a route to neuropathologic deep phenotyping.
[4]
Hong X, Huang L, Lei F, et al. The Role and Pathogenesis of Tau Protein in Alzheimer’s Disease[J]. Biomolecules, 2025, 15(6): 824.
[5]
Xi Y, Wang Q, Wu C, et al. Predicting conversion of Alzheimer’s disease based on multi-modal fusion of neuroimaging and genetic data[J]. Complex & Intelligent Systems, 2025, 11(1): 58.
[6]
Hassan A, Imran A, Yasin AU, et al. A Multimodal Approach for Alzheimer's Disease Detection and Classification Using Deep Learning[J]. Journal of Computing & Biomedical Informatics, 2024, 6(2): 441-450.
[7]
Lu D, Popuri K, Ding GW, et al. Multimodal and multiscale deep neural networks for the early diagnosis of Alzheimer’s disease using structural MR and FDG-PET images[J]. Scientific Reports, 2018, 8(1): 5697.
[8]
Venugopalan J, Tong L, Hassanzadeh HR, et al. Multimodal deep learning models for early detection of Alzheimer’s disease stage[J]. Scientific Reports, 2021, 11(1): 3254.
[9]
Gong C, Jing C, Chen X, et al. Generative AI for brain image computing and brain network computing: a review[J]. Frontiers in Neuroscience, 2023, 17: 1203104.
[10]
Sadegh-Zadeh S, Fakhri E, Bahrami M, et al. An approach toward artificial intelligence Alzheimer’s disease diagnosis using brain signals[J]. Diagnostics, 2023, 13(3): 477.
[11]
Logothetis NK. What we can do and what we cannot do with fMRI[J]. Nature, 2008, 453(7197): 869-878.
[12]
Talwar P, Kushwaha S, Chaturvedi M, et al. Systematic Review of Different Neuroimaging Correlates in Mild Cognitive Impairment and Alzheimer’s Disease[J]. Clinical Neuroradiology, 2021, 31(4): 953-967.
[13]
Rabinovici GD, Knopman DS, Arbizu J, et al. Updated appropriate use criteria for amyloid and tau PET: a report from the Alzheimer’s Association and Society for Nuclear Medicine and Molecular Imaging Workgroup[J]. Journal of Nuclear Medicine, 2025, 66(Supplement 2): S5-S31.
[14]
Chen S, Lu J, Li H, et al. Staging tau pathology with tau PET in Alzheimer’s disease: a longitudinal study[J]. Translational Psychiatry, 2021, 11(1): 483.
[15]
Zhang D, Wang Y, Zhou L, et al. Multimodal classification of Alzheimer's disease and mild cognitive impairment[J]. Neuroimage, 2011, 55(3): 856-867.
Effective and accurate diagnosis of Alzheimer's disease (AD), as well as its prodromal stage (i.e., mild cognitive impairment, MCI), has attracted increasing attention recently. So far, multiple biomarkers have been shown to be sensitive to the diagnosis of AD and MCI: structural MR imaging (MRI) for brain atrophy measurement, functional imaging (e.g., FDG-PET) for hypometabolism quantification, and cerebrospinal fluid (CSF) for quantification of specific proteins. However, most existing research focuses on only a single modality of biomarkers, although recent studies have shown that different biomarkers may provide complementary information for the diagnosis of AD and MCI. In this paper, we propose to combine three modalities of biomarkers, i.e., MRI, FDG-PET, and CSF, to discriminate between AD (or MCI) and healthy controls, using a kernel combination method. Specifically, ADNI baseline MRI, FDG-PET, and CSF data from 51 AD patients, 99 MCI patients (including 43 MCI converters who converted to AD within 18 months and 56 MCI non-converters who did not), and 52 healthy controls are used for development and validation of the proposed multimodal classification method. For each MR or FDG-PET image, 93 volumetric features are extracted from 93 regions of interest (ROIs) automatically labeled by an atlas warping algorithm; for CSF biomarkers, the original values are directly used as features. A linear support vector machine (SVM) is then adopted to evaluate classification accuracy with 10-fold cross-validation. For classifying AD from healthy controls, we achieve an accuracy of 93.2% (sensitivity 93%, specificity 93.3%) when combining all three modalities, versus only 86.5% with the best individual modality. Similarly, for classifying MCI from healthy controls, the combined method achieves an accuracy of 76.4% (sensitivity 81.8%, specificity 66%), versus only 72% with the best individual modality. Further analysis of MCI sensitivity indicates that 91.5% of MCI converters and 73.4% of MCI non-converters are correctly classified. Moreover, when a feature selection method is employed to select the most discriminative MR and FDG-PET features, the combined method again shows considerably better performance than any individual modality.
[16]
Risacher SL, Saykin AJ. Neuroimaging and other biomarkers for Alzheimer's disease: the changing landscape of early detection[J]. Annual Review of Clinical Psychology, 2013, 9(1): 621-648.
[17]
Goodfellow IJ, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets[J]. Advances in Neural Information Processing Systems, 2014, 27.
[18]
Yi X, Walia E, Babyn P. Generative adversarial network in medical imaging: A review[J]. Medical Image Analysis, 2019, 58: 101552.
[19]
Awang MK, Ali G, Faheem M. Recent Advancements in Neuroimaging-Based Alzheimer's Disease Prediction Using Deep Learning Approaches in e-Health: A Systematic Review[J]. Health Science Reports, 2025, 8(5): e70802.
[20]
Han L. AD-Diff: enhancing Alzheimer's disease prediction accuracy through multimodal fusion[J]. Frontiers in Computational Neuroscience, 2025, 19: 1484540.
[21]
Bowles C. Medical image synthesis in the diagnosis and study of neurodegenerative diseases[D]. Imperial College London, UK, 2018.
[22]
Abueid AI, Elhossiny MA, Elghazawy MAI, et al. Generative AI in neuroscience imaging: A review[J]. 2025.
[23]
Zheng X, Worhunsky P, Liu Q, et al. Generating synthetic brain PET images of synaptic density based on MR T1 images using deep learning[J]. EJNMMI Physics, 2025, 12(1): 30.
[24]
Dhinagar NJ, Thomopoulos SI, Thompson PM. Generative AI improves MRI-based Detection of Alzheimer’s Disease by using Latent Diffusion Models and Convolutional Neural Networks[J]. Alzheimer's & Dementia, 2024, 20: e089958.
[25]
Shorten C, Khoshgoftaar TM. A survey on image data augmentation for deep learning[J]. Journal of Big Data, 2019, 6(1): 1-48.
[26]
Jung E, Luna M, Park SH. Conditional GAN with 3D discriminator for MRI generation of Alzheimer’s disease progression[J]. Pattern Recognition, 2023, 133: 109061.
[27]
Yuda E, Ando T, Kaneko I, et al. Comprehensive data augmentation approach using WGAN-GP and UMAP for enhancing Alzheimer’s disease diagnosis[J]. Electronics, 2024, 13(18): 3671.
[28]
Yao W, Shen Y, Nicolls F, et al. Conditional diffusion model-based data augmentation for Alzheimer’s prediction[C]// International Conference on Neural Computing for Advanced Applications. Springer, 2023: 33-46.
[29]
Hu S, Yu W, Chen Z, et al. Medical image reconstruction using generative adversarial network for Alzheimer disease assessment with class-imbalance problem[C]// 2020 IEEE 6th International Conference on Computer and Communications (ICCC). IEEE, 2020: 1323-1327.
[30]
Chen K, Weng Y, Huang Y, et al. A multi-view learning approach with diffusion model to synthesize FDG PET from MRI T1WI for diagnosis of Alzheimer's disease[J]. Alzheimer's & Dementia, 2025, 21(2): e14421.
[31]
Zhang Y, Li X, Ji Y, et al. MRAβ: A multimodal MRI-derived amyloid-β biomarker for Alzheimer's disease[J]. Human Brain Mapping, 2023, 44(15): 5139-5152.
Florbetapir F (AV45), a highly sensitive and specific positron emission tomographic (PET) molecular biomarker binding to the amyloid-β of Alzheimer's disease (AD), is constrained by radiation exposure and cost. We sought to address this by combining multimodal magnetic resonance imaging (MRI) with a collaborative generative adversarial network model (CollaGAN) to develop a multimodal MRI-derived amyloid-β (MRAβ) biomarker. We collected multimodal MRI and PET AV45 data from 380 qualified participants in the ADNI dataset and 64 subjects from the OASIS3 dataset. A five-fold cross-validated CollaGAN was applied to generate MRAβ. In the ADNI dataset, we found MRAβ could characterize the subject-level AV45 spatial variations in both AD and mild cognitive impairment (MCI). Voxel-wise two-sample t-tests demonstrated that amyloid-β depositions identified by MRAβ in AD and MCI were significantly higher than in healthy controls (HCs) in widespread cortices (p < .05, corrected) and were very similar to those by AV45 (r > .92, p < .001). Moreover, a 3D ResNet classifier demonstrated that MRAβ was comparable to AV45 in discriminating AD from HC in both the ADNI and OASIS3 datasets, and in discriminating MCI from HC in ADNI. Finally, we found MRAβ could mimic cortical hyper-AV45 in HCs who later converted to MCI (r = .79, p < .001) and was comparable to AV45 in discriminating them from stable HCs (p > .05). In summary, our work illustrates that MRAβ synthesized from multimodal MRI can mimic cerebral amyloid-β deposition like AV45, and lends credence to the feasibility of advancing MRI toward molecularly explainable biomarkers.
[32]
Hu S, Lei B, Wang S, et al. Bidirectional mapping generative adversarial networks for brain MR to PET synthesis[J]. IEEE Transactions on Medical Imaging, 2021, 41(1): 145-157.
[33]
Li C, Wei Y, Chen X, et al. BrainNetGAN: Data augmentation of brain connectivity using generative adversarial network for dementia classification[C]// MICCAI Workshop on Deep Generative Models. Springer, 2021: 103-111.
[34]
Chintapalli SS, Wang R, Yang Z, et al. Generative models of MRI-derived neuroimaging features and associated dataset of 18,000 samples[J]. Scientific Data, 2024, 11(1): 1330.
[35]
Yu W, Lei B, Wang S, et al. Morphological feature visualization of Alzheimer’s disease via multidirectional perception GAN[J]. IEEE Transactions on Neural Networks and Learning Systems, 2022, 34(8): 4401-4415.
[36]
Dolci G, Cruciani F, Rahaman MA, et al. An interpretable generative multimodal neuroimaging-genomics framework for decoding Alzheimer’s disease[J]. Journal of Neural Engineering, 2024.
[37]
Zuo Q, Zhong N, Pan Y, et al. Brain structure-function fusing representation learning using adversarial decomposed-VAE for analyzing MCI[J]. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2023, 31: 4017-4028.
Integrating the brain structural and functional connectivity features is of great significance in both exploring brain science and analyzing cognitive impairment clinically. However, it remains a challenge to effectively fuse structural and functional features in exploring the complex brain network. In this paper, a novel brain structure-function fusing-representation learning (BSFL) model is proposed to effectively learn fused representation from diffusion tensor imaging (DTI) and resting-state functional magnetic resonance imaging (fMRI) for mild cognitive impairment (MCI) analysis. Specifically, the decomposition-fusion framework is developed to first decompose the feature space into the union of the uniform and unique spaces for each modality, and then adaptively fuse the decomposed features to learn MCI-related representation. Moreover, a knowledge-aware transformer module is designed to automatically capture local and global connectivity features throughout the brain. Also, a uniform-unique contrastive loss is further devised to make the decomposition more effective and enhance the complementarity of structural and functional features. The extensive experiments demonstrate that the proposed model achieves better performance than other competitive methods in predicting and analyzing MCI. More importantly, the proposed model could be a potential tool for reconstructing unified brain networks and predicting abnormal connections during the degenerative processes in MCI.
[38]
Kumar S, Payne PR, Sotiras A. Normative modeling using multimodal variational autoencoders to identify abnormal brain volume deviations in Alzheimer’s disease[C]// Proceedings of SPIE, the International Society for Optical Engineering, 2023: 1246503.
[39]
Lynn CW, Bassett DS. The physics of brain network structure, function and control[J]. Nature Reviews Physics, 2019, 1(5): 318-332.
[40]
Zhou T, Ding C, Jing C, et al. BG-GAN: Generative AI Enable Representing Brain Structure-Function Connections for Alzheimer’s Disease[J]. IEEE Transactions on Consumer Electronics, 2025.
[41]
Zuo Q, Chen L, Shen Y, et al. BDHT: generative AI enables causality analysis for mild cognitive impairment[J]. IEEE Transactions on Automation Science and Engineering, 2024.
[42]
Zong Y, Jing C, Chan JH, et al. BrainNetDiff: Generative AI empowers brain network construction via multimodal diffusion[C]// 2024 IEEE International Symposium on Biomedical Imaging (ISBI). IEEE, 2024: 1-5.
[43]
Jiang H, Chen X, Jin C, et al. Structural brain network generation via brain denoising diffusion probabilistic model[C]// International Conference on AI in Healthcare. Springer, 2024: 264-277.
[44]
Zong Y, Zuo Q, Ng MK, et al. A new brain network construction paradigm for brain disorder via diffusion-based graph contrastive learning[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024.
[45]
Teoh JR, Dong J, Zuo X, et al. Advancing healthcare through multimodal data fusion: a comprehensive review of techniques and applications[J]. PeerJ Computer Science, 2024, 10: e2298.
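Several of the multimodal fusion methods cited above share a simple core, stated explicitly in reference [15]: compute one kernel matrix per imaging modality, combine them by a weighted sum, and train a single kernel classifier on the result. A minimal numpy sketch of that combination step follows; the linear kernels, the example weights, and the function name are illustrative assumptions, and the downstream classifier (a linear SVM in reference [15]) is omitted.

```python
import numpy as np

def combined_kernel(modalities, weights):
    """Weighted sum of per-modality linear kernel matrices.

    modalities: list of (n_subjects, n_features_m) arrays, one per
    modality (e.g., MRI, FDG-PET, and CSF feature vectors).
    weights: non-negative weights summing to 1.

    A convex combination of positive semi-definite kernels is itself
    symmetric positive semi-definite, so the result can be passed to
    any kernel classifier in precomputed-kernel mode.
    """
    weights = np.asarray(weights, dtype=float)
    assert np.all(weights >= 0) and np.isclose(weights.sum(), 1.0)
    kernels = [X @ X.T for X in modalities]  # linear kernel per modality
    return sum(w * K for w, K in zip(weights, kernels))
```

The weights control how much each modality contributes; in practice they are tuned on validation data (reference [15] searches them via cross-validation) rather than fixed in advance.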