Multimodal Deep Learning for the Diagnosis and Assessment of Alzheimer's Disease

Sponsor
First Hospital of China Medical University (Other)
Overall Status
Not yet recruiting
CT.gov ID
NCT06081569
Collaborator
(none)

Study Details

Study Description

Brief Summary

Alzheimer's disease (AD) is the most common dementia and is among the most costly and lethal diseases. With the rapid aging of the population, it will place an ever greater burden on society and the economy. AD manifests as the progressive loss of memory, language and visuospatial function, executive function, daily living abilities, and so forth. The pathophysiological changes of AD begin 10-20 years before clinical symptoms appear, yet there is still no effective strategy for early diagnosis. Mild cognitive impairment (MCI) is considered a transitional state between healthy aging and the clinical diagnosis of dementia and has received increasing attention as a separate diagnostic entity.

To make the diagnosis, doctors must comprehensively consider multimodal medical information, including clinical symptoms, neuroimages, neuropsychological tests, laboratory examinations, etc. Multimodal deep learning has risen to this challenge: it can integrate the various modalities of biological information and capture the relationships among them, contributing to higher accuracy and efficiency. It has been widely applied in imaging, tumor pathology, genomics, etc. To date, however, deep learning studies of AD have focused mainly on multimodal neuroimaging, while the full range of multimodal medical information still requires comprehensive integration and intelligent analysis. Moreover, studies reveal that some subtle symptoms in MCI and the early stage of AD may also aid diagnosis and assessment, such as gait disorder, facial expression identification dysfunction, and speech and language impairment. Doctors can hardly detect these slight and complex changes, whereas multimodal deep learning can capture them by fully mining video and audio information.

In conclusion, we aim to explore the features of gait disorder, facial expression identification dysfunction, and speech and language impairment in MCI and AD, and to analyze their diagnostic efficiency. We will identify the different degrees of dependency on multimodal medical information in diagnosis and finally build an optimal multimodal diagnostic method utilizing the most convenient and economical information. In addition, based on follow-up observations of how multimodal medical information changes with the progression of AD and MCI, we expect to establish an effective and convenient diagnostic strategy.

Condition or Disease Intervention/Treatment Phase
  • Diagnostic Test: gait video; speech video; facial expression video;

Detailed Description

Our objective is the early diagnosis and assessment of AD and MCI based on multimodal deep learning. First, gait disorder, facial expression identification dysfunction, and speech and language impairment are of great significance in the occurrence and development of AD and MCI. However, because these clinical symptoms are highly complex and subtle, no consistent conclusions have been reached. Hence, we will apply machine learning methods to recognize the video and audio information. In this way, we will explore the changing characteristics of gait, expression, and language in AD and MCI, and analyze their effectiveness as diagnostic markers, finally providing new ideas and experimental data for the diagnosis of AD and MCI. Second, multimodal medical information needs to be integrated and comprehensively analyzed. We aim to propose an optimal diagnostic strategy based on the different degrees of dependency on multimodal medical information in diagnosis. Moreover, by observing the changes in multimodal medical information with the progression of AD and MCI, we expect to build a model predicting AD diagnosis and prognosis.

The methods are as follows:
  1. Collecting multimodal medical information. A variety of multimodal medical information will be carefully collected, including baseline demographic data, chief complaint and medical history, peripheral organ function assessment, laboratory examinations, imaging examinations, neuroelectrophysiological examinations, neurocognitive and psychological examinations, information on gait, expression, and language, biological samples, etc.

  2. Revealing the changes of gait, expression, and language in patients with AD and MCI, and verifying their diagnostic efficacy.

For multimodal medical information on gait, the OpenPose model will be used to extract human key points and construct a human skeleton structure diagram. Based on graph neural networks and convolutional neural networks, instantaneous action analysis of single-frame images will be carried out; a Transformer model will then perform gait sequence analysis by integrating multi-frame video. For multimodal medical information on facial expression, the Dlib algorithm will be used to extract facial key points, combined with facial expression images, and a spatiotemporal Transformer model will be used for facial expression analysis. For multimodal medical information on language, the ASRT model will be used for speech recognition and text content extraction. Simultaneously, the Fourier transform and wavelet transform will be applied to extract frequency-domain information, and speech features will be analyzed by integrating language content, voice intonation, speech speed, and other information. Based on an attention model, the gait, expression, and language analysis results of AD and MCI patients will be compared with those of the control group to reveal the features of AD and MCI and provide evidence for disease diagnosis.
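As one illustration, Fourier-based frequency-domain speech features of the kind described above can be sketched in a few lines. The feature names and the synthetic test tone below are our own illustrative choices, not part of the study protocol (which additionally uses wavelet transforms and the ASRT model):

```python
import numpy as np

def speech_frequency_features(signal: np.ndarray, sample_rate: int) -> dict:
    """Toy frequency-domain descriptors of a speech segment via the FFT."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    power = spectrum ** 2
    # Spectral centroid: power-weighted mean frequency (Hz).
    centroid = float((freqs * power).sum() / power.sum())
    # Dominant frequency: peak of the magnitude spectrum.
    dominant = float(freqs[np.argmax(spectrum)])
    return {"spectral_centroid_hz": centroid, "dominant_hz": dominant}

# Example: one second of a pure 440 Hz tone sampled at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
feats = speech_frequency_features(tone, sr)  # dominant frequency is 440 Hz
```

In a real pipeline, features like these would be computed over short sliding windows of recorded speech and combined with intonation and speech-rate measures.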

  3. Analyzing the different degrees of dependency on multimodal information in the diagnosis of AD and MCI, and establishing an optimal diagnostic strategy. In the supervised learning process, an attention-mechanism-based method will be used to analyze the influence of each modality on the final results. At the same time, based on a knowledge graph, the patient's blood biochemical indicators, genomic information, and other domain knowledge will be added to the model. Based on Bayesian probabilistic inference and causal inference theory, causal programming will be used to model the causal relationships between the information of different modalities and the diagnostic results. Based on AutoML methods, multimodal information will be combined and optimized, and a reliable optimal diagnostic strategy will be established according to the experimental results.
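A minimal sketch of attention-based modality weighting of the kind described above, assuming each modality has already been reduced to a fixed-length feature vector. The modality names, dimensions, and random features are illustrative, not the study's actual model:

```python
import numpy as np

def attention_fusion(modalities: dict, query: np.ndarray):
    """Score each modality's feature vector against a query vector,
    softmax the scores into attention weights, and return the weighted
    sum along with the per-modality weights (their relative influence)."""
    names = list(modalities)
    feats = np.stack([modalities[n] for n in names])   # (M, D)
    scores = feats @ query / np.sqrt(len(query))       # scaled dot product
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                           # softmax over modalities
    fused = weights @ feats                            # fused feature, shape (D,)
    return fused, dict(zip(names, weights))

rng = np.random.default_rng(0)
D = 8  # illustrative feature dimension
mods = {
    "gait": rng.normal(size=D),
    "face": rng.normal(size=D),
    "speech": rng.normal(size=D),
}
fused, w = attention_fusion(mods, query=rng.normal(size=D))
```

In the study's supervised setting the query would be learned end to end, and the weights `w` are what makes the modality dependency inspectable.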

  4. Exploring the changes of multimodal medical information with the progression of the disease, and building a predictive model for early diagnosis and disease progression of AD.

Treating multimodal medical information as the conditioning input, a Transformer model will be used to model time-sequence information, and a conditional diffusion model will be used to generate patients' MRI image changes and other disease progression-related information, providing a basis for disease progression prediction. Based on large multimodal model technology, the output of the model will be adjusted according to the judgments and descriptions of professional doctors, so as to generate predictions in line with their judgment and finally construct an interpretable model for early diagnosis and disease progression prediction.
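For orientation, the forward (noising) half of a DDPM-style diffusion model, which such conditional generation reverses during sampling, can be written in closed form. The noise schedule and array shapes here are illustrative assumptions, not the study's configuration:

```python
import numpy as np

def diffusion_forward(x0: np.ndarray, t: int, betas: np.ndarray, rng) -> np.ndarray:
    """Closed-form forward diffusion step:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps,
    where alpha_bar_t is the cumulative product of (1 - beta)."""
    alpha_bar = np.cumprod(1.0 - betas)[t]
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal((4, 4))          # stand-in for a tiny MRI patch
betas = np.linspace(1e-4, 0.02, 1000)     # a common linear noise schedule
x_noisy = diffusion_forward(x0, t=999, betas=betas, rng=rng)  # nearly pure noise
```

A conditional diffusion model trains a network to undo these steps while attending to the conditioning information (here, the baseline multimodal record), so that sampling yields plausible follow-up images.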

Study Design

Study Type:
Observational
Anticipated Enrollment :
300 participants
Observational Model:
Cohort
Time Perspective:
Prospective
Official Title:
Multimodal Deep Learning for the Diagnosis and Assessment of Alzheimer's Disease
Anticipated Study Start Date :
Oct 15, 2023
Anticipated Primary Completion Date :
Oct 15, 2024
Anticipated Study Completion Date :
Oct 15, 2026

Arms and Interventions

Arm Intervention/Treatment
Alzheimer's disease

The diagnosis of AD follows the recommendations of the National Institute on Aging-Alzheimer's Association workgroups on diagnostic guidelines for AD.

Diagnostic Test: gait video; speech video; facial expression video
The videos of participants' gait, facial expression, and speech will be recorded and further analyzed. Other routine diagnostic tests will also be performed, such as MRI, cognitive scales, etc.
Other Names:
  • Other routine diagnostic tests such as imaging, cognitive scales, etc.

Mild cognitive impairment

The diagnosis of MCI follows the criteria defined by Petersen in 2004.

Diagnostic Test: gait video; speech video; facial expression video
The videos of participants' gait, facial expression, and speech will be recorded and further analyzed. Other routine diagnostic tests will also be performed, such as MRI, cognitive scales, etc.
Other Names:
  • Other routine diagnostic tests such as imaging, cognitive scales, etc.

Control

Participants who are age-matched with AD and MCI participants, without cognitive impairment.

Diagnostic Test: gait video; speech video; facial expression video
The videos of participants' gait, facial expression, and speech will be recorded and further analyzed. Other routine diagnostic tests will also be performed, such as MRI, cognitive scales, etc.
Other Names:
  • Other routine diagnostic tests such as imaging, cognitive scales, etc.

Outcome Measures

Primary Outcome Measures

1. The diagnostic efficiency of the multimodal deep learning diagnostic strategy [The outcome will be measured and analyzed once all baseline multimodal medical information has been collected.]

   The diagnostic efficiency will be measured by the area under the curve (AUC) of the receiver operating characteristic (ROC) curve.

Secondary Outcome Measures

1. The prognostic efficiency of the multimodal deep learning prognostic strategy [The outcome will be measured and analyzed once all two-year follow-up multimodal medical information has been collected.]

   The prognostic efficiency will be measured by the area under the curve (AUC) of the receiver operating characteristic (ROC) curve.
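For reference, the AUC of the ROC curve can be computed directly from diagnostic scores via the rank-sum (Mann-Whitney U) identity: it is the probability that a randomly chosen positive case scores higher than a randomly chosen negative case. The labels and scores below are synthetic, purely for illustration:

```python
import numpy as np

def roc_auc(labels: np.ndarray, scores: np.ndarray) -> float:
    """AUC via the rank-sum identity; ties get half credit."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

labels = np.array([1, 1, 1, 0, 0, 0])
scores = np.array([0.9, 0.8, 0.4, 0.7, 0.3, 0.2])
auc = roc_auc(labels, scores)  # 8 of the 9 positive-negative pairs are ranked correctly, so AUC = 8/9
```

An AUC of 0.5 corresponds to chance-level discrimination and 1.0 to perfect separation of patients from controls.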

Eligibility Criteria

Criteria

Ages Eligible for Study:
50 Years to 85 Years
Sexes Eligible for Study:
All
Accepts Healthy Volunteers:
Yes
Inclusion Criteria:
1. Participants are between 50 and 85 years of age, male or female;

2. Participants have completed primary school or above, with normal hearing, vision, and pronunciation, using Chinese as their mother tongue and Mandarin as their daily language;

3. AD and MCI participants meet the corresponding diagnostic criteria mentioned above;

4. MMSE scores are between 10 and 28, and CDR scores are no more than 2;

5. Patients or family members agree to sign informed consent.

Exclusion Criteria:
1. Participants suffer from neurological or psychiatric disorders that could cause brain dysfunction, such as depression, tumors, Parkinson's disease, metabolic encephalopathy, encephalitis, multiple sclerosis, epilepsy, brain trauma, normal-pressure hydrocephalus, and so forth;

2. Participants suffer from systemic diseases that could cause cognitive impairment, such as hepatic insufficiency, renal insufficiency, thyroid dysfunction, severe anemia, folic acid or vitamin B12 deficiency, syphilis, HIV infection, alcohol and drug abuse, and so forth;

3. Participants suffer from diseases that prevent them from cooperating with the examinations;

4. Participants cannot undergo magnetic resonance imaging;

5. Participants have intellectual disability or neurodevelopmental disorders;

6. Participants refuse to sign informed consent.

Contacts and Locations

Locations

No locations specified.

Sponsors and Collaborators

• First Hospital of China Medical University

Investigators

• Study Chair: Huayan Liu, Department of Neurology, First Affiliated Hospital of China Medical University

Study Documents (Full-Text)

None provided.

More Information

Publications

Responsible Party:
Liu Huayan, Neurology Department, First Hospital of China Medical University
ClinicalTrials.gov Identifier:
NCT06081569
Other Study ID Numbers:
• LHuayan
First Posted:
Oct 13, 2023
Last Update Posted:
Oct 13, 2023
Last Verified:
Oct 1, 2023
Individual Participant Data (IPD) Sharing Statement:
Yes
Plan to Share IPD:
Yes
Studies a U.S. FDA-regulated Drug Product:
No
Studies a U.S. FDA-regulated Device Product:
No
Keywords provided by Liu Huayan, Neurology Department, First Hospital of China Medical University
Additional relevant MeSH terms:

Study Results

No Results Posted as of Oct 13, 2023