TY - JOUR
T1 - Detection of Alzheimer’s disease using ECD SPECT images by transfer learning from FDG PET
AU - For the Alzheimer’s Disease Neuroimaging Initiative
AU - Ni, Yu Ching
AU - Tseng, Fan Pin
AU - Pai, Ming Chyi
AU - Hsiao, Ing Tsung
AU - Lin, Kun Ju
AU - Lin, Zhi Kun
AU - Lin, Wen Bin
AU - Chiu, Pai Yi
AU - Hung, Guang Uei
AU - Chang, Chiung Chih
AU - Chang, Ya Ting
AU - Chuang, Keh-Shih
N1 - Funding Information:
The FDG PET data collection and sharing for this project was funded by the Alzheimer’s Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defense award number W81XWH-12-2-0012). The funding details of ADNI can be found at: http://adni.loni.usc.edu/about/funding/.
Funding Information:
We are grateful for funding from the Ministry of Science and Technology (MOST), Taiwan (grants: MOST 107-3111-Y-042A-097, MOST 108-3111-Y-042A-117, and 110-1401-01-22-01) and Chang Gung Memorial Hospital (grants: CORPG3J0342, CMRPG3J0371, CMRPG3J0361, and CMRPG3J0372).
Publisher Copyright:
© 2021, The Japanese Society of Nuclear Medicine.
PY - 2021/8
Y1 - 2021/8
N2 - Objective: To develop a practical method to rapidly utilize a deep learning model that automatically extracts image features from a small number of SPECT brain perfusion images in general clinics in order to objectively evaluate Alzheimer's disease (AD). Methods: Because of its low cost and convenient access in general clinics, Tc-99m-ECD SPECT brain perfusion imaging data were used in this study for AD detection. Two-stage transfer learning based on the Inception v3 network model was performed using the ImageNet dataset and the ADNI database. To improve training accuracy, each three-dimensional image was reorganized into three sets of two-dimensional images for data augmentation and ensemble learning. The effect of pre-training parameters on the use of Tc-99m-ECD SPECT images to distinguish AD from normal cognition (NC) was investigated, as well as the effect of the sample size of F-18-FDG PET images used in pre-training. The same model was also fine-tuned to predict the MMSE score from the Tc-99m-ECD SPECT image. Results: The AUC values with/without pre-training parameters for Tc-99m-ECD SPECT images to distinguish AD from NC were 0.86 and 0.90, respectively. The sensitivity, specificity, precision, accuracy, and F1 score were 100%, 75%, 76%, 86%, and 86%, respectively, for the model pre-trained with 1000 F-18-FDG PET cases. The AUC values for pre-training dataset sample sizes of 100, 200, 400, 800, and 1000 cases were 0.86, 0.91, 0.95, 0.97, and 0.97, respectively. Regardless of the pre-training condition used for the ECD dataset, the AUC value was greater than 0.85. Finally, the predicted cognitive scores correlated with the MMSE scores (R2 = 0.7072). Conclusions: With the ADNI pre-trained model, the sensitivity and accuracy of the proposed deep learning model using SPECT ECD perfusion images to differentiate AD from NC were increased by approximately 30% and 10%, respectively. Our study indicated that a model trained on FDG PET metabolic imaging of the same disease can be transferred to a small sample of SPECT cerebral perfusion images. This model will contribute to the practical use of SPECT cerebral perfusion images with deep learning technology to objectively recognize AD.
AB - Objective: To develop a practical method to rapidly utilize a deep learning model that automatically extracts image features from a small number of SPECT brain perfusion images in general clinics in order to objectively evaluate Alzheimer's disease (AD). Methods: Because of its low cost and convenient access in general clinics, Tc-99m-ECD SPECT brain perfusion imaging data were used in this study for AD detection. Two-stage transfer learning based on the Inception v3 network model was performed using the ImageNet dataset and the ADNI database. To improve training accuracy, each three-dimensional image was reorganized into three sets of two-dimensional images for data augmentation and ensemble learning. The effect of pre-training parameters on the use of Tc-99m-ECD SPECT images to distinguish AD from normal cognition (NC) was investigated, as well as the effect of the sample size of F-18-FDG PET images used in pre-training. The same model was also fine-tuned to predict the MMSE score from the Tc-99m-ECD SPECT image. Results: The AUC values with/without pre-training parameters for Tc-99m-ECD SPECT images to distinguish AD from NC were 0.86 and 0.90, respectively. The sensitivity, specificity, precision, accuracy, and F1 score were 100%, 75%, 76%, 86%, and 86%, respectively, for the model pre-trained with 1000 F-18-FDG PET cases. The AUC values for pre-training dataset sample sizes of 100, 200, 400, 800, and 1000 cases were 0.86, 0.91, 0.95, 0.97, and 0.97, respectively. Regardless of the pre-training condition used for the ECD dataset, the AUC value was greater than 0.85. Finally, the predicted cognitive scores correlated with the MMSE scores (R2 = 0.7072). Conclusions: With the ADNI pre-trained model, the sensitivity and accuracy of the proposed deep learning model using SPECT ECD perfusion images to differentiate AD from NC were increased by approximately 30% and 10%, respectively. Our study indicated that a model trained on FDG PET metabolic imaging of the same disease can be transferred to a small sample of SPECT cerebral perfusion images. This model will contribute to the practical use of SPECT cerebral perfusion images with deep learning technology to objectively recognize AD.
UR - http://www.scopus.com/inward/record.url?scp=85107425112&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85107425112&partnerID=8YFLogxK
U2 - 10.1007/s12149-021-01626-3
DO - 10.1007/s12149-021-01626-3
M3 - Article
C2 - 34076857
AN - SCOPUS:85107425112
SN - 0914-7187
VL - 35
SP - 889
EP - 899
JO - Annals of Nuclear Medicine
JF - Annals of Nuclear Medicine
IS - 8
ER -