TY - GEN
T1 - AI-Assisted Stanford Classification of Aortic Dissection in CT Imaging Using Volumetric 3D CNN with External Guided Attention
AU - Liou, Cheng Fu
AU - Huang, Li Ting
AU - Kuo, Paul
AU - Wang, Chien Kuo
AU - Guo, Jiun In
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021
Y1 - 2021
N2 - This paper reports an innovative approach to the classification of Stanford Type A and Type B aortic dissection using a 3D CNN in conjunction with a novel Guided Attention (GA) mechanism. Computed Tomography (CT) scans are increasingly used to diagnose aortic dissection, and AI-assisted technology has proven effective in increasing the productivity of radiologists. However, a general 3D CNN, even though it exploits spatial continuity, is unable to focus on the torn region of the aorta. In contrast, we propose an innovative approach termed 'External Guided Attention' (EGA), which is capable of focusing on both global and local features and guiding the model to learn key representations of the lesion. The scheme is designed so that grayscale image inputs combined with EGA channels can be trained and fine-tuned like regular RGB image inputs, allowing a model pre-trained on RGB video sequences to be utilized. Finally, we demonstrate that our new approach significantly outperforms other attention methods in categorizing Stanford Type-A and Type-B aortic dissection, achieving an accuracy of 0.991 and an AUC of 0.994 on our untrimmed test dataset.
AB - This paper reports an innovative approach to the classification of Stanford Type A and Type B aortic dissection using a 3D CNN in conjunction with a novel Guided Attention (GA) mechanism. Computed Tomography (CT) scans are increasingly used to diagnose aortic dissection, and AI-assisted technology has proven effective in increasing the productivity of radiologists. However, a general 3D CNN, even though it exploits spatial continuity, is unable to focus on the torn region of the aorta. In contrast, we propose an innovative approach termed 'External Guided Attention' (EGA), which is capable of focusing on both global and local features and guiding the model to learn key representations of the lesion. The scheme is designed so that grayscale image inputs combined with EGA channels can be trained and fine-tuned like regular RGB image inputs, allowing a model pre-trained on RGB video sequences to be utilized. Finally, we demonstrate that our new approach significantly outperforms other attention methods in categorizing Stanford Type-A and Type-B aortic dissection, achieving an accuracy of 0.991 and an AUC of 0.994 on our untrimmed test dataset.
UR - http://www.scopus.com/inward/record.url?scp=85124201398&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85124201398&partnerID=8YFLogxK
U2 - 10.1109/BioCAS49922.2021.9644986
DO - 10.1109/BioCAS49922.2021.9644986
M3 - Conference contribution
AN - SCOPUS:85124201398
T3 - BioCAS 2021 - IEEE Biomedical Circuits and Systems Conference, Proceedings
BT - BioCAS 2021 - IEEE Biomedical Circuits and Systems Conference, Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2021 IEEE Biomedical Circuits and Systems Conference, BioCAS 2021
Y2 - 6 October 2021 through 9 October 2021
ER -