Deep learning for breast cancer classification with mammography

Wei Tse Yang, Ting Yu Su, Tsu Chi Cheng, Yi Fei He, Yu-Hua Dean Fang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Current mammography screening results in a high recall rate, and distinguishing between BI-RADS 3 and BI-RADS 4 remains a challenge for radiologists. To assist radiologists' diagnosis, recent research on computer-aided diagnosis (CAD) systems has shown that deep learning methods can significantly improve lesion detection, segmentation, and classification. However, there is little evidence that deep learning models can reduce the high recall rate, because few studies report performance on BI-RADS 3 and BI-RADS 4 cases. Moreover, few studies extend current models to combine the craniocaudal (CC) and mediolateral oblique (MLO) views in a single prediction. We therefore propose convolutional neural networks for breast cancer classification. Our model accepts images at four input sizes, and we extend it to consider the CC and MLO views jointly in a single prediction. To validate our models, we split the data by patient rather than by image. The training set comprised 4255 images, and the test set contained 355 images verified by biopsy and callback. Human experts achieved an overall accuracy of 65.3%, while our model reached a higher accuracy of 79.6%. On cases in BI-RADS 3 and 4, human experts achieved an accuracy of 54.1%, whereas our model maintained a high accuracy of 75.7%. When we combined the CC and MLO views in a single prediction, we achieved an AUC of 0.86.
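The patient-level split described in the abstract (splitting by patient rather than by image, so that no patient contributes images to both the training and test sets) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the record fields such as `patient_id` are assumed names.

```python
import random
from collections import defaultdict


def patient_level_split(image_records, test_fraction=0.1, seed=0):
    """Split image records into train/test sets by patient, not by image,
    so images from one patient never appear in both sets."""
    # Group all images belonging to the same patient.
    by_patient = defaultdict(list)
    for rec in image_records:
        by_patient[rec["patient_id"]].append(rec)

    # Shuffle patients (not images) and hold out a fraction of them.
    patients = sorted(by_patient)
    rng = random.Random(seed)
    rng.shuffle(patients)
    n_test = max(1, int(len(patients) * test_fraction))
    test_patients = patients[:n_test]
    train_patients = patients[n_test:]

    train = [r for p in train_patients for r in by_patient[p]]
    test = [r for p in test_patients for r in by_patient[p]]
    return train, test
```

Splitting by image instead would leak near-identical views of the same breast across the train/test boundary and inflate test accuracy, which is why a patient-level split is the safer validation protocol.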

Original language: English
Title of host publication: International Forum on Medical Imaging in Asia 2019
Editors: Jong Hyo Kim, Hiroshi Fujita, Feng Lin
Publisher: SPIE
ISBN (Electronic): 9781510627758
DOI: 10.1117/12.2519603
Publication status: Published - 2019 Jan 1
Event: International Forum on Medical Imaging in Asia 2019 - Singapore, Singapore
Duration: 2019 Jan 7 to 2019 Jan 9

Publication series

Name: Proceedings of SPIE - The International Society for Optical Engineering
Volume: 11050
ISSN (Print): 0277-786X
ISSN (Electronic): 1996-756X

Conference

Conference: International Forum on Medical Imaging in Asia 2019
Country: Singapore
City: Singapore
Period: 2019 Jan 7 to 2019 Jan 9


All Science Journal Classification (ASJC) codes

  • Electronic, Optical and Magnetic Materials
  • Condensed Matter Physics
  • Computer Science Applications
  • Applied Mathematics
  • Electrical and Electronic Engineering

Cite this

Yang, W. T., Su, T. Y., Cheng, T. C., He, Y. F., & Fang, Y-H. D. (2019). Deep learning for breast cancer classification with mammography. In J. H. Kim, H. Fujita, & F. Lin (Eds.), International Forum on Medical Imaging in Asia 2019 [1105014] (Proceedings of SPIE - The International Society for Optical Engineering; Vol. 11050). SPIE. https://doi.org/10.1117/12.2519603