Journal of Computer Science and Technology ›› 2022, Vol. 37 ›› Issue (2): 277-294. DOI: 10.1007/s11390-020-0192-0

Special Issue: Artificial Intelligence and Pattern Recognition; Computer Graphics and Multimedia

• Artificial Intelligence and Pattern Recognition •

DG-CNN: Introducing Margin Information into Convolutional Neural Networks for Breast Cancer Diagnosis in Ultrasound Images

Xiao-Zheng Xie1 (解晓政), Jian-Wei Niu1,2 (牛建伟), Senior Member, IEEE, Xue-Feng Liu1,* (刘雪峰), Qing-Feng Li2 (李青锋), Yong Wang3 (王勇), Jie Han3 (韩洁), and Shaojie Tang4 (唐少杰), Member, IEEE        

  1 State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, Beijing 100191, China
    2 Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, Hangzhou 310051, China
    3 Department of Diagnostic Ultrasound, National Cancer Center, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing 100021, China
    4 Naveen Jindal School of Management, The University of Texas at Dallas, TX 75080-3021, U.S.A.
  • Received: 2019-11-26; Revised: 2020-04-25; Accepted: 2020-06-02; Online: 2022-03-31; Published: 2022-03-31
  • Contact: Xue-Feng Liu, E-mail: liu_xuefeng@buaa.edu.cn
  • About author: Xue-Feng Liu received his M.S. and Ph.D. degrees in automatic control and aerospace engineering from the Beijing Institute of Technology and the University of Bristol, United Kingdom, in 2003 and 2008, respectively. He was an associate professor at the School of Electronics and Information Engineering, Huazhong University of Science and Technology, Wuhan, from 2008 to 2018. He is currently an associate professor at the School of Computer Science and Engineering, Beihang University, Beijing. His research interests include wireless sensor networks, distributed computing, and in-network processing. He has served as a reviewer for several international journals and conference proceedings.
  • Supported by:
    This work was supported by the National Natural Science Foundation of China under Grant Nos. 61976012 and 61772060, the National Key Research and Development Program of China under Grant No. 2017YFB1301100, and China Education and Research Network Innovation Project under Grant No. NGII20170315.

Although the use of convolutional neural networks (CNNs) for computer-aided diagnosis (CAD) has made tremendous progress in the last few years, small medical datasets remain the major bottleneck in this area. To address this problem, researchers have started looking for information beyond the medical datasets themselves. Previous efforts mainly leverage information from natural images via transfer learning. More recent work focuses on integrating knowledge from medical practitioners, either by letting networks resemble how practitioners are trained or how they view images, or by using extra annotations. In this paper, we propose a scheme named Domain Guided-CNN (DG-CNN) to incorporate margin information, a feature described in the consensus used by radiologists to diagnose cancer in breast ultrasound (BUS) images. In DG-CNN, attention maps that highlight the margin areas of tumors are first generated and then incorporated into the networks through different approaches. We have tested the performance of DG-CNN on our own dataset (including 1485 ultrasound images) and on a public dataset. The results show that DG-CNN can be applied to different network structures such as VGG and ResNet to improve their performance. For example, experimental results on our dataset show that, with a certain integration mode, DG-CNN improves over the baseline network ResNet18 by 2.17% in accuracy, 1.69% in sensitivity, 2.64% in specificity, and 2.57% in AUC (area under the curve). To the best of our knowledge, this is the first time that margin information has been used to improve the performance of deep neural networks in diagnosing breast cancer in BUS images.
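To make the idea concrete, the following is a minimal sketch, not the authors' implementation, of one plausible way to feed a margin attention map into a CNN. It assumes a binary tumor mask is available for each image, approximates the margin area as the band between a dilation and an erosion of that mask, and fuses the map with the grayscale BUS image by simple channel concatenation before a ResNet-18 backbone (PyTorch). The helper names margin_map and MarginGuidedResNet, the band width, and the concatenation-based fusion are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn.functional as F
from torchvision.models import resnet18

def margin_map(mask: torch.Tensor, width: int = 7) -> torch.Tensor:
    """Approximate a tumor-margin map from a binary mask of shape (N, 1, H, W).

    Dilation and erosion are implemented with max-pooling; their difference
    is a thin band around the tumor boundary, i.e., the margin region.
    """
    pad = width // 2
    dilated = F.max_pool2d(mask, kernel_size=width, stride=1, padding=pad)
    eroded = -F.max_pool2d(-mask, kernel_size=width, stride=1, padding=pad)
    return (dilated - eroded).clamp(0.0, 1.0)

class MarginGuidedResNet(torch.nn.Module):
    """ResNet-18 whose stem takes two channels: BUS image + margin map."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.backbone = resnet18(num_classes=num_classes)
        # Replace the 3-channel stem with a 2-channel one (image, margin map).
        self.backbone.conv1 = torch.nn.Conv2d(
            2, 64, kernel_size=7, stride=2, padding=3, bias=False)

    def forward(self, image: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        x = torch.cat([image, margin_map(mask)], dim=1)
        return self.backbone(x)

# Toy usage on random tensors shaped like 224x224 grayscale BUS images.
model = MarginGuidedResNet()
image = torch.rand(4, 1, 224, 224)
mask = (torch.rand(4, 1, 224, 224) > 0.5).float()
logits = model(image, mask)  # (4, 2): benign/malignant scores

The abstract mentions that margin maps can be incorporated via different integration modes; input-level channel concatenation, as sketched here, is only one of the simplest possibilities and may differ from the modes actually evaluated in the paper.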

Key words: medical consensus; domain knowledge; breast cancer diagnosis; margin map; deep neural network

