Journal of Computer Science and Technology ›› 2022, Vol. 37 ›› Issue (2): 277-294. DOI: 10.1007/s11390-020-0192-0

Special Topic: Artificial Intelligence and Pattern Recognition; Computer Graphics and Multimedia


DG-CNN: Introducing Margin Information into Convolutional Neural Networks for Breast Cancer Diagnosis in Ultrasound Images

Xiao-Zheng Xie1 (解晓政), Jian-Wei Niu1,2 (牛建伟), Senior Member, IEEE, Xue-Feng Liu1,* (刘雪峰), Qing-Feng Li2 (李青锋), Yong Wang3 (王勇), Jie Han3 (韩洁), and Shaojie Tang4 (唐少杰), Member, IEEE        

  1. State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, Beijing 100191, China
  2. Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, Hangzhou 310051, China
  3. Department of Diagnostic Ultrasound, National Cancer Center, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing 100021, China
  4. Naveen Jindal School of Management, The University of Texas at Dallas, TX 75080-3021, U.S.A.
  • Received:2019-11-26 Revised:2020-04-25 Accepted:2020-06-02 Online:2022-03-31 Published:2022-03-31
  • Contact: Xue-Feng Liu, E-mail: liu_xuefeng@buaa.edu.cn
  • About author: Xue-Feng Liu received his M.S. and Ph.D. degrees in automatic control and aerospace engineering from the Beijing Institute of Technology and the University of Bristol, United Kingdom, in 2003 and 2008, respectively. He was an associate professor at the School of Electronics and Information Engineering, Huazhong University of Science and Technology, Wuhan, from 2008 to 2018. He is currently an associate professor at the School of Computer Science and Engineering, Beihang University, Beijing. His research interests include wireless sensor networks, distributed computing, and in-network processing. He has served as a reviewer for several international journals and conference proceedings.
  • Supported by:
    This work was supported by the National Natural Science Foundation of China under Grant Nos. 61976012 and 61772060, the National Key Research and Development Program of China under Grant No. 2017YFB1301100, and China Education and Research Network Innovation Project under Grant No. NGII20170315.

Background
Thanks to the rapid development of deep learning, computer-aided diagnosis based on deep learning, especially convolutional neural networks, has made tremendous progress over the past few years. However, small medical datasets remain the major bottleneck in this area. To address this problem, researchers have begun to look for auxiliary information beyond the medical datasets themselves. Previous work mainly leveraged information from natural images via transfer learning. More recent studies attempt to introduce the prior knowledge of medical practitioners, for example by letting networks mimic how practitioners are trained and how they read images, or by using extra annotations provided by doctors. Introducing such information has greatly improved the diagnostic performance of these networks.
Objective
We attempt to identify and exploit another kind of prior knowledge and apply it to computer-aided diagnosis of breast cancer in ultrasound images. Specifically, we study how this prior knowledge can be represented, how it can be incorporated into convolutional neural networks, and how much it improves diagnostic performance once incorporated.
Methods
In this paper, we propose a scheme named Domain Guided-CNN (DG-CNN) to incorporate medical prior information, here the margin information of lesions, into computer-aided diagnosis of breast cancer based on ultrasound images. As a feature described in the consensus that radiologists follow when diagnosing cancer in breast ultrasound images, margin information plays a crucial role in the final diagnosis. In DG-CNN, we first generate attention maps that describe the margin areas of tumors, and then merge them into the networks through different approaches. Specifically, we design three kinds of margin attention maps, each generated in a different way, and four approaches to incorporating this information: three direct-fusion modes and one multi-task learning mode.
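The paper does not detail here how the margin attention maps are produced. As one illustrative sketch (an assumption, not the authors' exact procedure), a margin attention map can be derived from a binary tumor mask by taking the band between a dilated and an eroded copy of the mask; the square structuring element and the radius `r` are our choices:

```python
import numpy as np

def dilate(mask: np.ndarray, r: int) -> np.ndarray:
    """Binary dilation with a (2r+1) x (2r+1) square structuring element."""
    h, w = mask.shape
    padded = np.pad(mask, r)  # zero (False) padding outside the image
    out = np.zeros((h, w), dtype=bool)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out |= padded[dy:dy + h, dx:dx + w].astype(bool)
    return out

def margin_map(mask: np.ndarray, r: int = 2) -> np.ndarray:
    """Attention map covering a band of width ~2r around the tumor
    boundary: dilation minus erosion of the binary tumor mask.
    (Erosion is the complement of dilating the background.)"""
    mask = mask.astype(bool)
    grown = dilate(mask, r)
    shrunk = ~dilate(~mask, r)
    return (grown & ~shrunk).astype(np.float32)
```

For a 3x3 tumor region and r = 1, the resulting map is a one-pixel-wide ring straddling the boundary, which is the kind of region the consensus feature emphasizes.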
Results
We tested the performance of DG-CNN on our own dataset (1485 ultrasound images) and on a public dataset. The results show that DG-CNN can be applied to different network structures, such as VGG and ResNet, and improves their diagnostic performance to varying degrees. On our dataset, with a certain integrating mode, DG-CNN on the ResNet18 backbone improves breast cancer diagnosis by 2.17% in accuracy, 1.69% in sensitivity, and 2.64% in specificity, and raises the AUC by 0.0257.
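The reported accuracy, sensitivity, and specificity follow their standard confusion-matrix definitions (sensitivity is recall on the malignant class, specificity is recall on the benign class); AUC additionally requires predicted scores rather than hard labels, so it is omitted here. A minimal illustration with hypothetical labels (1 = malignant):

```python
def diagnosis_metrics(y_true, y_pred):
    """Accuracy, sensitivity and specificity from binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),  # recall on malignant cases
        "specificity": tn / (tn + fp),  # recall on benign cases
    }
```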
Conclusions
Experiments show that prior knowledge extracted from medical consensus (margin information) helps improve the performance of breast cancer diagnosis in ultrasound images. To the best of our knowledge, this is the first time that margin information has been used to improve the performance of deep neural networks in diagnosing breast cancer in ultrasound images. We also believe that effectively exploiting prior knowledge in other areas of computer-aided diagnosis would likewise improve diagnostic performance to a large extent.



Abstract:

Although using convolutional neural networks (CNNs) for computer-aided diagnosis (CAD) has made tremendous progress in the last few years, small medical datasets remain the major bottleneck in this area. To address this problem, researchers have started looking for information beyond the medical datasets. Previous efforts mainly leveraged information from natural images via transfer learning. More recent work focuses on integrating knowledge from medical practitioners, either by letting networks mimic how practitioners are trained and how they read images, or by using extra annotations. In this paper, we propose a scheme named Domain Guided-CNN (DG-CNN) to incorporate the margin information, a feature described in the consensus for radiologists to diagnose cancer in breast ultrasound (BUS) images. In DG-CNN, attention maps that highlight margin areas of tumors are first generated and then incorporated into the networks via different approaches. We have tested the performance of DG-CNN on our own dataset (including 1485 ultrasound images) and on a public dataset. The results show that DG-CNN can be applied to different network structures like VGG and ResNet to improve their performance. For example, experimental results on our dataset show that with a certain integrating mode, the improvement of using DG-CNN over a baseline network structure ResNet18 is 2.17% in accuracy, 1.69% in sensitivity, 2.64% in specificity and 2.57% in AUC (Area Under Curve). To the best of our knowledge, this is the first time that the margin information is utilized to improve the performance of deep neural networks in diagnosing breast cancer in BUS images.
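The abstract leaves the incorporation approaches unspecified. Two plausible direct-fusion sketches (the function names and the weighting factor `alpha` are our assumptions, not the paper's method) are stacking the attention map as an extra input channel, or reweighting image pixels before they enter the network:

```python
import numpy as np

def fuse_as_channel(image: np.ndarray, attention: np.ndarray) -> np.ndarray:
    """Stack a grayscale image and its margin attention map into a
    2-channel array that a CNN's first layer could consume."""
    return np.stack([image, attention], axis=0)

def fuse_by_reweighting(image: np.ndarray, attention: np.ndarray,
                        alpha: float = 0.5) -> np.ndarray:
    """Emphasize margin pixels by scaling them up before the network."""
    att = attention / (attention.max() + 1e-8)  # normalize to [0, 1]
    return image * (1.0 + alpha * att)
```

The multi-task variant mentioned in the paper would instead predict the attention map as an auxiliary output alongside the diagnosis label.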

Key words: medical consensus, domain knowledge, breast cancer diagnosis, margin map, deep neural network
