Journal of Computer Science and Technology ›› 2022, Vol. 37 ›› Issue (2): 277-294. DOI: 10.1007/s11390-020-0192-0
Special Topics: Artificial Intelligence and Pattern Recognition; Computer Graphics and Multimedia
Xiao-Zheng Xie1 (解晓政), Jian-Wei Niu1,2 (牛建伟), Senior Member, IEEE, Xue-Feng Liu1,* (刘雪峰), Qing-Feng Li2 (李青锋), Yong Wang3 (王勇), Jie Han3 (韩洁), and Shaojie Tang4 (唐少杰), Member, IEEE
Background
Thanks to the rapid development of deep learning, computer-aided diagnosis based on deep learning, and convolutional neural networks (CNNs) in particular, has made great progress over the past few years. However, small-scale medical datasets remain the main bottleneck in this field. To address this problem, researchers have begun to seek auxiliary information beyond the medical datasets themselves. Earlier work mainly exploited information from natural images via transfer learning. More recent studies instead incorporate the prior knowledge of medical practitioners, for example by having the network mimic how physicians are trained and how they read images, or by using extra annotations provided by physicians. Introducing such information has substantially improved the diagnostic performance of these networks.
Objective
We aim to identify and exploit another kind of prior knowledge, and to apply it to computer-aided diagnosis of breast cancer in ultrasound images. Specifically, we study how this prior knowledge can be represented, how it can be incorporated into a convolutional neural network, and whether the resulting network achieves better diagnostic performance.
Methods
In this paper, we propose a scheme called Domain Guided-CNN (DG-CNN) to incorporate medical prior knowledge, chiefly the boundary information of lesions, into computer-aided diagnosis of breast cancer based on ultrasound images. As one of the features described in the consensus that radiologists follow when diagnosing breast cancer in ultrasound images, boundary information plays a crucial role in the final diagnosis. In DG-CNN, we first generate attention maps that describe the tumor boundary region and then merge them into the network in different ways. Specifically, we design three kinds of boundary attention maps, each generated in a different way, and four incorporation methods: three direct fusion modes and one multi-task learning mode.
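As a rough illustration of the direct-fusion idea (a minimal sketch, not the authors' implementation: the binary lesion mask, band width, and fusion weight below are all assumptions), the following builds a boundary attention map from a lesion mask and re-weights an image or feature map with it:

```python
import numpy as np

def dilate(mask: np.ndarray, r: int) -> np.ndarray:
    """Binary dilation by a (2r+1)x(2r+1) square, via shifted maxima."""
    out = mask.copy()
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out = np.maximum(out, np.roll(np.roll(mask, dy, axis=0), dx, axis=1))
    return out

def boundary_attention(mask: np.ndarray, width: int = 1) -> np.ndarray:
    """Attention map equal to 1 in a band around the lesion boundary, 0 elsewhere."""
    erode = 1 - dilate(1 - mask, width)   # binary erosion of the lesion
    return dilate(mask, width) - erode    # dilation minus erosion = boundary band

def direct_fusion(x: np.ndarray, attn: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """One plausible direct fusion mode: residual re-weighting x * (1 + alpha * A)."""
    return x * (1.0 + alpha * attn)

# Toy example: a 2x2 lesion inside an 8x8 image.
mask = np.zeros((8, 8))
mask[3:5, 3:5] = 1
attn = boundary_attention(mask, width=1)
fused = direct_fusion(np.ones((8, 8)), attn)
```

The residual form leaves regions far from the boundary unchanged while amplifying responses in the boundary band; the multi-task alternative would instead predict the attention map as an auxiliary output.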
Results
We tested the performance of DG-CNN on our own dataset (1,485 ultrasound images) and on a public dataset. The results show that DG-CNN can be applied to different network structures, such as VGG and ResNet, and improves their diagnostic performance to varying degrees. In particular, on our dataset, with one specific incorporation mode, DG-CNN on the ResNet18 backbone improves breast cancer diagnosis accuracy by 2.17%, sensitivity by 1.69%, specificity by 2.64%, and AUC by 0.0257.
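For reference, the four reported metrics can be computed as follows (a generic numpy sketch with made-up labels and scores, unrelated to the paper's data; the AUC uses the Mann-Whitney rank-sum formulation and assumes untied scores):

```python
import numpy as np

def diagnosis_metrics(y_true, scores, thresh=0.5):
    """Accuracy, sensitivity, specificity and AUC for a binary diagnosis task."""
    y_true = np.asarray(y_true)
    pred = (np.asarray(scores) >= thresh).astype(int)
    tp = int(np.sum((pred == 1) & (y_true == 1)))
    tn = int(np.sum((pred == 0) & (y_true == 0)))
    fp = int(np.sum((pred == 1) & (y_true == 0)))
    fn = int(np.sum((pred == 0) & (y_true == 1)))
    acc = (tp + tn) / y_true.size
    sens = tp / (tp + fn)   # recall on malignant (positive) cases
    spec = tn / (tn + fp)   # recall on benign (negative) cases
    # AUC via the Mann-Whitney U statistic (valid when scores are untied).
    ranks = np.empty(y_true.size)
    ranks[np.argsort(scores)] = np.arange(1, y_true.size + 1)
    n_pos = int(y_true.sum())
    n_neg = y_true.size - n_pos
    auc = (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
    return acc, sens, spec, auc

acc, sens, spec, auc = diagnosis_metrics([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
```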
Conclusion
The experiments show that prior knowledge (boundary information) extracted from medical consensus helps improve the diagnosis of breast cancer in ultrasound images. To the best of our knowledge, this is the first time that boundary information has been used to improve the performance of deep neural networks in diagnosing breast cancer from ultrasound images. We also believe that effectively exploiting prior knowledge in other computer-aided diagnosis domains would likewise improve their diagnostic performance.