Journal of Computer Science and Technology ›› 2021, Vol. 36 ›› Issue (2): 347-360. DOI: 10.1007/s11390-021-0849-3

Special Topic: Emerging Areas


CytoBrain: Cervical Cancer Screening System Based on Deep Learning Technology

Hua Chen1, Juan Liu1,*, Senior Member, CCF, Qing-Man Wen1, Zhi-Qun Zuo1, Jia-Sheng Liu1, Jing Feng1, Bao-Chuan Pang2, and Di Xiao2        

  1. Institute of Artificial Intelligence, School of Computer Science, Wuhan University, Wuhan 430072, China;
  2. Landing Artificial Intelligence Center for Pathological Diagnosis, Wuhan University, Wuhan 430072, China
  • Received: 2020-07-30; Revised: 2021-02-17; Online: 2021-03-05; Published: 2021-04-01
  • Contact: Juan Liu, E-mail: liujuan@whu.edu.cn
  • About author:Hua Chen is a Ph.D. candidate in the School of Computer Science, Wuhan University, Wuhan. His current research interests include deep learning, medical image processing, and image classification and segmentation.
  • Supported by:
    This work was supported by the Major Projects of Technological Innovation in Hubei Province of China under Grant Nos. 2019AEA170 and 2019ACA161, the Frontier Projects of Wuhan for Application Foundation under Grant No. 2019010701011381, and the Translational Medicine and Interdisciplinary Research Joint Fund of Zhongnan Hospital of Wuhan University under Grant No. ZNJC201919.

1. Background (Context)
Cervical cancer is one of the malignant tumors with the highest incidence and mortality among women. Early detection and treatment can significantly reduce mortality. Cytological examination based on cervical smears is one of the most commonly used clinical methods for detecting cervical cancer. However, the traditional diagnostic workflow, in which professional cytopathologists examine smears under a microscope, suffers from high labor cost, strong subjectivity, and low efficiency, and is therefore unsuitable for large-scale screening of the general population.
2. Objective
This study aims to use computer technology to develop an automatic cervical cancer screening system that is efficient in diagnosis, low in labor cost, objective in its results, and applicable to large-scale screening scenarios.
3. Method
Based on image processing and deep learning techniques, we develop CytoBrain, an efficient automatic cervical cytology screening system suitable for large populations, which has been deployed in the cloud. The system consists of three main modules: (1) a module for automatically locating and segmenting cells in cervical smear whole slide images (WSIs); (2) an automatic cell classification module; and (3) a module for automatic WSI diagnosis and visualized interaction based on the cell classification results.
The automatic cervical cell localization and segmentation module is responsible for quickly locating and extracting cervical cell images from a WSI. Since a single WSI typically contains tens of thousands of cell images, we propose a simple and efficient automatic cell localization and extraction method to improve the overall performance of CytoBrain. Although the nuclei of different cells are relatively consistent in size and shape, cervical cells themselves vary greatly in size and morphology; segmenting whole cells including the cytoplasm is therefore time-consuming and not very accurate, and is hard to deploy in a practical system. Based on the biomedical consensus that the features of cancerous change are mainly reflected in the nucleus, we propose a fast cell extraction method based on nucleus localization and segmentation. The method first locates nuclei quickly with the SURF keypoint detection algorithm, then obtains complete nucleus regions with the OTSU algorithm and morphological operations; finally, according to a statistical prior relating the image magnification to the nucleus size, it crops a rectangular region of a specific size centered on the nucleus keypoint as the cell image. A single cell image obtained in this way is guaranteed to contain a complete nucleus but not necessarily the complete cell. Nevertheless, compared with more sophisticated cell segmentation algorithms, the method is more efficient and has almost no impact on the cell classification results, which makes it better suited to practical application requirements.
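To make the extraction pipeline concrete, the following is a minimal Python/OpenCV sketch of the nucleus-centred cropping described above. It assumes an OpenCV build that includes the non-free contrib modules (for SURF); the Hessian threshold, patch size, and morphology kernel size are illustrative assumptions, not the settings used in CytoBrain.

```python
# Minimal sketch of nucleus-centred cell extraction (illustrative parameters).
# Requires an OpenCV build with the non-free contrib modules for SURF.
import cv2
import numpy as np

def extract_cell_patches(wsi_tile, patch_size=224, hessian_thresh=400):
    """Locate nuclei with SURF, refine them with Otsu + morphology,
    and crop a fixed-size patch centred on each nucleus keypoint."""
    gray = cv2.cvtColor(wsi_tile, cv2.COLOR_BGR2GRAY)

    # 1) Fast nucleus localization via SURF keypoints (dark, blob-like regions).
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_thresh)
    keypoints = surf.detect(gray, None)

    # 2) Otsu thresholding (inverted: nuclei are dark) plus morphological
    #    opening/closing to obtain complete nucleus regions.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # 3) Keep keypoints that fall on nucleus regions and crop a fixed rectangle
    #    around each; the patch size reflects the magnification/nucleus-size prior.
    half = patch_size // 2
    h, w = gray.shape
    patches = []
    for kp in keypoints:
        x, y = int(kp.pt[0]), int(kp.pt[1])
        if mask[y, x] == 0:          # keypoint not on a nucleus region
            continue
        x0, y0 = max(x - half, 0), max(y - half, 0)
        x1, y1 = min(x + half, w), min(y + half, h)
        patches.append(wsi_tile[y0:y1, x0:x1].copy())
    return patches
```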
The automatic cell classification module is the key part of CytoBrain; it uses a classification model to automatically classify each cervical cell. Because deep learning algorithms can automatically learn representative features of the data while building the classifier, we adopt a deep learning model for cervical cell classification. Since both the accuracy and the execution efficiency of the classifier are critical for large-scale cervical cancer screening, we propose a compact VGG network, CompactVGG, based on the VGG architecture, which offers a good balance of efficiency and accuracy. The model consists of 10 convolutional layers, 4 max-pooling layers, and 2 fully connected layers. Compared with other VGG models, CompactVGG has a smaller width and depth and thus a lower computational cost. The essence of model training is to learn the distribution of the training data and ultimately produce outputs corresponding to the inputs; if the data distributions of different layers differ greatly, more training epochs are usually needed for the model to reach the same accuracy. To speed up training convergence, we introduce a batch normalization operation after each convolutional layer to reduce the distribution differences between the outputs of different layers. Meanwhile, to prevent overfitting during training, in addition to keeping the early stopping strategy, we introduce an L2 regularization term into the loss function, further reducing the risk of overfitting and thereby improving the classification accuracy and robustness of the model, which makes it better suited to large-scale cervical cancer screening in practice.
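The PyTorch sketch below illustrates a CompactVGG-style network with 10 convolutional layers (each followed by batch normalization), 4 max-pooling layers, and 2 fully connected layers, with L2 regularization applied through the optimizer's weight decay. The channel widths, kernel sizes, input resolution, and hyper-parameter values are assumptions for illustration only, since the exact configuration is not given here.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, n_convs):
    """n_convs 3x3 convolutions, each followed by batch normalization and ReLU,
    then one 2x2 max-pooling layer."""
    layers = []
    for i in range(n_convs):
        layers += [
            nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),   # reduces distribution shift between layers
            nn.ReLU(inplace=True),
        ]
    layers.append(nn.MaxPool2d(2))
    return layers

class CompactVGG(nn.Module):
    """10 convolutional layers, 4 max-pooling layers, 2 fully connected layers."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            *conv_block(3, 32, 2),     # 2 convs + pool
            *conv_block(32, 64, 2),    # 2 convs + pool
            *conv_block(64, 128, 3),   # 3 convs + pool
            *conv_block(128, 128, 3),  # 3 convs + pool -> 10 convs, 4 pools in total
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(256),        # FC layer 1; input size inferred on first call
            nn.ReLU(inplace=True),
            nn.Linear(256, num_classes),  # FC layer 2: positive / negative / junk
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = CompactVGG(num_classes=3)
model(torch.zeros(1, 3, 224, 224))     # dry run to materialize the lazy FC layer
# L2 regularization enters through the optimizer's weight decay term.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9, weight_decay=1e-4)
```

Early stopping would then be handled in the training loop by monitoring validation loss, complementing the weight decay shown above.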
The WSI-aided diagnosis and visualized interaction module first performs automatic diagnosis of a WSI based on the classification results of all cells in it, combined with clinical experience; it then provides a visual interactive interface for users to review the results and, if necessary, modify them. This module mainly concerns software development, so we do not discuss its implementation details. In general, the module provides two display modes, cell display and WSI display. Users can view magnified images of cells or of the WSI, review the classification results, and correct them if needed.
Given that publicly available cervical cell image datasets are few and small, we collected, in compliance with medical ethics requirements and through a retrospective study, cervical cytology WSIs from 2 312 subjects, obtained cell images with the proposed automatic cell localization and extraction algorithm, and built our own cervical cell dataset. Each cell image in the dataset was independently annotated as positive, negative, or junk by three senior physicians, and the label agreed on by the majority of the experts was taken as the final label of the cell image.
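As an illustration of the labeling rule, a minimal sketch of the majority vote over three independent annotations is given below; the function name and the tie-handling behavior are hypothetical.

```python
# Majority-vote labeling over three independent annotations per cell image.
from collections import Counter

def majority_label(annotations):
    """annotations: list of three labels from {'positive', 'negative', 'junk'}."""
    label, count = Counter(annotations).most_common(1)[0]
    return label if count >= 2 else None   # no majority -> leave for adjudication

print(majority_label(['positive', 'positive', 'junk']))  # -> 'positive'
```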
4. Results and Findings
We built a cervical cell image dataset from 2 312 subjects, containing 198 952 cell images in total, of which 60 238 are positive, 25 001 are negative, and 113 713 are junk. To the best of our knowledge, it is the largest dataset of its kind to date.
We conducted comparative experiments to evaluate the time performance and classification accuracy of CompactVGG on our own dataset and on two public datasets, Herlev and SIPaKMeD. Comparison with VGG11, the most time-efficient network in the VGG family, shows that CompactVGG is faster than VGG11 in both training and testing on all three datasets. Under the same experimental environment and settings, the average training time per epoch of CompactVGG is about 58.62% (our own dataset), 73.63% (Herlev), and 72.15% (SIPaKMeD) of that of VGG11, and the trained CompactVGG model also takes less time than VGG11 to classify each cell. Moreover, comparison of the classification performance of CompactVGG with three other representative deep learning models on the three datasets shows that CompactVGG has a clear advantage. On our own dataset and the Herlev dataset, CompactVGG achieves the highest values on all five performance metrics. On the SIPaKMeD dataset, all methods achieve good overall performance with F1 scores above 0.98; however, in terms of the per-class accuracy of the five cell classes in this dataset, the other methods fluctuate considerably, whereas CompactVGG consistently ranks in the top two (highest accuracy on three classes and second highest on the other two), indicating that CompactVGG is more robust than the other models. These results show that the CompactVGG-based cervical cell classification model is better suited to large-scale cervical cancer screening in practice.
5. Conclusions
The large-scale cervical cell dataset built in this work helps to construct cervical cell classification models with stronger generalization ability that are usable in practice, and it can also serve as a benchmark dataset for researchers in related fields. The proposed cell localization and extraction method is simple, efficient, and highly practical, and the proposed CompactVGG model offers both high execution efficiency and good classification performance. The CytoBrain system, which integrates the fast cell extraction method and the CompactVGG model, can provide cytopathologists with fast, effective, and low-cost aided diagnosis of cervical cancer and meet the needs of large-scale cervical cancer screening. Since the cells in the collected dataset are currently divided into only three classes (positive, negative, and junk), CytoBrain cannot yet provide a finer-grained classification of positive results; in future work we will refine the system and the dataset toward fine-grained classification so that it can meet more clinical application needs.


Abstract: Identification of abnormal cervical cells is a significant problem in computer-aided diagnosis of cervical cancer. In this study, we develop an artificial intelligence (AI) system, named CytoBrain, to automatically screen abnormal cervical cells to help facilitate the subsequent clinical diagnosis of the subjects. The system consists of three main modules: 1) the cervical cell segmentation module which is responsible for efficiently extracting cell images in a whole slide image (WSI); 2) the cell classification module based on a compact visual geometry group (VGG) network called CompactVGG which is the key part of the system and is used for building the cell classifier; 3) the visualized human-aided diagnosis module which can automatically diagnose a WSI based on the classification results of cells in it, and provide two visual display modes for users to review and modify. For model construction and validation, we have developed a dataset containing 198 952 cervical cell images (60 238 positive, 25 001 negative, and 113 713 junk) from samples of 2 312 adult women. Since CompactVGG is the key part of CytoBrain, we conduct comparison experiments to evaluate its time and classification performance on our developed dataset and two public datasets separately. The comparison results with VGG11, the most efficient one in the family of VGG networks, show that CompactVGG takes less time for either model training or sample testing. Compared with three sophisticated deep learning models, CompactVGG consistently achieves the best classification performance. The results illustrate that the system based on CompactVGG is efficient and effective and can support large-scale cervical cancer screening.

Key words: cervical cancer screening, visual geometry group (VGG), deep learning, artificial intelligence (AI), classification
