
A Retinal Vessel Segmentation Network Based on Pre-Activated Convolution Residuals and a Triple Attention Mechanism

PCRTAM-Net: A Novel Pre-Activated Convolution Residual and Triple Attention Mechanism Network for Retinal Vessel Segmentation

  • Abstract:
    Background Morphological features of blood vessels in retinal fundus images can be used to diagnose ophthalmic diseases as well as some cardiovascular and cerebrovascular diseases. Vessel segmentation methods based on artificial intelligence can automate the segmentation of vessels in fundus images, assisting physicians in clinical diagnosis and reducing their workload. However, because the retinal vascular structure is complex and fine vessels tend to break during segmentation, both the performance and the generalization ability of existing segmentation methods still need improvement.
    Objective This paper aims to propose a new retinal vessel segmentation model that overcomes the difficulties of segmenting retinal vessel images, segments more continuous fine vessels, and further improves segmentation performance and generalization ability. The proposed end-to-end automated segmentation method has both research significance and clinical application value.
    Methods Building on the proposed pre-activated dropout convolution residual method (Res-PDC), the residual atrous convolution spatial pyramid method (Res-ACSP), and a triple attention mechanism (TAM), this paper proposes a new retinal vessel segmentation model, PCRTAM-Net. Performance tests, ablation studies, and generalization experiments on four fundus datasets verify the effectiveness of the proposed PCRTAM-Net.
    Results The proposed PCRTAM-Net model achieves leading performance, with accuracy above 97% on all four public datasets. It also performs well in cross-validation on the DRIVE and STARE datasets, with accuracy above 96.7% in each case, demonstrating the robustness of the method.
    Conclusion Based on pre-activated convolution residuals and a triple attention mechanism, this paper proposes a retinal vessel segmentation model, PCRTAM-Net, which can segment more continuous fine vessels in retinal images. PCRTAM-Net shows good performance and robustness; future work may explore making the model more lightweight so that it can be applied in clinical practice.

     

    Abstract: Retinal images play an essential role in the early diagnosis of ophthalmic diseases. Automatic segmentation of retinal vessels in color fundus images is challenging due to the morphological differences between the retinal vessels and the low-contrast background. At the same time, automated models struggle to capture representative and discriminative retinal vascular features. To fully utilize the structural information of the retinal blood vessels, we propose a novel deep learning network called the Pre-Activated Convolution Residual and Triple Attention Mechanism Network (PCRTAM-Net). PCRTAM-Net uses the pre-activated dropout convolution residual method to improve the feature learning ability of the network. In addition, the residual atrous convolution spatial pyramid is integrated into both ends of the network encoder to extract multiscale information and improve the flow of blood vessel information. A triple attention mechanism is proposed to extract the structural information between vessel contexts and to learn long-range feature dependencies. We evaluate the proposed PCRTAM-Net on four publicly available datasets: DRIVE, CHASE_DB1, STARE, and HRF. Our model achieves state-of-the-art performance, with ACC of 97.10%, 97.70%, 97.68%, and 97.14% and F1 scores of 83.05%, 82.26%, 84.64%, and 81.16%, respectively.
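The abstract does not give the internal details of the Res-ACSP module, but the core idea it builds on, atrous (dilated) convolution at several rates to capture multiscale context, can be illustrated with a minimal, self-contained 1-D sketch. The function names and the toy spatial pyramid below are illustrative assumptions, not the paper's implementation:

```python
def dilated_conv1d(signal, kernel, dilation):
    """1-D convolution with an atrous (dilation) rate: kernel taps are
    spaced `dilation` samples apart, enlarging the receptive field
    without adding parameters."""
    k = len(kernel)
    span = (k - 1) * dilation + 1  # effective receptive field
    return [
        sum(kernel[j] * signal[i + j * dilation] for j in range(k))
        for i in range(len(signal) - span + 1)
    ]

def atrous_pyramid(signal, kernel, rates=(1, 2, 4)):
    """Toy spatial pyramid: apply the same kernel at several dilation
    rates and collect the multiscale responses (hypothetical helper,
    not the paper's Res-ACSP)."""
    return {r: dilated_conv1d(signal, kernel, r) for r in rates}

# Two isolated "vessel" pixels; larger rates see wider context.
features = atrous_pyramid([0, 0, 1, 0, 0, 0, 1, 0, 0], [1, 1, 1])
```

At rate 1 the kernel only sums immediate neighbors, while at rate 4 a single output position aggregates samples nine positions apart, which is how an atrous spatial pyramid gathers context at several scales from the same input.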

     
