Paper Title
Pairwise Relation Learning for Semi-supervised Gland Segmentation
Paper Authors
Paper Abstract
Accurate and automated gland segmentation on histology tissue images is an essential but challenging task in the computer-aided diagnosis of adenocarcinoma. Despite their prevalence, deep learning models always require a large number of densely annotated training images, which are difficult to obtain due to the extensive labor and expert costs associated with histology image annotation. In this paper, we propose the pairwise relation-based semi-supervised (PRS^2) model for gland segmentation on histology images. This model consists of a segmentation network (S-Net) and a pairwise relation network (PR-Net). The S-Net is trained on labeled data for segmentation, while PR-Net is trained on both labeled and unlabeled data in an unsupervised way to enhance its image representation ability by exploiting the semantic consistency between each pair of images in the feature space. Since both networks share their encoders, the image representation ability learned by PR-Net can be transferred to S-Net to improve its segmentation performance. We also design an object-level Dice loss to address the issues caused by touching glands and combine it with two other loss functions for S-Net. We evaluated our model against five recent methods on the GlaS dataset and three recent methods on the CRAG dataset. Our results not only demonstrate the effectiveness of the proposed PR-Net and object-level Dice loss, but also indicate that our PRS^2 model achieves state-of-the-art gland segmentation performance on both benchmarks.
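The abstract does not give the exact formulation of the object-level Dice loss; as a rough illustration of the idea, the sketch below implements an object-level Dice *metric* in the spirit of the GlaS challenge evaluation: each ground-truth gland is matched to its best-overlapping predicted object (and vice versa), and per-object Dice scores are averaged with area weights, so that merged or split touching glands are penalized even when pixel-level Dice is high. The function names and the area-weighted matching are assumptions for this sketch, not the paper's exact definition.

```python
import numpy as np
from scipy import ndimage


def dice(a, b, eps=1e-7):
    """Pixel-level Dice coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return (2.0 * inter + eps) / (a.sum() + b.sum() + eps)


def object_level_dice(pred, gt):
    """Simplified object-level Dice in the spirit of the GlaS metric.

    For each connected object on one side, find the connected object
    on the other side with the largest overlap, compute their Dice,
    and average with weights proportional to object area. The final
    score symmetrizes over both directions.
    """
    def one_side(src, dst):
        src_lab, n = ndimage.label(src)          # label objects on src side
        dst_lab, _ = ndimage.label(dst)          # label objects on dst side
        if n == 0:
            return 0.0
        total_area = (src_lab > 0).sum()
        score = 0.0
        for i in range(1, n + 1):
            obj = src_lab == i
            # labels of dst objects overlapping this src object
            overlap = dst_lab[obj]
            overlap = overlap[overlap > 0]
            if overlap.size:
                # counterpart = dst object with the largest overlap
                match = dst_lab == np.bincount(overlap).argmax()
            else:
                match = np.zeros_like(obj)       # no counterpart found
            score += (obj.sum() / total_area) * dice(obj, match)
        return score

    return 0.5 * (one_side(gt, pred) + one_side(pred, gt))
```

A perfectly segmented mask scores 1.0, while a prediction whose objects overlap no ground-truth gland scores near 0; turning such a metric into a differentiable training loss (as the paper does) requires a soft relaxation of the object matching.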