Paper Title
Semi-supervised Semantic Segmentation with Mutual Knowledge Distillation
Paper Authors
Paper Abstract
Consistency regularization has been widely studied in recent semi-supervised semantic segmentation methods and has achieved promising performance. In this work, we propose a new consistency regularization framework, termed mutual knowledge distillation (MKD), combined with data and feature augmentation. We introduce two auxiliary mean-teacher models based on consistency regularization. More specifically, the pseudo-labels generated by the mean teacher of one branch supervise the student network of the other branch, achieving mutual knowledge distillation between the two branches. In addition to image-level strong and weak augmentation, we also employ feature augmentation, which provides the student networks with diverse sources of knowledge to distill from and thus significantly increases the diversity of the training samples. Experiments on public benchmarks show that our framework outperforms previous state-of-the-art (SOTA) methods under various semi-supervised settings. Code is available at semi-mmseg.
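To make the two-branch scheme above concrete, the following is a minimal PyTorch-style sketch of the unsupervised MKD step, assuming two student networks, each tracked by an exponential-moving-average (EMA) mean teacher. The helper names (weak_aug, strong_aug, ema_update, mkd_step) and the exact loss form are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn.functional as F

def ema_update(teacher, student, decay=0.99):
    # Mean teacher: exponential moving average of the student's weights.
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(decay).add_(s, alpha=1.0 - decay)

def mkd_step(student_a, student_b, teacher_a, teacher_b,
             unlabeled, weak_aug, strong_aug):
    # Each mean teacher pseudo-labels weakly augmented images; the pseudo-labels
    # then supervise the *other* branch's student on strongly augmented views,
    # giving the cross-branch (mutual) knowledge distillation described above.
    weak, strong = weak_aug(unlabeled), strong_aug(unlabeled)
    with torch.no_grad():
        pseudo_a = teacher_a(weak).argmax(dim=1)  # hard labels from branch A's teacher
        pseudo_b = teacher_b(weak).argmax(dim=1)  # hard labels from branch B's teacher
    loss = F.cross_entropy(student_b(strong), pseudo_a) \
         + F.cross_entropy(student_a(strong), pseudo_b)
    return loss

In a training loop, this loss would be added to the supervised loss on labeled data, and ema_update would be called on both teachers after each optimizer step; the feature augmentation mentioned in the abstract could be applied inside the student networks in an analogous way.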