Paper Title
Seesaw Loss for Long-Tailed Instance Segmentation
Paper Authors
Paper Abstract
Instance segmentation has witnessed remarkable progress on class-balanced benchmarks. However, existing methods fail to perform as accurately in real-world scenarios, where the category distribution of objects naturally comes with a long tail. Instances of head classes dominate a long-tailed dataset and serve as negative samples for tail categories. The overwhelming gradients of negative samples on tail classes lead to a biased learning process for classifiers. Consequently, objects of tail categories are more likely to be misclassified as background or as head categories. To tackle this problem, we propose Seesaw Loss, which dynamically re-balances the gradients of positive and negative samples for each category with two complementary factors, i.e., a mitigation factor and a compensation factor. The mitigation factor reduces the punishment to tail categories w.r.t. the ratio of cumulative training instances between different categories. Meanwhile, the compensation factor increases the penalty on misclassified instances to avoid false positives of tail categories. We conduct extensive experiments on Seesaw Loss with mainstream frameworks and different data sampling strategies. With a simple end-to-end training pipeline, Seesaw Loss obtains significant gains over Cross-Entropy Loss and achieves state-of-the-art performance on the LVIS dataset without bells and whistles. Code is available at https://github.com/open-mmlab/mmdetection.
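The re-balancing described in the abstract can be made concrete with a short sketch. Below is a minimal PyTorch illustration of the two factors, assuming cumulative per-class instance counts are tracked online during training; the function and argument names (seesaw_loss, cum_counts) are illustrative, not the official API, and the exponents p and q are hyper-parameters (the paper reports p=0.8 and q=2.0 as defaults). The reference implementation lives in the mmdetection repository linked above.

```python
# Minimal sketch of Seesaw Loss, assuming a PyTorch setting.
# Not the official implementation; see mmdetection for that.
import torch
import torch.nn.functional as F

def seesaw_loss(logits, labels, cum_counts, p=0.8, q=2.0, eps=1e-6):
    """Seesaw Loss sketch for a batch of classification logits.

    logits:     (B, C) raw class scores.
    labels:     (B,)   ground-truth class indices.
    cum_counts: (C,)   cumulative number of training instances per class,
                       assumed to be updated online as training proceeds.
    """
    num_classes = logits.size(1)
    onehot = F.one_hot(labels, num_classes).float()          # (B, C)

    # Mitigation factor M_ij = min(1, (N_j / N_i)^p): shrink the negative
    # gradient a sample of a frequent class i puts on a rarer class j.
    counts = cum_counts.clamp(min=1).float()
    ratio = counts[None, :] / counts[:, None]                # (C, C), N_j / N_i
    mitigation = ratio.clamp(max=1).pow(p)                   # (C, C)
    m = mitigation[labels]                                   # (B, C), row = gt class

    # Compensation factor C_ij = max(1, (sigma_j / sigma_i)^q): re-amplify the
    # penalty on any class currently scored higher than the ground truth,
    # guarding against false positives on tail categories.
    probs = logits.softmax(dim=1).detach()
    gt_probs = probs.gather(1, labels[:, None])              # (B, 1)
    comp = (probs / gt_probs.clamp(min=eps)).clamp(min=1).pow(q)

    seesaw_weights = m * comp                                # (B, C)

    # Fold the weights into the logits of negative classes only; the
    # ground-truth logit is left untouched (a weight of 1 adds log(1) = 0).
    weighted_logits = logits + (seesaw_weights + eps).log() * (1 - onehot)
    return F.cross_entropy(weighted_logits, labels)

# Example: 5 classes with heavily imbalanced cumulative counts.
logits = torch.randn(8, 5)
labels = torch.randint(0, 5, (8,))
cum_counts = torch.tensor([10000.0, 3000.0, 500.0, 60.0, 5.0])
print(seesaw_loss(logits, labels, cum_counts))
```

The two factors pull in opposite directions, hence the name: the mitigation factor only ever shrinks the negative gradients that head-class samples impose on rarer classes, while the compensation factor re-grows the penalty whenever a negative class is scored above the ground truth, keeping the classifier from trading false positives for the reduced suppression.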