Paper Title

CTRL: Clustering Training Losses for Label Error Detection

Paper Authors

Chang Yue, Niraj K. Jha

Paper Abstract

In supervised machine learning, use of correct labels is extremely important to ensure high accuracy. Unfortunately, most datasets contain corrupted labels. Machine learning models trained on such datasets do not generalize well. Thus, detecting their label errors can significantly increase their efficacy. We propose a novel framework, called CTRL (Clustering TRaining Losses for label error detection), to detect label errors in multi-class datasets. It detects label errors in two steps based on the observation that models learn clean and noisy labels in different ways. First, we train a neural network using the noisy training dataset and obtain the loss curve for each sample. Then, we apply clustering algorithms to the training losses to group samples into two categories: cleanly-labeled and noisily-labeled. After label error detection, we remove samples with noisy labels and retrain the model. Our experimental results demonstrate state-of-the-art error detection accuracy on both image (CIFAR-10 and CIFAR-100) and tabular datasets under simulated noise. We also use a theoretical analysis to provide insights into why CTRL performs so well.
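The two-step procedure described above — record each sample's training-loss curve, then cluster the curves into a clean and a noisy group — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `detect_label_errors`, the use of k-means on raw loss trajectories, and the synthetic curves are all assumptions for demonstration; CTRL's actual clustering and feature choices may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

def detect_label_errors(loss_curves):
    """Cluster per-sample training-loss curves into two groups and flag
    the higher-loss cluster as noisily labeled.

    loss_curves: array of shape (n_samples, n_epochs), the per-epoch
    training loss recorded for each sample.
    Returns a boolean mask marking suspected label errors.
    """
    km = KMeans(n_clusters=2, n_init=10, random_state=0)
    assignments = km.fit_predict(loss_curves)
    # Noisy labels are memorized slowly, so their loss curves stay higher;
    # treat the cluster with the larger mean loss as the noisy one.
    cluster_means = [loss_curves[assignments == c].mean() for c in (0, 1)]
    noisy_cluster = int(np.argmax(cluster_means))
    return assignments == noisy_cluster

# Synthetic demo (hypothetical data): clean samples converge quickly,
# noisy samples keep a high loss for many epochs.
rng = np.random.default_rng(0)
epochs = np.arange(20)
clean = np.exp(-0.5 * epochs) + 0.05 * rng.random((80, 20))
noisy = 1.5 * np.exp(-0.05 * epochs) + 0.05 * rng.random((20, 20))
curves = np.vstack([clean, noisy])
mask = detect_label_errors(curves)
print(mask[80:].mean())  # fraction of the truly noisy samples flagged
```

After this detection step, the flagged samples would be removed and the model retrained on the remaining data, as the abstract describes.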
