Paper Title
Initial Classifier Weights Replay for Memoryless Class Incremental Learning
Paper Authors
Paper Abstract
Incremental Learning (IL) is useful when artificial systems need to deal with streams of data and do not have access to all data at all times. The most challenging setting requires a constant complexity of the deep model and incremental model updates without access to a bounded memory of past data. In this setting, the representations of past classes are strongly affected by catastrophic forgetting. To mitigate its negative effect, an adapted fine-tuning procedure that includes knowledge distillation is usually deployed. We propose a different approach based on a vanilla fine-tuning backbone. It leverages the initial classifier weights, which provide a strong representation of past classes because they were trained with all data of those classes. However, the magnitude of classifier weights learned in different incremental states varies, and normalization is needed to handle all classes fairly. Normalization is performed by standardizing the initial classifier weights, which are assumed to be normally distributed. In addition, prediction scores are calibrated using state-level statistics to further improve classification fairness. We conduct a thorough evaluation on four public datasets in a memoryless incremental learning setting. Results show that our method outperforms existing techniques by a large margin on large-scale datasets.
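The abstract describes two normalization steps: standardizing the initial classifier weights (assumed normally distributed) and calibrating prediction scores with state-level statistics. The following is a minimal NumPy sketch of how such steps could look; the per-class z-scoring, the mean-magnitude rescaling used for state-level calibration, and the helper names standardize_weights and calibrate_scores are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def standardize_weights(W):
    # W: (n_classes, d) matrix of per-class weight vectors, each row
    # being the classifier weights a class had in its initial state.
    # Z-score each row independently so that weight magnitudes learned
    # in different incremental states become comparable.
    mu = W.mean(axis=1, keepdims=True)
    sigma = W.std(axis=1, keepdims=True) + 1e-8  # avoid division by zero
    return (W - mu) / sigma

def calibrate_scores(scores, state_ids):
    # scores: (n_classes,) raw logits for one sample.
    # state_ids: (n_classes,) index of the incremental state in which
    # each class was first learned. Each state's logits are divided by
    # that state's mean score magnitude — a simple stand-in for the
    # paper's state-level calibration statistics.
    calibrated = scores.astype(float).copy()
    for s in np.unique(state_ids):
        mask = state_ids == s
        calibrated[mask] /= np.abs(scores[mask]).mean() + 1e-8
    return calibrated

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.normal(size=(10, 64))            # 10 classes, 64-dim features
    W_std = standardize_weights(W)
    scores = W_std @ rng.normal(size=64)     # raw logits for one sample
    states = np.array([0] * 5 + [1] * 5)     # first 5 classes from state 0
    print(calibrate_scores(scores, states).argmax())
```

The design intent mirrored here is that replayed initial weights are trustworthy per class but not directly comparable across states, so a per-class standardization followed by a per-state score rescaling restores fairness at prediction time.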