Paper Title

Is Multi-Task Learning an Upper Bound for Continual Learning?

Paper Authors

Zihao Wu, Huy Tran, Hamed Pirsiavash, Soheil Kolouri

Paper Abstract

Continual and multi-task learning are common machine learning approaches to learning from multiple tasks. Existing works in the literature often treat multi-task learning as a sensible performance upper bound for various continual learning algorithms. While this assumption has been empirically verified on different continual learning benchmarks, it is not rigorously justified. Moreover, it is conceivable that, when learning from multiple tasks, a small subset of these tasks could act as adversarial tasks and reduce the overall learning performance in a multi-task setting. In contrast, continual learning approaches can avoid the performance drop caused by such adversarial tasks and preserve their performance on the remaining tasks, leading to better performance than a multi-task learner. This paper proposes a novel continual self-supervised learning setting, where each task corresponds to learning an invariant representation for a specific class of data augmentations. In this setting, we show that continual learning often beats multi-task learning on various benchmark datasets, including MNIST, CIFAR-10, and CIFAR-100.
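The setting described in the abstract can be made concrete with a small sketch: each "task" is invariance to one class of data augmentations, a continual learner visits these tasks one at a time, and the multi-task counterpart samples all augmentation classes jointly at every step. The encoder, the plain cosine-alignment loss, and the augmentation list below are illustrative assumptions for a PyTorch sketch, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch only: a real setup would use a SimCLR/BYOL-style
# objective to avoid representation collapse; the simple cosine-alignment
# loss here just keeps the example short.

def invariance_loss(z1, z2):
    """Encourage two augmented views of the same batch to map to similar representations."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    return -(z1 * z2).sum(dim=1).mean()

def train_continually(encoder, task_augmentations, loader, epochs_per_task=1, lr=1e-3):
    """Continual setting: visit one augmentation 'task' after another with a shared encoder."""
    opt = torch.optim.Adam(encoder.parameters(), lr=lr)
    for augment in task_augmentations:            # e.g. [crop, color_jitter, blur, ...]
        for _ in range(epochs_per_task):
            for x, _ in loader:
                z1, z2 = encoder(augment(x)), encoder(augment(x))  # two random views
                loss = invariance_loss(z1, z2)
                opt.zero_grad()
                loss.backward()
                opt.step()
    return encoder

def train_multitask(encoder, task_augmentations, loader, epochs=1, lr=1e-3):
    """Multi-task counterpart: every step samples a random augmentation task."""
    opt = torch.optim.Adam(encoder.parameters(), lr=lr)
    for _ in range(epochs):
        for x, _ in loader:
            idx = torch.randint(len(task_augmentations), (1,)).item()
            augment = task_augmentations[idx]
            z1, z2 = encoder(augment(x)), encoder(augment(x))
            loss = invariance_loss(z1, z2)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return encoder
```

Under this toy setup, the paper's comparison amounts to training the same encoder with `train_continually` versus `train_multitask` and evaluating the learned representations; the abstract's claim is that the sequential learner often matches or beats the joint learner, especially when some augmentation tasks interfere with the others.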
