Paper Title
Few-shot Learning with LSSVM Base Learner and Transductive Modules
Paper Authors
Paper Abstract
The performance of meta-learning approaches for few-shot learning generally depends on three aspects: features suitable for comparison, a classifier (base learner) suited to low-data scenarios, and valuable information from the samples to be classified. In this work, we make improvements to the last two aspects: 1) although there are many effective base learners, there is a trade-off between generalization performance and computational overhead, so we introduce the multi-class least squares support vector machine (LSSVM) as our base learner, which obtains better generalization than existing base learners with less computational overhead; 2) furthermore, to utilize the information in the query samples, we propose two simple and effective transductive modules which modify the support set using the query samples, i.e., adjusting the support samples based on the attention mechanism and adding the prototypes of the query set, together with their pseudo labels, to the support set as pseudo support samples. These two modules significantly improve few-shot classification accuracy, especially in the difficult 1-shot setting. Our model, denoted FSLSTM (Few-Shot learning with LSsvm base learner and Transductive Modules), achieves state-of-the-art performance on the miniImageNet and CIFAR-FS few-shot learning benchmarks.
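For concreteness, the sketch below illustrates two of the ingredients named in the abstract under stated assumptions: a multi-class LSSVM base learner fitted in closed form from its dual linear system (one-vs-rest ±1 targets, linear kernel), and the pseudo-prototype transductive module that appends per-class query prototypes with pseudo labels to the support set. This is a minimal sketch assuming PyTorch; the function names (`lssvm_solve`, `lssvm_predict`, `add_pseudo_prototypes`) and the regularization parameter `gamma` are illustrative rather than taken from the paper, and the attention-based support-adjustment module is not shown.

```python
import torch

def lssvm_solve(support_feats, support_labels, n_way, gamma=0.1):
    """Fit a multi-class LSSVM (one-vs-rest, linear kernel) in closed form.

    support_feats:  (n, d) support embeddings
    support_labels: (n,)   integer labels in [0, n_way)
    Returns the dual coefficients alpha (n, n_way) and biases b (n_way,).
    """
    n = support_feats.size(0)
    dev, dt = support_feats.device, support_feats.dtype
    K = support_feats @ support_feats.t()              # linear kernel, (n, n)
    # LSSVM dual linear system (ridge-regression-like form with +/-1 targets):
    #   [ 0   1^T           ] [ b     ]   [ 0 ]
    #   [ 1   K + I / gamma ] [ alpha ] = [ Y ]
    A = torch.zeros(n + 1, n + 1, device=dev, dtype=dt)
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + torch.eye(n, device=dev, dtype=dt) / gamma
    # One-vs-rest targets: +1 for the true class, -1 for the others.
    Y = torch.full((n, n_way), -1.0, device=dev, dtype=dt)
    Y[torch.arange(n, device=dev), support_labels] = 1.0
    rhs = torch.cat([torch.zeros(1, n_way, device=dev, dtype=dt), Y], dim=0)
    sol = torch.linalg.solve(A, rhs)                   # (n + 1, n_way)
    return sol[1:], sol[0]                             # alpha, b

def lssvm_predict(query_feats, support_feats, alpha, b):
    """Decision values of the fitted LSSVM for the queries, shape (m, n_way)."""
    return query_feats @ support_feats.t() @ alpha + b

def add_pseudo_prototypes(support_feats, support_labels, query_feats, scores, n_way):
    """Transductive module: append per-class query prototypes with pseudo labels."""
    pseudo = scores.argmax(dim=1)                      # pseudo labels for the queries
    extra_feats, extra_labels = [], []
    for c in range(n_way):
        mask = pseudo == c
        if mask.any():                                 # prototype = mean query embedding of class c
            extra_feats.append(query_feats[mask].mean(dim=0, keepdim=True))
            extra_labels.append(torch.tensor([c], device=support_labels.device,
                                             dtype=support_labels.dtype))
    if extra_feats:
        support_feats = torch.cat([support_feats] + extra_feats, dim=0)
        support_labels = torch.cat([support_labels] + extra_labels, dim=0)
    return support_feats, support_labels

# Toy 5-way 1-shot episode with random embeddings, for illustration only.
feats_s, labels_s = torch.randn(5, 64), torch.arange(5)
feats_q = torch.randn(75, 64)
alpha, b = lssvm_solve(feats_s, labels_s, n_way=5)
scores = lssvm_predict(feats_q, feats_s, alpha, b)
feats_s2, labels_s2 = add_pseudo_prototypes(feats_s, labels_s, feats_q, scores, n_way=5)
alpha2, b2 = lssvm_solve(feats_s2, labels_s2, n_way=5)   # re-fit on the augmented support set
```

Because the LSSVM reduces to a single linear solve per episode, augmenting the support set with pseudo prototypes and re-fitting the base learner adds little computational overhead.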