Paper Title
tSF: Transformer-based Semantic Filter for Few-Shot Learning
Paper Authors
Abstract
Few-Shot Learning (FSL) alleviates the data-shortage challenge by embedding discriminative, target-aware features across plentiful seen (base) and few unseen (novel) labeled samples. Most feature-embedding modules in recent FSL methods are specially designed for their corresponding learning tasks (e.g., classification, segmentation, and object detection), which limits the generality of the embedded features. To this end, we propose a light and universal module named transformer-based Semantic Filter (tSF), which can be applied to different FSL tasks. The proposed tSF redesigns the inputs of a transformer-based structure with a semantic filter, which not only transfers knowledge from the whole base set to the novel set but also filters semantic features for the target category. Furthermore, the parameter count of tSF is half that of a standard transformer block (less than 1M). In experiments, our tSF boosts performance across different classic few-shot learning tasks (by about 2% improvement), and in particular outperforms the state of the art on multiple benchmark datasets for few-shot classification.
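To make the "semantic filter" idea concrete, here is a minimal numpy sketch of a tSF-like module: a small set of learnable filter tokens cross-attends over a flattened feature map and the attended semantics are added back residually, keeping the output the same shape as the input. All names, shapes, and the residual design here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class SemanticFilterSketch:
    """Hypothetical sketch of a tSF-style cross-attention filter.

    Learnable 'semantic filter' tokens act as queries over the input
    feature map (keys/values); the pooled semantics then re-weight and
    enrich each spatial feature. Shapes and names are assumptions.
    """
    def __init__(self, dim, num_tokens=5, seed=0):
        rng = np.random.default_rng(seed)
        # learnable query tokens standing in for the semantic filter
        self.filters = rng.standard_normal((num_tokens, dim)) * 0.02
        self.w_k = rng.standard_normal((dim, dim)) * 0.02  # key projection
        self.w_v = rng.standard_normal((dim, dim)) * 0.02  # value projection
        self.dim = dim

    def __call__(self, feat):
        # feat: (HW, dim) flattened spatial feature map
        k = feat @ self.w_k
        v = feat @ self.w_v
        # filter tokens attend over all spatial positions: (tokens, HW)
        attn = softmax(self.filters @ k.T / np.sqrt(self.dim))
        sem = attn @ v  # (tokens, dim) pooled semantic summaries
        # each spatial feature pulls from the semantics it matches best
        enrich = softmax(feat @ sem.T / np.sqrt(self.dim)) @ sem
        return feat + enrich  # residual output, same shape as input
```

Note that a single cross-attention of this kind carries roughly the query/key/value projections of one attention layer without the feed-forward sub-block, which is consistent in spirit with the abstract's claim that tSF uses about half the parameters of a standard transformer block.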