Paper Title


Deep Multi-Task Augmented Feature Learning via Hierarchical Graph Neural Network

Authors

Pengxin Guo, Chang Deng, Linjie Xu, Xiaonan Huang, Yu Zhang

Abstract


Deep multi-task learning has attracted much attention in recent years as it achieves good performance in many applications. Feature learning is important to deep multi-task learning for sharing common information among tasks. In this paper, we propose a Hierarchical Graph Neural Network (HGNN) to learn augmented features for deep multi-task learning. The HGNN consists of two-level graph neural networks. In the low level, an intra-task graph neural network is responsible for learning a powerful representation for each data point in a task by aggregating its neighbors. Based on the learned representations, a task embedding can be generated for each task in a similar way to max pooling. In the second level, an inter-task graph neural network updates the task embeddings of all the tasks based on the attention mechanism to model task relations. The task embedding of one task is then used to augment the feature representation of the data points in that task. Moreover, for classification tasks, an inter-class graph neural network is introduced to conduct similar operations at a finer granularity, i.e., the class level, to generate a class embedding for each class in all the tasks; the class embeddings are then used to augment the feature representation. The proposed feature augmentation strategy can be used in many deep multi-task learning models. We analyze the HGNN in terms of training and generalization losses. Experiments on real-world datasets show the significant performance improvement when using this strategy.
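The pipeline described in the abstract (intra-task aggregation, task-embedding pooling, inter-task attention, feature augmentation) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the mean-aggregation and dot-product-attention operators, and the fully connected toy graphs are all assumptions made for illustration.

```python
import numpy as np

def intra_task_aggregate(X, A):
    """Low level: aggregate each data point's neighbors within one task.
    X: (n, d) point features; A: (n, n) adjacency with self-loops.
    (Mean aggregation is an assumption; the paper's GNN may differ.)"""
    deg = A.sum(axis=1, keepdims=True)
    return (A @ X) / np.maximum(deg, 1)

def task_embedding(H):
    """Pool point representations into one task embedding (max pooling)."""
    return H.max(axis=0)

def inter_task_attention(E):
    """Second level: update task embeddings of all tasks with a simple
    dot-product attention over tasks to model task relations."""
    scores = E @ E.T                              # (T, T) pairwise similarities
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)       # row-wise softmax over tasks
    return attn @ E

def augment(X, e):
    """Concatenate a task's embedding onto every data point's features."""
    return np.hstack([X, np.tile(e, (X.shape[0], 1))])

# Two toy tasks, each with 3 data points of 4-dim features.
rng = np.random.default_rng(0)
tasks = [rng.normal(size=(3, 4)) for _ in range(2)]
adjs = [np.ones((3, 3)) for _ in range(2)]        # fully connected + self-loops

H = [intra_task_aggregate(X, A) for X, A in zip(tasks, adjs)]
E = np.stack([task_embedding(h) for h in H])      # (2, 4) task embeddings
E = inter_task_attention(E)                       # model task relations
augmented = [augment(X, e) for X, e in zip(tasks, E)]
print(augmented[0].shape)  # (3, 8): original 4 dims + 4-dim task embedding
```

The class-level branch for classification tasks would repeat the same pool-attend-augment pattern per class rather than per task.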
