Paper Title

Memory based fusion for multi-modal deep learning

Authors

Priyasad, Darshana, Fernando, Tharindu, Denman, Simon, Sridharan, Sridha, Fookes, Clinton

Abstract

The use of multi-modal data for deep machine learning has shown promise when compared to uni-modal approaches, with fusion of multi-modal features resulting in improved performance in several applications. However, most state-of-the-art methods use naive fusion, which processes feature streams independently and ignores possible long-term dependencies within the data during fusion. In this paper, we present a novel Memory Based Attentive Fusion (MBAF) layer, which fuses modes by incorporating both the current features and long-term dependencies in the data, thus allowing the model to understand the relative importance of modes over time. We introduce an explicit memory block within the fusion layer which stores features containing long-term dependencies of the fused data. The feature inputs from uni-modal encoders are fused through attentive composition and transformation, followed by naive fusion of the resultant memory-derived features with the layer inputs. Following state-of-the-art methods, we have evaluated the performance and the generalisability of the proposed fusion approach on two different datasets with different modalities. In our experiments, we replace the naive fusion layer in benchmark networks with our proposed layer to enable a fair comparison. Experimental results indicate that the MBAF layer can generalise across different modalities and networks to enhance fusion and improve performance.
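To make the described fusion flow concrete, below is a minimal NumPy sketch of a memory-based attentive fusion step. It is an illustrative assumption, not the authors' implementation: the function name `mbaf_fuse`, the FIFO memory update, and the weight matrices `W_q` and `W_out` are hypothetical stand-ins for the paper's attentive composition and transformation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax for attention weights.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mbaf_fuse(feat_a, feat_b, memory, W_q, W_out):
    """Hypothetical sketch of one memory-based attentive fusion step.

    feat_a, feat_b : uni-modal encoder features, each of shape (d,)
    memory         : explicit memory block of past fused features, shape (slots, d)
    W_q            : query projection, shape (d, 2*d)   (assumed)
    W_out          : output projection, shape (d, 3*d)  (assumed)
    """
    # Naive fusion of the uni-modal inputs (concatenation).
    fused_in = np.concatenate([feat_a, feat_b])            # (2d,)
    # Attend over the memory slots using the fused input as the query.
    query = W_q @ fused_in                                 # (d,)
    attn = softmax(memory @ query)                         # (slots,)
    mem_feat = attn @ memory                               # memory-derived feature, (d,)
    # Fuse the memory-derived feature back with the layer inputs.
    out = W_out @ np.concatenate([fused_in, mem_feat])     # (d,)
    # Store the new fused feature in memory (FIFO update, an assumption).
    new_memory = np.vstack([memory[1:], out])
    return out, new_memory
```

Called once per time step, the layer can thus weigh current modality features against long-term dependencies held in the memory block, which is the intuition the abstract describes.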
