Paper Title
Learning Multi-dimensional Edge Feature-based AU Relation Graph for Facial Action Unit Recognition
Paper Authors
Paper Abstract
The activations of Facial Action Units (AUs) mutually influence one another. While the relationship between a pair of AUs can be complex and unique, existing approaches fail to specifically and explicitly represent such cues for each pair of AUs in each facial display. This paper proposes an AU relationship modelling approach that learns a unique graph to explicitly describe the relationship between each pair of AUs of the target facial display. Our approach first encodes each AU's activation status and its association with other AUs into a node feature. Then, it learns a pair of multi-dimensional edge features to describe multiple task-specific relationship cues between each pair of AUs. During both node and edge feature learning, our approach also considers the influence of the unique facial display on AUs' relationships by taking the full face representation as an input. Experimental results on the BP4D and DISFA datasets show that both the node and edge feature learning modules provide large performance improvements for CNN- and transformer-based backbones, with our best systems achieving state-of-the-art AU recognition results. Our approach not only has a strong capability for modelling relationship cues for AU recognition but also can be easily incorporated into various backbones. Our PyTorch code is made available.
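The abstract's pipeline (per-AU node features derived from a full-face representation, plus a multi-dimensional edge feature for every AU pair conditioned on that same representation) can be sketched as a minimal PyTorch module. This is an illustrative simplification, not the paper's implementation: all layer choices, dimensions, and the name `AURelationGraph` are assumptions for the sketch.

```python
import torch
import torch.nn as nn

class AURelationGraph(nn.Module):
    """Hypothetical sketch of the abstract's two stages:
    (1) a node feature per AU encoding its activation status,
    (2) a multi-dimensional edge feature per ordered AU pair,
    both conditioned on the full-face representation.
    Layer shapes and names are illustrative assumptions."""

    def __init__(self, num_aus=12, face_dim=512, node_dim=64, edge_dim=16):
        super().__init__()
        self.num_aus = num_aus
        # One linear head per AU maps the face representation to that AU's node feature.
        self.node_heads = nn.ModuleList(
            nn.Linear(face_dim, node_dim) for _ in range(num_aus)
        )
        # Edge features combine the two endpoint node features with the face context,
        # so each pair's relationship cue is specific to the current facial display.
        self.edge_mlp = nn.Sequential(
            nn.Linear(2 * node_dim + face_dim, edge_dim),
            nn.ReLU(),
        )
        # Per-AU activation logit read off the node feature.
        self.classifier = nn.Linear(node_dim, 1)

    def forward(self, face_feat):
        # face_feat: (batch, face_dim) full-face representation from a backbone.
        nodes = torch.stack([h(face_feat) for h in self.node_heads], dim=1)  # (B, N, D)
        B, N, D = nodes.shape
        src = nodes.unsqueeze(2).expand(B, N, N, D)        # feature of AU i, broadcast over j
        dst = nodes.unsqueeze(1).expand(B, N, N, D)        # feature of AU j, broadcast over i
        ctx = face_feat.unsqueeze(1).unsqueeze(1).expand(B, N, N, face_feat.size(-1))
        edges = self.edge_mlp(torch.cat([src, dst, ctx], dim=-1))  # (B, N, N, edge_dim)
        logits = self.classifier(nodes).squeeze(-1)        # (B, N) AU activation logits
        return logits, edges
```

Because the edge MLP sees both endpoints in order, the pair (i, j) and (j, i) yield distinct edge features, matching the abstract's "pair of multi-dimensional edge features" per AU pair.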