Title

AU-Supervised Convolutional Vision Transformers for Synthetic Facial Expression Recognition

Authors

Shuyi Mao, Xinpeng Li, Junyao Chen, Xiaojiang Peng

Abstract


This paper describes our proposed methodology for the six basic expression classification track of the Affective Behavior Analysis in-the-wild (ABAW) Competition 2022. In the Learning from Synthetic Data (LSD) task, facial expression recognition (FER) methods aim to learn representations of expression from artificially generated data and generalise to real data. Because of the ambiguity of the synthetic data and the objectivity of facial Action Units (AUs), we resort to AU information for a performance boost and make the following contributions. First, to adapt the model to synthetic scenarios, we use knowledge from models pre-trained on large-scale face recognition data. Second, we propose a conceptually new framework, termed AU-Supervised Convolutional Vision Transformers (AU-CVT), which clearly improves FER performance by jointly training on auxiliary datasets with AU or pseudo-AU labels. Our AU-CVT achieves an F1 score of $0.6863$ and an accuracy of $0.7433$ on the validation set. The source code of our work is publicly available online: https://github.com/msy1412/ABAW4
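The joint-training idea described above, supervising an expression head alongside an auxiliary AU head, is commonly realised as a weighted multi-task loss. The sketch below illustrates that general pattern only; the function names, loss forms, and the weight `lam` are illustrative assumptions, not the paper's actual AU-CVT implementation:

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    # Cross-entropy for the 6-way basic-expression head.
    z = logits - logits.max()  # shift for numerical stability
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def binary_cross_entropy(logits, targets):
    # Multi-label BCE for the AU head: each AU is an independent
    # active/inactive prediction (targets may be pseudo-AU labels).
    probs = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-12  # avoid log(0)
    return -np.mean(targets * np.log(probs + eps)
                    + (1 - targets) * np.log(1 - probs + eps))

def au_supervised_loss(expr_logits, expr_label, au_logits, au_targets, lam=0.5):
    # Total loss = expression CE + lam * AU BCE; lam is an assumed
    # hyperparameter balancing the auxiliary AU supervision.
    return (softmax_cross_entropy(expr_logits, expr_label)
            + lam * binary_cross_entropy(au_logits, au_targets))
```

In this formulation, samples from auxiliary datasets that carry only AU (or pseudo-AU) labels contribute through the second term, while expression-labelled synthetic samples drive the first.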
