Paper Title

Image-to-Video Re-Identification via Mutual Discriminative Knowledge Transfer

Authors

Pichao Wang, Fan Wang, Hao Li

Abstract

The gap in representations between image and video makes Image-to-Video Re-identification (I2V Re-ID) challenging, and recent works formulate this problem as a knowledge distillation (KD) process. In this paper, we propose a mutual discriminative knowledge distillation framework to transfer a richer video-based representation to an image-based representation more effectively. Specifically, we propose the triplet contrast loss (TCL), a novel loss designed for KD. During the KD process, the TCL loss transfers the local structure, exploits higher-order information, and mitigates the misalignment between the heterogeneous outputs of the teacher and student networks. Compared with other KD losses, the proposed TCL loss selectively transfers local discriminative features from teacher to student, making it effective for ReID. Besides the TCL loss, we adopt mutual learning to regularize the training of both the teacher and student networks. Extensive experiments demonstrate the effectiveness of our method on the MARS, DukeMTMC-VideoReID, and VeRi-776 benchmarks.
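The abstract describes TCL as transferring the teacher's local triplet structure rather than aligning absolute feature values. As a rough illustration of that general idea only (the exact TCL formulation is defined in the paper; the function below, its distance choice, and its term weighting are assumptions), a structure-matching triplet KD loss might be sketched as:

```python
import numpy as np

def l2norm(x):
    """Normalize feature vectors to unit length along the last axis."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def triplet_structure_kd_loss(s_a, s_p, s_n, t_a, t_p, t_n, margin=0.3):
    """Hypothetical sketch of a triplet-based KD loss, NOT the paper's TCL.

    s_* are student (image-branch) embeddings, t_* are teacher
    (video-branch) embeddings for anchor / positive / negative samples.
    Only relative within-triplet distances are matched, so the
    heterogeneous teacher and student outputs need not be aligned
    in absolute value.
    """
    s_a, s_p, s_n = l2norm(s_a), l2norm(s_p), l2norm(s_n)
    t_a, t_p, t_n = l2norm(t_a), l2norm(t_p), l2norm(t_n)

    # Within-triplet distances for student and teacher.
    d_sp = np.linalg.norm(s_a - s_p, axis=-1)
    d_sn = np.linalg.norm(s_a - s_n, axis=-1)
    d_tp = np.linalg.norm(t_a - t_p, axis=-1)
    d_tn = np.linalg.norm(t_a - t_n, axis=-1)

    # Structure-transfer term: teacher distances act as soft targets,
    # so the local (triplet-level) structure is what gets distilled.
    structure = np.mean((d_sp - d_tp) ** 2 + (d_sn - d_tn) ** 2)

    # Standard triplet margin term keeps the student discriminative.
    triplet = np.mean(np.maximum(d_sp - d_sn + margin, 0.0))

    return structure + triplet
```

When the student's embeddings exactly match the teacher's, the structure term vanishes and only the margin term remains, which matches the intuition that distillation pressure disappears once the local structures agree.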
