Paper Title

Joint Visual and Temporal Consistency for Unsupervised Domain Adaptive Person Re-Identification

Authors

Jianing Li, Shiliang Zhang

Abstract

Unsupervised domain adaptive person Re-IDentification (ReID) is challenging because of the large domain gap between source and target domains, as well as the lack of labeled data on the target domain. This paper tackles this challenge through jointly enforcing visual and temporal consistency in the combination of a local one-hot classification and a global multi-class classification. The local one-hot classification assigns images in a training batch with different person IDs, then adopts a Self-Adaptive Classification (SAC) model to classify them. The global multi-class classification is achieved by predicting labels on the entire unlabeled training set with the Memory-based Temporal-guided Cluster (MTC). MTC predicts multi-class labels by considering both visual similarity and temporal consistency to ensure the quality of label prediction. The two classification models are combined in a unified framework, which effectively leverages the unlabeled data for discriminative feature learning. Experimental results on three large-scale ReID datasets demonstrate the superiority of the proposed method in both unsupervised and unsupervised domain adaptive ReID tasks. For example, under the unsupervised setting, our method outperforms recent unsupervised domain adaptive methods, which leverage more labels for training.
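
The abstract only sketches how MTC fuses visual similarity and temporal consistency when predicting multi-class pseudo labels. The following is a minimal Python sketch of that idea, assuming a fused pairwise distance matrix fed to density-based clustering; the function names, the Gaussian temporal penalty, the fusion weight lam, and the DBSCAN parameters are illustrative assumptions, not the paper's actual formulation.

import numpy as np
from sklearn.cluster import DBSCAN

def joint_distance(features, timestamps, lam=0.5, sigma=1.0):
    """Fuse visual and temporal cues into one pairwise distance matrix.

    features:   (N, D) L2-normalized appearance embeddings of unlabeled images.
    timestamps: (N,) capture-time indices of the images (assumed available).
    lam:        fusion weight between the two terms (hypothetical).
    """
    # Visual term: cosine distance between normalized embeddings.
    visual = np.clip(1.0 - features @ features.T, 0.0, 2.0)
    # Temporal term: penalize pairs whose time gap is implausibly large for
    # the same identity (a simple Gaussian penalty, not the paper's model).
    dt = np.abs(timestamps[:, None] - timestamps[None, :])
    temporal = 1.0 - np.exp(-(dt ** 2) / (2.0 * sigma ** 2))
    return (1.0 - lam) * visual + lam * temporal

def predict_pseudo_labels(features, timestamps, eps=0.6):
    """Cluster the whole unlabeled training set on the joint distance to
    obtain multi-class pseudo labels; noise points receive label -1."""
    dist = joint_distance(features, timestamps)
    return DBSCAN(eps=eps, min_samples=4, metric="precomputed").fit_predict(dist)

In such a scheme, images flagged as noise (label -1) could simply be excluded from the global multi-class loss in that training round, while the local one-hot classification still covers every image in the batch.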
