International Workshop on Biometrics and Forensics

Spatial-Temporal Omni-Scale Feature Learning for Person Re-Identification

Abstract

State-of-the-art person re-identification (ReID) models use convolutional neural networks (CNNs) for feature extraction and comparison. These models often fail to capture all the intra- and inter-class variations that arise in person ReID, making it harder to discriminate between data subjects. In this paper we seek to reduce these problems and improve performance by combining two state-of-the-art methods. We use the Omni-Scale Network (OSNet) as our CNN and evaluate on the Market1501 and DukeMTMC-ReID person ReID datasets. To fully exploit these datasets, we apply a spatial-temporal constraint, which extracts the camera ID and timestamp from each image to form a distribution. We combine the two methods into a hybrid model, the Spatial-Temporal Omni-Scale Network (st-OSNet). Our model attains a Rank-1 (R1) accuracy of 98.2% and a mean average precision (mAP) of 92.7% on the Market1501 dataset. On the DukeMTMC-ReID dataset it achieves 94.3% R1 and 86.1% mAP, thereby surpassing OSNet by a large margin on both datasets (OSNet: 94.3% R1 / 86.4% mAP on Market1501 and 88.4% R1 / 76.1% mAP on DukeMTMC-ReID).
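The abstract gives no implementation detail beyond "extract the camera ID and timestamp from each image to form a distribution". Assuming the constraint follows the common st-ReID-style recipe (per-camera-pair histograms of time gaps between same-identity sightings, fused with the visual score by logistic smoothing), a minimal NumPy sketch might look like the following; build_st_histogram, joint_score, and the lam/gamma parameters are illustrative names, not the authors' code.

```python
import numpy as np

def build_st_histogram(cam_ids, timestamps, labels, num_cams,
                       num_bins=100, bin_size=100):
    """Estimate, for each ordered camera pair, a distribution over the
    time gap between sightings of the same identity (training set)."""
    hist = np.zeros((num_cams, num_cams, num_bins))
    n = len(labels)
    for i in range(n):
        for j in range(n):
            if i == j or labels[i] != labels[j]:
                continue  # only same-identity pairs contribute
            gap = abs(timestamps[i] - timestamps[j])
            b = min(int(gap // bin_size), num_bins - 1)
            hist[cam_ids[i], cam_ids[j], b] += 1
    totals = hist.sum(axis=2, keepdims=True)
    return hist / np.maximum(totals, 1)  # normalise per camera pair

def joint_score(visual_sim, st_prob, lam=0.3, gamma=5.0):
    """Fuse OSNet visual similarity with the spatial-temporal probability
    via logistic smoothing (lam and gamma are assumed hyperparameters)."""
    smooth = lambda x: 1.0 / (1.0 + lam * np.exp(-gamma * x))
    return smooth(visual_sim) * smooth(st_prob)
```

At query time, the joint score for a query-gallery pair would then combine the cosine similarity of their OSNet feature vectors with the histogram probability looked up from the pair's camera IDs and binned time gap.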
