International Conference on Artificial Neural Networks

Deep Domain Knowledge Distillation for Person Re-identification



Abstract

Learning generic and robust representations from data spanning multiple domains is a major challenge in person re-identification (ReID). In this paper, we propose an end-to-end framework called Deep Domain Knowledge Distillation (D²KD) for learning more generic and robust features with Convolutional Neural Networks (CNNs). Domain-specific knowledge learned by an auxiliary network is transferred to the domain-free subnetwork and guides the optimization of the feature extractor, while person identity information is transferred to the auxiliary network so that it can identify domain classes more accurately. At test time, with just a single base model as the feature extractor, we improve Rank-1 and mAP by a clear margin. Experiments on Market-1501, CUHK03 and DukeMTMC-reID demonstrate the effectiveness of our method.
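The abstract does not specify the exact losses used to transfer knowledge between the auxiliary (domain) branch and the domain-free branch. As a minimal illustration of the general knowledge-distillation mechanism such frameworks build on, the sketch below computes a Hinton-style temperature-softened KL-divergence loss between two branches' logits; the function names and the temperature value are assumptions for illustration, not the paper's method.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Generic KD loss: KL(teacher || student) on softened distributions,
    averaged over the batch and scaled by T^2 (standard KD convention).
    Here 'teacher' would play the role of the guiding branch and
    'student' the branch being regularized -- an assumption, since the
    abstract does not state the loss form."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return float(kl.mean()) * T * T

# Identical logits give (near-)zero loss; diverging logits give a positive one.
rng = np.random.default_rng(0)
logits = rng.standard_normal((8, 3))
same = distillation_loss(logits, logits)
diff = distillation_loss(logits, logits + rng.standard_normal((8, 3)))
```

In practice this term would be added to the usual identity-classification losses of each branch, so the domain-free features are pulled toward distributions informed by domain knowledge during training while only the single base model is kept at test time.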

