Knowledge-Based Systems

Joint metric and feature representation learning for unsupervised domain adaptation



Abstract

Domain adaptation algorithms leverage knowledge from a well-labeled source domain to facilitate learning in an unlabeled target domain, where the two domains are related but drawn from different data distributions. Existing approaches either explicitly mitigate the distribution gap by minimizing a distance metric, or learn a new feature representation by revealing the factors shared across domains and using that representation as a bridge for knowledge transfer. Recently, several researchers have argued that jointly optimizing the distribution gap and the latent factors yields a better transfer model. In this paper, we therefore propose a novel approach that simultaneously mitigates the distribution gap and learns a feature representation via a common objective. Specifically, we present Joint Metric and Feature representation Learning (JMFL) for unsupervised domain adaptation. On the one hand, JMFL minimizes the discrepancy between the source domain and the target domain; on the other hand, it reveals the underlying factors shared by the two domains to learn a new feature representation. We incorporate both aspects into a unified objective and present a detailed optimization method. Extensive experiments on several open benchmarks verify that our approach achieves state-of-the-art results with significant improvements. (C) 2019 Elsevier B.V. All rights reserved.
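The central idea of the abstract, a single objective that simultaneously minimizes a domain-discrepancy metric and learns a shared feature representation, can be illustrated with a toy sketch. The code below is not the authors' JMFL formulation; it is a minimal stand-in that combines a linear-kernel MMD (the metric term) with a variance-preservation term (the representation term) over one shared linear projection, optimized by plain gradient descent.

```python
import numpy as np

def mmd_linear(A, B):
    """Empirical Maximum Mean Discrepancy with a linear kernel:
    the squared Euclidean distance between the two sample means."""
    delta = A.mean(axis=0) - B.mean(axis=0)
    return float(delta @ delta)

def jmfl_sketch(Xs, Xt, k=2, lam=5.0, lr=0.01, steps=300, seed=0):
    """Toy joint objective on a linear projection W (d x k):
    keep variance (representation term) while shrinking the
    linear-kernel MMD between the projected domains (metric term):

        min_W  -tr(W^T C W) + lam * tr(W^T M W),   s.t. W^T W = I

    solved by gradient descent with QR re-orthonormalization."""
    rng = np.random.default_rng(seed)
    d = Xs.shape[1]
    W = np.linalg.qr(rng.standard_normal((d, k)))[0]
    C = np.cov(np.vstack([Xs, Xt]), rowvar=False)   # shared covariance
    mu = Xs.mean(axis=0) - Xt.mean(axis=0)          # domain mean gap
    M = np.outer(mu, mu)                            # tr(W^T M W) = ||W^T mu||^2
    for _ in range(steps):
        grad = -2.0 * C @ W + 2.0 * lam * M @ W
        W = np.linalg.qr(W - lr * grad)[0]          # descend, stay orthonormal
    return W

# Two 3-D domains that differ only by a mean shift along the first axis.
rng = np.random.default_rng(1)
Xs = rng.standard_normal((200, 3)) + np.array([1.0, 0.0, 0.0])
Xt = rng.standard_normal((200, 3)) + np.array([-1.0, 0.0, 0.0])
W = jmfl_sketch(Xs, Xt)
# The learned projection shrinks the domain gap relative to the raw features.
print(mmd_linear(Xs @ W, Xt @ W) < mmd_linear(Xs, Xt))
```

Because both terms act on the same projection W, the discrepancy penalty steers the learned representation away from directions that separate the domains, which is the kind of coupling a joint objective buys over optimizing the two parts separately.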
