International Conference on Pattern Recognition Applications and Methods

Domain Adaptation Transfer Learning by Kernel Representation Adaptation


Abstract

Domain adaptation, where no labeled target data is available, is a challenging task. To solve this problem, we first propose a new SVM-based approach with a supplementary Maximum Mean Discrepancy (MMD)-like constraint. With this heuristic, source and target data are projected onto a common subspace of a Reproducing Kernel Hilbert Space (RKHS) where both data distributions are expected to become similar. Therefore, a classifier trained on source data should also perform well on target data, provided the conditional probabilities of the labels are similar for source and target data, which is the main assumption of this paper. We demonstrate that adding this constraint does not change the quadratic nature of the optimization problem, so common quadratic optimization tools can still be used. Secondly, following the same idea that making source and target data similar should enable efficient transfer learning, and under the same assumption, a Kernel Principal Component Analysis (KPCA) based transfer learning method is proposed. Unlike the first heuristic, this second method also aligns higher-order moments in the RKHS, which leads to better performance. Here again, MMD is selected as the similarity measure, and a linear transformation is then applied to further improve the alignment between source and target data. We finally compare both methods with other transfer learning methods from the literature to show their efficiency on synthetic and real datasets.
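To make the two ingredients of the abstract concrete, the following Python sketch (not the authors' implementation) shows how an MMD estimate between source and target samples can be computed with an RBF kernel, and how pooled source/target data can be projected onto a common kernel subspace with scikit-learn's KernelPCA before an SVM is trained on the projected source data. The MMD-like constraint inside the SVM, the higher-order moment alignment and the final linear transformation described above are not reproduced here; the toy data and parameters (e.g. the kernel bandwidth gamma) are illustrative assumptions.

import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def mmd_rbf(X_src, X_tgt, gamma=1.0):
    # Biased estimate of the squared Maximum Mean Discrepancy with an RBF kernel:
    # MMD^2 = E[k(s, s')] + E[k(t, t')] - 2 E[k(s, t)]
    k_ss = rbf_kernel(X_src, X_src, gamma=gamma).mean()
    k_tt = rbf_kernel(X_tgt, X_tgt, gamma=gamma).mean()
    k_st = rbf_kernel(X_src, X_tgt, gamma=gamma).mean()
    return k_ss + k_tt - 2.0 * k_st

# Toy data: labeled source domain and an unlabeled, covariate-shifted target domain.
rng = np.random.default_rng(0)
X_src = rng.normal(0.0, 1.0, size=(100, 5))
y_src = (X_src[:, 0] > 0).astype(int)
X_tgt = rng.normal(0.5, 1.2, size=(80, 5))

print("MMD^2 before adaptation:", mmd_rbf(X_src, X_tgt))

# KPCA-style adaptation sketch: represent both domains in the same kernel subspace,
# then train on the projected source data and predict the unlabeled target data.
kpca = KernelPCA(n_components=3, kernel="rbf", gamma=1.0)
Z = kpca.fit_transform(np.vstack([X_src, X_tgt]))
Z_src, Z_tgt = Z[:len(X_src)], Z[len(X_src):]

clf = SVC(kernel="rbf").fit(Z_src, y_src)
y_tgt_pred = clf.predict(Z_tgt)
print("MMD^2 in the shared subspace:", mmd_rbf(Z_src, Z_tgt))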
