International Conference on Artificial Neural Networks

Joint Metric Learning on Riemannian Manifold of Global Gaussian Distributions



Abstract

In many computer vision tasks, images or image sets can be modeled as Gaussian distributions to capture the underlying data distribution. The challenge of using Gaussians to model vision data is that the space of Gaussians is not a linear space. From the perspective of information geometry, Gaussian distributions lie on a specific Riemannian manifold. In this paper, we present a joint metric learning (JML) model on the Riemannian manifold of Gaussian distributions. The distance between two Gaussians is defined as the sum of the Mahalanobis distance between the mean vectors and the log-Euclidean distance (LED) between the covariance matrices. We formulate the multi-metric learning model by jointly learning the Mahalanobis distance and the log-Euclidean distance under pairwise constraints. Sample pair weights are embedded to select the most informative pairs for learning the discriminative distance metric. Experiments on video-based face recognition, object recognition and material classification show that JML is superior to state-of-the-art metric learning algorithms for Gaussians.
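A minimal sketch of the distance defined in the abstract, assuming the log-Euclidean term is taken as the squared Frobenius norm between the matrix logarithms of the covariances; in the paper this term and the Mahalanobis matrix are learned jointly under pairwise constraints, so the function name `jml_distance`, the matrix `M`, and the toy data below are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.linalg import logm

def jml_distance(mu1, cov1, mu2, cov2, M):
    """Squared distance between Gaussians N(mu1, cov1) and N(mu2, cov2):
    a Mahalanobis term on the mean vectors (with a PSD matrix M) plus the
    log-Euclidean distance between the covariance matrices."""
    diff = mu1 - mu2
    maha = float(diff @ M @ diff)                  # Mahalanobis distance between means
    L = np.real(logm(cov1)) - np.real(logm(cov2))  # difference of matrix logs (SPD inputs)
    led = float(np.sum(L * L))                     # squared Frobenius norm = log-Euclidean term
    return maha + led

# Toy usage with random SPD covariances (hypothetical data).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); cov1 = A @ A.T + 5.0 * np.eye(5)
B = rng.standard_normal((5, 5)); cov2 = B @ B.T + 5.0 * np.eye(5)
d = jml_distance(rng.standard_normal(5), cov1,
                 rng.standard_normal(5), cov2, M=np.eye(5))
```

Replacing `np.eye(5)` with a matrix learned from pairwise (similar/dissimilar) constraints, and weighting the log-Euclidean term analogously, would correspond to the joint learning setup the abstract describes.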
