
Convex large margin training techniques: Unsupervised, semi-supervised, and robust support vector machines.


Abstract

Support vector machines (SVMs) have been a dominant machine learning technique for more than a decade. The intuitive principle behind SVM training is to find the maximum margin separating hyperplane for a given set of binary labeled training data. Previously, SVMs have been primarily applied to supervised learning problems, where target class labels are provided with the data. Developing unsupervised extensions to SVMs, where no class labels are given, turns out to be a challenging problem. In this dissertation, I propose a principled approach for unsupervised and semi-supervised SVM training by formulating convex relaxations of the natural training criterion: find a (constrained) labeling that would yield an optimal SVM classifier on the resulting labeled training data. This relaxation yields a semidefinite program (SDP) that can be solved in polynomial time. The resulting training procedures can be applied to two-class and multi-class problems, and ultimately to the multivariate case, achieving high quality results in each case. In addition to unsupervised training, I also consider the problem of reducing the outlier sensitivity of standard supervised SVM training. Here I show that a similar convex relaxation can be applied to improve the robustness of SVMs by explicitly suppressing outliers in the training process. The proposed approach can achieve superior results to standard SVMs in the presence of outliers.
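The max-margin principle described above can be illustrated with a minimal sketch using scikit-learn on hypothetical toy data (this is a standard supervised SVM, not the dissertation's SDP relaxation; the data and parameter choices are assumptions for illustration only):

```python
# A minimal sketch of the max-margin principle: find the separating
# hyperplane w.x + b = 0 that maximizes the geometric margin 2 / ||w||.
import numpy as np
from sklearn.svm import SVC

# Linearly separable toy data with binary labels (illustrative only).
X = np.array([[0., 0.], [0., 1.], [1., 0.],   # class -1
              [3., 3.], [3., 4.], [4., 3.]])  # class +1
y = np.array([-1, -1, -1, 1, 1, 1])

# A very large C approximates hard-margin training; a smaller C yields a
# soft margin, which is one standard way to reduce outlier sensitivity.
clf = SVC(kernel="linear", C=1e6).fit(X, y)

w, b = clf.coef_[0], clf.intercept_[0]
margin = 2.0 / np.linalg.norm(w)
print("number of support vectors:", len(clf.support_))
print("geometric margin: %.3f" % margin)
```

On separable data like this, the hard-margin solution classifies every training point correctly, and only the points touching the margin boundaries appear as support vectors.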

Bibliographic Information

  • Author: Xu, Linli.
  • Affiliation: University of Waterloo (Canada).
  • Degree grantor: University of Waterloo (Canada).
  • Subject: Computer Science.
  • Degree: Ph.D.
  • Year: 2007
  • Pages: 138 p.
  • Total pages: 138
  • Format: PDF
  • Language: English
  • CLC classification: Automation and computer technology
  • Keywords
