Conference: 2011 10th International Conference on Machine Learning and Applications and Workshops (ICMLA)

Multiple Nonlinear Subspace Methods Using Subspace-based Support Vector Machines

Abstract

In this paper, we propose multiple nonlinear subspace methods (MNSMs), in which each class consists of several subspaces with different kernel parameters. For each class and each candidate kernel parameter, we generate a subspace by kernel principal component analysis (KPCA) and compute the projection length of an input vector onto each subspace. Then, for each class, we define the discriminant function as the weighted sum of these projection lengths. The weights in the discriminant function are optimized by subspace-based support vector machines (SS-SVMs) so that the margin between classes is maximized while the classification error is minimized. Thus, the subspaces of each class are weighted from the standpoint of class separability. Moreover, the computational cost of model selection for MNSMs is lower than that for SS-SVMs, because SS-SVMs require two hyperparameters, the kernel parameter and the margin parameter, to be chosen before training. We demonstrate the advantages of the proposed method through computer experiments on benchmark data sets.
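As an illustration of the pipeline the abstract describes, the following is a minimal sketch in Python, assuming scikit-learn's KernelPCA with an RBF kernel for the per-class subspaces. The SS-SVM weight optimization is approximated here by a per-class linear SVM (LinearSVC) trained on the squared projection lengths, so this is only a rough stand-in for the discriminant-weight learning proposed in the paper; the class name MNSMSketch, the candidate gamma values, and n_components are illustrative choices, not taken from the paper.

```python
# Hypothetical sketch, not the authors' implementation.  Assumes scikit-learn;
# the SS-SVM weight optimization is replaced by per-class linear SVMs.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.svm import LinearSVC


class MNSMSketch:
    """Rough approximation of multiple nonlinear subspace methods (MNSMs).

    For every class and every candidate RBF kernel parameter gamma, a KPCA
    subspace is fitted on that class's training vectors.  The discriminant of
    a class is a weighted sum of the squared projection lengths of an input
    onto that class's subspaces; here the weights are learned with a binary
    (class vs. rest) linear SVM instead of the SS-SVM of the paper.
    """

    def __init__(self, gammas=(0.1, 1.0, 10.0), n_components=5, C=1.0):
        self.gammas = gammas          # candidate kernel parameters (illustrative)
        self.n_components = n_components
        self.C = C                    # margin parameter of the linear SVM

    def _lengths(self, X, models):
        # Squared projection length of each sample onto each KPCA subspace.
        return np.column_stack(
            [(m.transform(X) ** 2).sum(axis=1) for m in models]
        )

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.kpcas_, self.svms_ = {}, {}
        for c in self.classes_:
            Xc = X[y == c]
            # One KPCA subspace per candidate kernel parameter for this class.
            self.kpcas_[c] = [
                KernelPCA(n_components=self.n_components, kernel="rbf",
                          gamma=g).fit(Xc)
                for g in self.gammas
            ]
            # Learn the subspace weights from the projection lengths.
            L = self._lengths(X, self.kpcas_[c])
            self.svms_[c] = LinearSVC(C=self.C).fit(L, (y == c).astype(int))
        return self

    def decision_function(self, X):
        # One weighted-length discriminant value per class.
        return np.column_stack(
            [self.svms_[c].decision_function(self._lengths(X, self.kpcas_[c]))
             for c in self.classes_]
        )

    def predict(self, X):
        return self.classes_[np.argmax(self.decision_function(X), axis=1)]
```

In this setup only the margin parameter C remains to be tuned, which loosely mirrors the abstract's point that MNSMs avoid having to select a single kernel parameter before training.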