Global convergence of the sequential minimal optimization (SMO) algorithm for support vector regression (SVR) is studied in this paper. Given $l$ training samples, SVR is formulated as a convex quadratic programming (QP) problem with $l$ pairs of variables. We prove that if two pairs of variables violating the optimality condition are chosen for update in each step and subproblems are solved in a certain way, then the SMO algorithm always stops within a finite number of iterations after finding an optimal solution. Also, efficient implementation techniques for the SMO algorithm are presented and compared experimentally with other SMO algorithms.
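For reference, the convex QP with $l$ pairs of variables mentioned above is, in the standard $\varepsilon$-insensitive SVR dual (notation such as $C$, $\varepsilon$, and the kernel $K$ is the conventional one, not defined in this abstract):

```latex
\min_{\alpha,\,\alpha^{*}}\quad
\frac{1}{2}\sum_{i=1}^{l}\sum_{j=1}^{l}
(\alpha_i-\alpha_i^{*})(\alpha_j-\alpha_j^{*})\,K(x_i,x_j)
\;+\;\varepsilon\sum_{i=1}^{l}(\alpha_i+\alpha_i^{*})
\;-\;\sum_{i=1}^{l}y_i(\alpha_i-\alpha_i^{*})
```

```latex
\text{subject to}\qquad
\sum_{i=1}^{l}(\alpha_i-\alpha_i^{*})=0,
\qquad
0\le\alpha_i,\;\alpha_i^{*}\le C,\quad i=1,\dots,l.
```

Each training sample contributes the pair $(\alpha_i,\alpha_i^{*})$; an SMO step as described in the abstract selects two such pairs that violate the Karush-Kuhn-Tucker optimality conditions and solves the resulting small subproblem analytically.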