Training a support vector regression (SVR) model requires solving a large convex quadratic programming (QP) optimization problem. Although this class of optimization problems is well understood, existing SVR training algorithms remain complex and converge slowly. To address these problems, this paper first applies an extension that gives SVR the same mathematical form as support vector classification (SVC), and on that basis proposes a simplified successive overrelaxation (SOR) algorithm for large-scale SVR problems. Experimental results show that on large datasets the new training method converges considerably faster than other methods, which require keeping a substantial amount of data in memory, making it well suited to the design of regression training algorithms under large-scale sample conditions.
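The core idea of SOR training can be illustrated with a minimal sketch: a projected successive-overrelaxation sweep on a toy box-constrained QP of the form that arises in SVR duals, minimize 0.5·aᵀQa − cᵀa subject to 0 ≤ a ≤ C. The matrix Q, the bound C, and the relaxation factor omega below are illustrative assumptions, not the paper's actual formulation.

```python
# Hedged sketch: projected SOR for a box-constrained QP,
#   minimize 0.5 * a'Qa - c'a  subject to  0 <= a <= C.
# Q, c, C, and omega are illustrative; this is not the paper's exact algorithm.
import numpy as np

def sor_qp(Q, c, C, omega=1.3, iters=500):
    """Successive overrelaxation with projection onto the box [0, C]."""
    n = len(c)
    a = np.zeros(n)
    for _ in range(iters):
        for i in range(n):  # sweep coordinates in order (Gauss-Seidel style)
            grad_i = Q[i] @ a - c[i]  # i-th component of the QP gradient
            # Relaxed coordinate step, then clip back into the feasible box.
            a[i] = min(C, max(0.0, a[i] - omega * grad_i / Q[i, i]))
    return a

# Toy problem: Q is positive definite, so the QP has a unique solution.
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
c = np.array([1.0, 1.0])
a = sor_qp(Q, c, C=10.0)
```

Because each sweep touches one coordinate at a time, the method needs only one row of Q in memory per update, which is the property the abstract credits for its advantage on large datasets.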