Electronic Journal of Statistics

A Fisher consistent multiclass loss function with variable margin on positive examples



Abstract

The concept of pointwise Fisher consistency (or classification calibration) gives necessary and sufficient conditions for Bayes consistency when a classifier minimizes a surrogate loss function instead of the 0-1 loss. We present a family of multiclass hinge loss functions defined by a continuous control parameter $\lambda$ representing the margin of the positive points of a given class. The parameter $\lambda$ allows shifting from classification-uncalibrated to classification-calibrated loss functions. Although previous results suggest that increasing the margin of positive points benefits the classification model, other approaches have failed to give increasing weight to the positive examples without losing the classification calibration property. Our $\lambda$-based loss function can give unlimited weight to the positive examples without breaking the classification calibration property. Moreover, when these loss functions are embedded into the Support Vector Machine framework ($\lambda$-SVM), the parameter $\lambda$ defines different regions for the Karush-Kuhn-Tucker conditions. A large margin on positive points also speeds up convergence of the Sequential Minimal Optimization algorithm, leading to lower training times than other classification-calibrated methods. $\lambda$-SVM is easy to implement, and its practical use on different datasets not only supports our theoretical analysis but also yields good classification performance and fast training times.
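The abstract describes a hinge-loss family whose positive-example margin is controlled by $\lambda$. The paper's exact functional form is not reproduced here; the sketch below is an illustrative assumption, not the paper's definition: it demands a margin of `lam` on the true-class score while keeping a unit margin on the wrong-class scores, so raising `lam` raises the penalty contributed by the positive examples.

```python
import numpy as np

def multiclass_hinge_variable_margin(scores, y, lam=1.0):
    """Illustrative multiclass hinge loss with a variable margin on the
    positive (true-class) score.

    scores: (n_samples, n_classes) array of decision values f_j(x)
    y:      (n_samples,) array of integer class labels
    lam:    margin demanded of the true-class score
    """
    n = scores.shape[0]
    true_scores = scores[np.arange(n), y]
    # Penalize a positive example whose true-class score falls below lam ...
    pos_loss = np.maximum(0.0, lam - true_scores)
    # ... and penalize every wrong class with the usual unit margin.
    neg = np.maximum(0.0, 1.0 + scores)
    neg[np.arange(n), y] = 0.0  # the true class is handled by pos_loss
    return (pos_loss + neg.sum(axis=1)).mean()
```

With this toy form, increasing `lam` monotonically increases the weight placed on the positive examples, mirroring the role the abstract assigns to $\lambda$.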

