Universal Well-Calibrated Algorithm for On-Line Classification

Abstract

We study the problem of on-line classification in which the prediction algorithm is given a "confidence level" 1 − δ and is required to output as its prediction a range of labels (intuitively, those labels deemed compatible with the available data at the level δ) rather than just one label; as usual, the examples are assumed to be generated independently from the same probability distribution P. The prediction algorithm is said to be "well-calibrated" for P and δ if the long-run relative frequency of errors does not exceed δ almost surely w.r.t. P. For well-calibrated algorithms we take the number of "uncertain" predictions (i.e., those containing more than one label) as the principal measure of predictive performance. The main result of this paper is the construction of a prediction algorithm which, for any (unknown) P and any δ: (a) makes errors independently and with probability δ at every trial (in particular, is well-calibrated for P and δ); (b) makes in the long run no more uncertain predictions than any other prediction algorithm that is well-calibrated for P and δ; (c) processes example n in time O(log n).
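
To make the setting concrete, the following is a minimal sketch, assuming a generic smoothed conformal predictor of the kind the abstract describes: given the confidence level 1 − δ, it outputs the set of labels whose randomized p-value exceeds δ. The nearest-neighbour nonconformity score, the function names, and the toy data are illustrative assumptions, not taken from the paper, and the sketch makes no attempt at the O(log n) per-example processing of the paper's construction.

```python
# A minimal sketch, assuming a generic smoothed conformal predictor of the
# kind described in the abstract.  The nonconformity score, names, and data
# below are illustrative assumptions, not taken from the paper, and no
# attempt is made at the paper's O(log n) per-example processing.
import random


def nonconformity(others, x, y):
    """Distance from (x, y) to the nearest other example with the same label
    (a simple illustrative nonconformity score)."""
    same = [abs(x - xi) for xi, yi in others if yi == y]
    return min(same) if same else float("inf")


def predict_set(history, x_new, labels, delta, rng=random.random):
    """Return the set of labels deemed compatible with the data at level delta."""
    region = set()
    for y in labels:
        augmented = history + [(x_new, y)]
        scores = []
        for i, (xi, yi) in enumerate(augmented):
            others = augmented[:i] + augmented[i + 1:]
            scores.append(nonconformity(others, xi, yi))
        a_new = scores[-1]                      # score of the hypothetical new example
        greater = sum(1 for a in scores if a > a_new)
        equal = sum(1 for a in scores if a == a_new)
        # Smoothed (randomized) p-value: ties are broken at random, which is
        # what makes errors occur independently with probability exactly delta
        # under exchangeability.
        p_value = (greater + rng() * equal) / len(augmented)
        if p_value > delta:                     # keep labels that are not rejected
            region.add(y)
    return region


if __name__ == "__main__":
    history = [(0.1, "a"), (0.2, "a"), (0.9, "b"), (1.0, "b")]
    # At confidence level 1 - delta = 0.8, output the range of labels for x = 0.15.
    print(predict_set(history, 0.15, labels=["a", "b"], delta=0.2))
```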