Italian workshop on neural nets

A Short Review of Statistical Learning Theory



Abstract

Statistical learning theory has emerged in the last few years as a solid and elegant framework for studying the problem of learning from examples. Unlike previous "classical" learning techniques, this theory completely characterizes the necessary and sufficient conditions for a learning algorithm to be consistent. The key quantity is the capacity of the set of hypotheses employed by the learning algorithm, and the goal is to control this capacity depending on the given examples. Structural risk minimization (SRM) is the main theoretical algorithm implementing this idea. SRM is inspired by, and closely related to, regularization theory. For practical purposes, however, SRM is a very hard problem, impossible to implement when dealing with a large number of examples. Techniques such as support vector machines and the older regularization networks are a viable way to implement capacity control. The paper also discusses how these techniques can be formulated as a variational problem in a Hilbert space, and shows how SRM can be extended to implement both classical regularization networks and support vector machines.
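The capacity-control idea described in the abstract can be illustrated with a minimal sketch, assuming a linear regularization network (Tikhonov/ridge regression on polynomial features) as a stand-in for the methods the paper discusses; the data, feature map, and regularization values below are hypothetical and chosen only for illustration. Increasing the regularization parameter shrinks the effective capacity of the hypothesis set, trading training-set fit for a smaller-norm (simpler) solution:

```python
import numpy as np

# Hypothetical illustration of capacity control: minimize
#   (1/n) * sum_i (y_i - <w, phi(x_i)>)^2 + lam * ||w||^2
# over linear functions of a polynomial feature map. Larger lam
# restricts the effective capacity of the hypothesis set.

rng = np.random.default_rng(0)

# Noisy samples of a smooth target function (illustrative data).
n = 30
x = rng.uniform(-1.0, 1.0, size=n)
y = np.sin(3.0 * x) + 0.3 * rng.standard_normal(n)

def features(x, degree=10):
    # Polynomial feature map: a simple nested family of hypothesis spaces.
    return np.vander(x, degree + 1, increasing=True)

Phi = features(x)

def ridge_fit(Phi, y, lam):
    # Closed-form Tikhonov solution: w = (Phi^T Phi + n*lam*I)^{-1} Phi^T y
    d = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + len(y) * lam * np.eye(d), Phi.T @ y)

for lam in (1e-6, 1e-3, 1e-1):
    w = ridge_fit(Phi, y, lam)
    train_mse = np.mean((Phi @ w - y) ** 2)
    print(f"lam={lam:g}  train MSE={train_mse:.4f}  ||w||={np.linalg.norm(w):.2f}")
```

As lam grows, the solution norm decreases monotonically and the training error can only increase; the theory reviewed in the paper explains why an intermediate choice, adapted to the given examples, controls the expected risk.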
