
A Bayesian decision theoretical approach to supervised learning, selective sampling, and empirical function optimization.



Abstract

Many have used the principles of statistics and Bayesian decision theory to model specific learning problems. It is less common to see models of the learning process in general. One exception is the model of the supervised learning process known as the "Extended Bayesian Formalism," or EBF. This model is descriptive, in that it can describe and compare learning algorithms; thus the EBF is capable of modeling both effective and ineffective learning algorithms.

We extend the EBF to model unsupervised learning, semi-supervised learning, supervised learning, and empirical function optimization. We also generalize the utility model of the EBF to deal with non-deterministic outcomes and with utility functions other than 0-1 loss. Finally, we modify the EBF to create a "prescriptive" learning model, meaning that, instead of describing existing algorithms, our model defines how learning should optimally take place. We call the resulting model the Unified Bayesian Decision Theoretical Model, or UBDTM. We show that this model can serve as a cohesive theory and framework in which a broad range of questions can be analyzed and studied. Such a broadly applicable unified theoretical framework is one of the major missing ingredients of machine learning theory.

Using the UBDTM, we concentrate on supervised learning and empirical function optimization. We then use the UBDTM to reanalyze many important theoretical issues in machine learning, including No-Free-Lunch, utility implications, and active learning. We also point forward to future directions for using the UBDTM to model learnability, sample complexity, and ensembles.

We also provide practical applications of the UBDTM by using the model to train a Bayesian variation of the CMAC supervised learner in closed form, to perform a practical empirical function optimization task, and as part of the guiding principles behind an ongoing project to create an electronic and print corpus of tagged ancient Syriac texts using active learning.

Keywords: Machine Learning, Supervised Learning, Function Optimization, Empirical Function Optimization, Statistics, Bayes, Bayes Law, Bayesian, Bayesian Learning, Decision Theory, Utility Theory, Unified Bayesian Model, UBM, Unified Bayesian Decision Theoretical Model, UBDTM, Learning Framework, No-Free-Lunch, NFL, a priori Learning, Extended Bayesian Formalism, EBF, Bias, Inductive Bias, Hypothesis Space, Function Class, Active Learning, Uncertainty Sampling, Query by Uncertainty, Query by Committee, Expected Value of Sample Information, EVSI.
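To make the abstract's central ideas concrete, here is a minimal illustrative sketch (not taken from the dissertation) of two of the concepts it names: choosing a Bayes-optimal action by maximizing posterior expected utility under an arbitrary utility function, and scoring a candidate query by the expected value of sample information (EVSI). It assumes the simplest possible setting, a Bernoulli label with a Beta prior; all function names and numbers are illustrative assumptions.

```python
# Illustrative sketch: Bayes-optimal decisions and EVSI for a Bernoulli
# label probability theta with a Beta(alpha, beta) posterior.
# This is a toy stand-in for the general decision-theoretic machinery
# described in the abstract, not the dissertation's actual model.

def posterior_mean(alpha, beta):
    # Mean of a Beta(alpha, beta) posterior over theta = P(label = 1);
    # also the posterior predictive probability that the next label is 1.
    return alpha / (alpha + beta)

def expected_utility(action, alpha, beta, utility):
    # Posterior expected utility E[U(action, label)] of taking `action`.
    # `utility` may be any function of (action, label), not just 0-1 loss.
    p1 = posterior_mean(alpha, beta)
    return p1 * utility(action, 1) + (1 - p1) * utility(action, 0)

def bayes_action(alpha, beta, actions, utility):
    # Bayes-optimal action: maximize posterior expected utility.
    return max(actions, key=lambda a: expected_utility(a, alpha, beta, utility))

def evsi(alpha, beta, actions, utility):
    # Expected value of sampling one more labeled example: the expected
    # best-achievable utility after updating on the label, minus the
    # best-achievable utility with the current posterior. Always >= 0.
    u_now = expected_utility(bayes_action(alpha, beta, actions, utility),
                             alpha, beta, utility)
    p1 = posterior_mean(alpha, beta)
    # Conjugate update: seeing label 1 gives Beta(alpha+1, beta),
    # seeing label 0 gives Beta(alpha, beta+1).
    u_if_1 = expected_utility(bayes_action(alpha + 1, beta, actions, utility),
                              alpha + 1, beta, utility)
    u_if_0 = expected_utility(bayes_action(alpha, beta + 1, actions, utility),
                              alpha, beta + 1, utility)
    return p1 * u_if_1 + (1 - p1) * u_if_0 - u_now
```

For example, with the 0-1 gain utility `lambda a, y: 1.0 if a == y else 0.0` and actions `[0, 1]`, a flat `Beta(1, 1)` prior yields a strictly positive EVSI (one label is genuinely informative), while `bayes_action` simply predicts the more probable label. An active learner in this spirit would query the example with the largest EVSI, which is one of the selective-sampling criteria the keywords list names.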

Record details

  • Author: Carroll, James L.
  • Affiliation: Brigham Young University.
  • Degree-granting institution: Brigham Young University.
  • Subject: Computer Science.
  • Degree: Ph.D.
  • Year: 2010
  • Pages: 357 p.
  • Total pages: 357
  • Format: PDF
  • Language: English
