
Discretizing Continuous Attributes While Learning Bayesian Networks


Abstract

We introduce a method for learning Bayesian networks that handles the discretization of continuous variables as an integral part of the learning process. The main ingredient in this method is a new metric based on the Minimal Description Length principle for choosing the threshold values for the discretization while learning the Bayesian network structure. This score balances the complexity of the learned discretization and the learned network structure against how well they model the training data. This ensures that the discretization of each variable introduces just enough intervals to capture its interaction with adjacent variables in the network. We formally derive the new metric, study its main properties, and propose an iterative algorithm for learning a discretization policy. Finally, we illustrate its behavior in applications to supervised learning.
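As a rough illustration of the trade-off such a score makes, the sketch below evaluates candidate thresholds for a single continuous attribute against a discrete class variable (the supervised setting mentioned above) with an MDL-style cost: bits to encode the chosen thresholds and the parameters of P(C | discretized X), plus the code length of the class labels under that distribution. It then adds cut points greedily while the score improves. This is not the paper's metric or algorithm; the function names, the 0.5*log2(n)-bits-per-parameter convention, and the greedy search are illustrative assumptions standing in for the iterative procedure the paper derives.

import math
from collections import Counter

def mdl_score(x, c, thresholds):
    # Description length of a candidate discretization (lower is better):
    # bits to say which cut points were chosen, bits for the parameters of
    # P(C | discretized X), and the code length of the class labels under
    # that conditional distribution.
    n = len(x)
    distinct = sorted(set(x))
    k = len(thresholds) + 1                      # number of intervals
    classes = sorted(set(c))

    threshold_cost = math.log2(math.comb(len(distinct) - 1, len(thresholds)))
    param_cost = 0.5 * math.log2(n) * k * (len(classes) - 1)

    def interval(v):                             # index of the interval containing v
        return sum(v > t for t in thresholds)

    joint = Counter((interval(xi), ci) for xi, ci in zip(x, c))
    marg = Counter(interval(xi) for xi in x)
    data_cost = -sum(cnt * math.log2(cnt / marg[iv]) for (iv, _), cnt in joint.items())
    return threshold_cost + param_cost + data_cost

def learn_thresholds(x, c, max_thresholds=3):
    # Greedy stand-in for the iterative idea: repeatedly add the single cut
    # point that lowers the MDL score, and stop when no addition helps.
    candidates = sorted(set(x))[:-1]
    chosen, best = [], mdl_score(x, c, [])
    while len(chosen) < max_thresholds:
        trials = [(mdl_score(x, c, sorted(chosen + [t])), t)
                  for t in candidates if t not in chosen]
        if not trials:
            break
        score, t = min(trials)
        if score >= best:
            break
        best, chosen = score, sorted(chosen + [t])
    return chosen, best

if __name__ == "__main__":
    x = [0.1, 0.3, 0.5, 0.9, 1.2, 1.5, 2.0, 2.2, 2.8, 3.1]
    c = ["a", "a", "a", "a", "b", "b", "b", "b", "b", "b"]
    print(learn_thresholds(x, c))

On this toy data the score favors a single cut point near 0.9, since one extra interval separates the two classes cleanly while additional thresholds only add model cost.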

Bibliographic information

  • Source: Machine Learning | 1996 | pp. 157-165 | 9 pages
  • Conference location: Bari (IT)
  • Author affiliations:

    Stanford University, Dept. of Computer Science, Gates Building 1A, Stanford, CA 94305-9010;

    Rockwell Science Center, 444 High St., Suite 400, Palo Alto, CA 94301;

  • Format: PDF
  • Language: English (eng)
  • CLC classification: Computer applications
