
Second Tier for Decision Trees

Abstract

A learner's performance does not rely only on the representation language and on the algorithm that induces a hypothesis in this language. The way the induced hypothesis is interpreted for the needs of concept recognition is also of interest. A flexible methodology for hypothesis interpretation is offered by the philosophy of a learner's second tier, as originally suggested by Michalski (1987). Here, the potential of this general approach is demonstrated in the framework of numeric decision trees. The second tier improves classification performance, increases the ability to handle context, and facilitates transfer of a hypothesis between different contexts.
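
To make the idea concrete, the sketch below illustrates one possible reading of a "second tier" for a numeric decision tree: the first tier classifies by following hard threshold tests, while the second tier re-interprets the same tree with a soft (sigmoid) match, so that examples near a decision boundary receive graded membership in several classes. The tree layout, the sigmoid matching function, and all identifiers (Node, crisp_classify, soft_membership) are illustrative assumptions, not the construction described in the paper.

    # A minimal, hypothetical sketch: "second-tier" (flexible) interpretation of a
    # numeric decision tree. All names and the sigmoid soft-threshold are
    # illustrative assumptions, not Kubat's (1996) actual construction.
    import math
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Node:
        feature: Optional[int] = None   # index of the numeric attribute tested
        threshold: float = 0.0          # go left if x[feature] <= threshold
        left: Optional["Node"] = None
        right: Optional["Node"] = None
        label: Optional[int] = None     # class label stored at a leaf

    def crisp_classify(node: Node, x) -> int:
        """First tier: interpret the tree literally with hard threshold tests."""
        while node.label is None:
            node = node.left if x[node.feature] <= node.threshold else node.right
        return node.label

    def soft_membership(node: Node, x, steepness: float = 4.0) -> dict:
        """Second tier (illustrative): replace each hard test with a sigmoid so
        that examples near a boundary contribute partial membership to both
        subtrees; returns a class -> degree-of-match mapping."""
        if node.label is not None:
            return {node.label: 1.0}
        # degree to which x satisfies "x[feature] <= threshold"
        d = 1.0 / (1.0 + math.exp(steepness * (x[node.feature] - node.threshold)))
        out = {}
        for weight, child in ((d, node.left), (1.0 - d, node.right)):
            for label, m in soft_membership(child, x, steepness).items():
                out[label] = out.get(label, 0.0) + weight * m
        return out

    if __name__ == "__main__":
        # Toy tree: class 1 if attribute 0 <= 2.0 and attribute 1 <= 5.0, else class 0.
        tree = Node(feature=0, threshold=2.0,
                    left=Node(feature=1, threshold=5.0,
                              left=Node(label=1), right=Node(label=0)),
                    right=Node(label=0))
        x = [2.1, 4.0]  # just past the first threshold
        print("crisp:", crisp_classify(tree, x))      # hard interpretation -> 0
        print("flexible:", soft_membership(tree, x))  # graded match to both classes

Running the sketch shows the contrast: the crisp interpretation assigns the example to a single class, while the flexible interpretation returns a graded match for each class because the example lies close to the first split's threshold.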

Bibliographic Details

  • Source
    Machine Learning | 1996 | pp. 293-301 | 9 pages
  • Venue: Bari, Italy
  • Author

    Miroslav Kubat

  • Affiliation

    Department of Computer Science, University of Ottawa, 150 Louis Pasteur, Ottawa, Ontario, K1N 6N5, Canada

  • Format: PDF
  • Language: English
  • Classification: Computer applications
