Computational Brain & Behavior

Limitations of Bayesian Leave-One-Out Cross-Validation for Model Selection



Abstract

Cross-validation (CV) is increasingly popular as a generic method to adjudicate between mathematical models of cognition and behavior. In order to measure model generalizability, CV quantifies out-of-sample predictive performance, and the CV preference goes to the model that predicted the out-of-sample data best. The advantages of CV include theoretical simplicity and practical feasibility. Despite its prominence, however, the limitations of CV are often underappreciated. Here, we demonstrate the limitations of a particular form of CV, Bayesian leave-one-out cross-validation (LOO), with three concrete examples. In each example, a data set of infinite size is perfectly in line with the predictions of a simple model (i.e., a general law or invariance). Nevertheless, LOO shows bounded and relatively modest support for the simple model. We conclude that CV is not a panacea for model selection.
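The bounded support the abstract describes can be illustrated with a minimal analytic sketch. Assume (hypothetically, for illustration; the specific setup below is not taken verbatim from the paper) a "general law" model M0 asserting that every Bernoulli trial succeeds (theta = 1), against an alternative M1 with a uniform Beta(1, 1) prior on theta. If all n observed trials are successes, the data agree perfectly with M0, yet the LOO advantage of M0 over M1 stays bounded:

```python
import math

def elpd_loo_diff(n: int) -> float:
    """elpd_loo(M0) - elpd_loo(M1) for n all-success Bernoulli trials.

    M0 (theta = 1): each leave-one-out predictive probability is 1,
      so its log score sums to 0.
    M1 (theta ~ Beta(1, 1)): leaving out one success, the posterior is
      Beta(n, 1), so the predictive probability of the held-out success
      is n / (n + 1).
    """
    return -n * math.log(n / (n + 1))

for n in (10, 100, 10_000, 1_000_000):
    print(n, elpd_loo_diff(n))
# The difference converges to 1 as n grows, so the pseudo-Bayes factor
# exp(elpd difference) never exceeds e ~= 2.718: only modest support for
# the simple model, no matter how much data perfectly obey the law.
```

The limit follows from n * log((n + 1) / n) -> 1, which is one way to see the "bounded and relatively modest support" claim: the evidence for the general law does not accumulate with sample size under LOO.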
