Journal: Psychological Methods

Evaluating Meta-Analytic Methods to Detect Selective Reporting in the Presence of Dependent Effect Sizes


Abstract

Selective reporting of results based on their statistical significance threatens the validity of meta-analytic findings. A variety of techniques for detecting selective reporting, publication bias, or small-study effects are available and are routinely used in research syntheses. Most such techniques are univariate, in that they assume that each study contributes a single, independent effect size estimate to the meta-analysis. In practice, however, studies often contribute multiple, statistically dependent effect size estimates, such as for multiple measures of a common outcome construct. Many methods are available for meta-analyzing dependent effect sizes, but methods for investigating selective reporting while also handling effect size dependencies require further investigation. Using Monte Carlo simulations, we evaluate three available univariate tests for small-study effects or selective reporting, including the trim and fill test, Egger's regression test, and a likelihood ratio test from a three-parameter selection model (3PSM), when dependence is ignored or handled using ad hoc techniques. We also examine two variants of Egger's regression test that incorporate robust variance estimation (RVE) or multilevel meta-analysis (MLMA) to handle dependence. Simulation results demonstrate that ignoring dependence inflates Type I error rates for all univariate tests. Variants of Egger's regression maintain Type I error rates when dependent effect sizes are sampled or handled using RVE or MLMA. The 3PSM likelihood ratio test does not fully control Type I error rates. With the exception of the 3PSM, all methods have limited power to detect selection bias except under strong selection for statistically significant effects.
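As a concrete illustration of one of the univariate tests evaluated in the abstract, the sketch below implements the classic Egger's regression test in Python: the standardized effect z_i = y_i / se_i is regressed on the precision 1 / se_i, and funnel-plot asymmetry is tested via the regression intercept. This is a minimal sketch of the standard univariate test only, assuming one independent effect size per study; the function name `eggers_test` and the toy data are illustrative, and this is not the RVE or MLMA variant the paper examines.

```python
import numpy as np
from scipy import stats

def eggers_test(effects, ses):
    """Classic (univariate) Egger's regression test.

    Regresses the standardized effect z_i = y_i / se_i on the
    precision 1 / se_i. The intercept estimates funnel-plot
    asymmetry; a two-sided t-test of intercept = 0 is the test
    for small-study effects. Assumes each study contributes a
    single, independent effect size estimate.
    """
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    z = effects / ses
    precision = 1.0 / ses
    res = stats.linregress(precision, z)
    t_stat = res.intercept / res.intercept_stderr
    df = len(effects) - 2
    p_value = 2 * stats.t.sf(abs(t_stat), df)
    return res.intercept, p_value

# Toy example: five studies' effect size estimates and standard errors.
effects = [0.52, 0.31, 0.18, 0.44, 0.10]
ses = [0.25, 0.18, 0.10, 0.22, 0.08]
intercept, p = eggers_test(effects, ses)
print(f"Egger intercept = {intercept:.3f}, p = {p:.3f}")
```

When studies contribute multiple dependent effect sizes, applying this test directly inflates Type I error, which is the problem the RVE- and MLMA-based variants in the paper are designed to address.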
