Epidemiologic Perspectives and Innovations

Power for tests of interaction: effect of raising the Type I error rate



Abstract

Background: Power for assessing interactions during data analysis is often poor in epidemiologic studies, because such studies are frequently powered primarily to assess main effects only. In light of this, some investigators raise the Type I error rate, thereby increasing power, when testing interactions. However, this is a poor analysis strategy if the study is chronically under-powered (e.g. a small study) or already adequately powered (e.g. a very large study). To demonstrate this point, this study quantified the gain in power for testing interactions when the Type I error rate is raised, for a variety of study sizes and types of interaction.

Methods: Power was computed for the Wald test for interaction, the likelihood ratio test for interaction, and the Breslow-Day test for heterogeneity of the odds ratio. Ten types of interaction, ranging from sub-additive to super-multiplicative, were investigated in the simple scenario of two binary risk factors. Case-control studies of various sizes were investigated (75 cases and 150 controls, 300 cases and 600 controls, and 1200 cases and 2400 controls).

Results: Raising the Type I error rate from 5% to 20% resulted in a useful power gain (a gain of at least 10%, resulting in power of at least 70%) in only 7 of the 27 interaction type/study size scenarios studied (26%). In the other 20 scenarios, power was either already adequate (n = 8; 30%) or so low that it remained weak (below 70%) even after the Type I error rate was raised to 20% (n = 12; 44%).

Conclusion: Relaxing the Type I error rate did not usefully improve the power of tests of interaction in many of the scenarios studied. In many studies, the small power gains obtained by raising the Type I error rate will be more than offset by the disadvantage of increased "false positives". I recommend that investigators not routinely raise the Type I error rate when assessing tests of interaction.
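The kind of power comparison described above can be approximated by simulation. The following is a minimal sketch, not the authors' code: it assumes a logistic model with two binary exposures, uses illustrative exposure prevalences and odds ratios (not values taken from the paper), and estimates the power of the Wald test for the interaction term at alpha = 0.05 and alpha = 0.20 for one of the study sizes mentioned (300 cases and 600 controls).

```python
# Simulation sketch: power of the Wald test for interaction in a case-control
# study with two binary risk factors, at two Type I error rates.
# All prevalences and odds ratios below are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

def simulate_power(n_cases=300, n_controls=600, n_sims=1000,
                   p1=0.3, p2=0.3,                 # exposure prevalences (assumed)
                   or1=1.5, or2=1.5, or_int=2.0,   # main-effect and interaction ORs (assumed)
                   baseline_risk=0.05,
                   alphas=(0.05, 0.20)):
    """Estimate power of the Wald test for the interaction term at each alpha."""
    b0 = np.log(baseline_risk / (1 - baseline_risk))
    b1, b2, b3 = np.log(or1), np.log(or2), np.log(or_int)
    rejections = {a: 0 for a in alphas}
    n_done = 0

    for _ in range(n_sims):
        # Simulate a large source population, then sample cases and controls from it.
        n_pop = 200_000
        x1 = rng.binomial(1, p1, n_pop)
        x2 = rng.binomial(1, p2, n_pop)
        logit = b0 + b1 * x1 + b2 * x2 + b3 * x1 * x2
        y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

        case_idx = rng.choice(np.flatnonzero(y == 1), n_cases, replace=False)
        ctrl_idx = rng.choice(np.flatnonzero(y == 0), n_controls, replace=False)
        idx = np.concatenate([case_idx, ctrl_idx])

        X = sm.add_constant(np.column_stack([x1[idx], x2[idx], x1[idx] * x2[idx]]))
        out = np.concatenate([np.ones(n_cases), np.zeros(n_controls)])

        try:
            fit = sm.Logit(out, X).fit(disp=0)
        except Exception:                # skip rare non-converging replicates
            continue
        n_done += 1
        p_int = fit.pvalues[-1]          # Wald p-value for the interaction term
        for a in alphas:
            rejections[a] += p_int < a

    return {a: rejections[a] / n_done for a in alphas}

if __name__ == "__main__":
    print(simulate_power())              # e.g. {0.05: ..., 0.20: ...}
```

Comparing the two estimated powers shows directly how much (or how little) is gained by moving from a 5% to a 20% Type I error rate for a given effect size and study size, which is the trade-off the abstract evaluates across 27 scenarios.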


