Clinical Neurophysiology

Inter-rater reliability of sleep cyclic alternating pattern (CAP) scoring and validation of a new computer-assisted CAP scoring method.


Abstract

OBJECTIVE: To assess the inter-rater reliability of visual scoring of the Cyclic Alternating Pattern (CAP) between scorers from different qualified sleep research groups, to evaluate the performance of a new tool for the computer-assisted detection of CAP, and to compare its output with the data from the different scorers.

METHODS: CAP was scored in 11 normal sleep recordings by four raters from three sleep laboratories. CAP was also scored in the same recordings by means of a new computer-assisted method implemented in the Hypnolab 1.2 (SWS Soft, Italy) software. Data analysis was performed according to the following steps: (a) the inter-rater reliability of CAP parameters between the four scorers was assessed by means of the Kendall W coefficient of concordance; (b) the agreement between the results of the visual and computer-assisted analyses of CAP parameters was also assessed by means of the Kendall W coefficient; (c) a 'consensus' scoring was obtained for each recording from the four scorings provided by the different raters, based on the score of the majority of scorers; (d) the degree of agreement between each scorer and the consensus score, and between the computer-assisted analysis and the consensus score, was quantified by means of Cohen's k coefficient; (e) the differences between the numbers of false positive and false negative detections obtained in the visual and in the computer-assisted analysis were evaluated by means of the non-parametric Wilcoxon test.

RESULTS: The inter-rater reliability of CAP parameters, quantified by the Kendall W coefficient of concordance between the four scorers, was high for all the parameters considered, with values above 0.9 for total CAP time, CAP time in sleep stage 2 and percentage of A phases in sequence; CAP rate also showed a high value (0.829).
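The concordance statistic used in steps (a) and (b) can be sketched as follows. This is an illustrative, tie-free implementation of Kendall's W; the scores below are hypothetical and are not data from the study.

```python
import numpy as np

def kendalls_w(ratings):
    """Kendall's W coefficient of concordance for m raters x n subjects.

    `ratings` is an (m, n) array of scores; each rater's scores are
    converted to ranks across the n subjects. W ranges from 0 (no
    agreement) to 1 (perfect agreement). Ties are not corrected for
    in this sketch.
    """
    ratings = np.asarray(ratings, dtype=float)
    m, n = ratings.shape
    # Rank each rater's scores across the n subjects (1 = lowest).
    ranks = np.argsort(np.argsort(ratings, axis=1), axis=1) + 1
    # Sum of ranks per subject, then squared deviations from the mean sum.
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    # W = 12 S / (m^2 (n^3 - n))
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical example: 4 raters scoring CAP rate (%) in 5 recordings.
scores = [
    [40.1, 35.2, 50.3, 28.4, 44.0],
    [41.0, 34.8, 49.0, 30.1, 45.2],
    [39.5, 36.0, 51.2, 27.9, 43.1],
    [42.3, 33.9, 48.7, 29.5, 46.0],
]
print(kendalls_w(scores))  # all raters rank the recordings identically → 1.0
```

Here every rater orders the five recordings the same way, so W = 1 even though the raw scores differ; W responds to rank agreement, not to absolute values.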
The most important global parameters of CAP, including total CAP rate and CAP time, scored by the computer-assisted analysis showed a significant concordance with those obtained by the raters. The agreement between the computer-assisted analysis and the consensus scoring for the assignment of the CAP A phase subtype was not distinguishable from that expected from a human scorer. However, the computer-assisted analysis produced significantly more false positives and false negatives than the visual scoring of CAP.

CONCLUSIONS: CAP scoring shows good inter-rater reliability; its results can therefore be compared between different laboratories and might also be pooled together. However, caution should always be taken because of the variability that can be expected in classical sleep staging. The computer-assisted detection of CAP can be used, with some supervision and correction, in large studies when only general parameters such as CAP rate are considered; more editing is necessary for the correct use of the other results.

SIGNIFICANCE: This article describes the first attempt in the literature to evaluate in a detailed way the inter-rater reliability in scoring CAP parameters of normal sleep and the performance of a human-supervised computerized automatic detection system.
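Steps (c) and (d) above, majority-vote consensus followed by per-scorer agreement, can be sketched as below. The A-phase subtype labels are hypothetical, and ties in the majority vote are broken arbitrarily in this sketch rather than by any rule from the study.

```python
from collections import Counter

def majority_consensus(scorings):
    """Per-event consensus label: the label chosen by most scorers.

    `scorings` is a list of equal-length label sequences, one per scorer.
    Ties are broken arbitrarily by Counter.most_common in this sketch.
    """
    return [Counter(labels).most_common(1)[0][0] for labels in zip(*scorings)]

def cohens_kappa(a, b):
    """Cohen's kappa between two equal-length label sequences."""
    n = len(a)
    labels = set(a) | set(b)
    # Observed agreement: fraction of events labelled identically.
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement under independent marginal label frequencies.
    p_exp = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical per-event A-phase subtype labels from 4 scorers.
s1 = ["A1", "A1", "A2", "A3", "A1", "A2"]
s2 = ["A1", "A2", "A2", "A3", "A1", "A2"]
s3 = ["A1", "A1", "A2", "A3", "A2", "A2"]
s4 = ["A1", "A1", "A3", "A3", "A1", "A2"]
consensus = majority_consensus([s1, s2, s3, s4])
print(consensus)                              # ['A1', 'A1', 'A2', 'A3', 'A1', 'A2']
print(round(cohens_kappa(s2, consensus), 3))  # → 0.739
```

Kappa discounts the agreement expected by chance from the label frequencies alone, which is why it is preferred over raw percent agreement for comparing each scorer (or the automatic detector) against the consensus.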
