Journal of Applied Measurement

Quantifying Item Invariance for the Selection of the Least Biased Assessment



Abstract

An important aspect of educational and psychological measurement and evaluation of individuals is the selection of scales with appropriate evidence of reliability and validity for inferences and uses of the scores for the population of interest. One aspect of validity is the degree to which a scale fairly assesses the construct(s) of interest for members of different subgroups within the population. Typically, this issue is addressed statistically through assessment of differential item functioning (DIF) of individual items, or differential bundle functioning (DBF) of sets of items. When selecting an assessment to use for a given application (e.g., measuring intelligence), or deciding which form of an assessment to use in a given instance, researchers need to consider the extent to which the scales work for all members of the population. Little research has examined methods for comparing the amount or magnitude of DIF/DBF present in two assessments when deciding which assessment to use. The current simulation study examines six different statistics for this purpose. Results show that a method based on the random effects item response theory model may be optimal for instrument comparisons, particularly when the assessments being compared are not of the same length.
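To make the idea of quantifying item-level DIF concrete, the sketch below implements the Mantel-Haenszel procedure, one widely used DIF statistic, together with a naive per-form aggregate for comparing forms of different lengths. This is an illustration only: the abstract does not enumerate the six statistics studied, and the `mean_abs_dif` aggregate here is a hypothetical stand-in, not the random effects IRT method the study favors.

```python
import math

def mh_common_odds_ratio(strata):
    """Mantel-Haenszel common odds ratio for one item across score strata.

    Each stratum is a 2x2 table (A, B, C, D):
      A = reference group correct, B = reference group incorrect,
      C = focal group correct,     D = focal group incorrect.
    A value of 1.0 indicates no DIF; values away from 1.0 indicate
    an advantage for one group after conditioning on total score.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

def mh_d_dif(strata):
    """ETS delta-scale index: MH D-DIF = -2.35 * ln(alpha_MH).

    Zero means no DIF; larger absolute values mean more DIF.
    """
    return -2.35 * math.log(mh_common_odds_ratio(strata))

def mean_abs_dif(items):
    """Hypothetical form-level aggregate: mean |MH D-DIF| over items.

    Averaging (rather than summing) puts forms of different lengths
    on a comparable scale, echoing the length issue noted above.
    """
    return sum(abs(mh_d_dif(s)) for s in items) / len(items)

# Compare two toy forms, each given as a list of items,
# each item a list of per-stratum (A, B, C, D) tables.
form_x = [[(40, 10, 20, 30)], [(25, 25, 25, 25)]]  # one DIF item, one clean
form_y = [[(25, 25, 25, 25)], [(25, 25, 25, 25)]]  # no DIF
print(mean_abs_dif(form_x) > mean_abs_dif(form_y))  # form_y shows less DIF
```

With real data the strata would come from cross-tabulating responses by total score, and a selection decision would weigh such an aggregate alongside the other evidence of reliability and validity the abstract describes.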
