
Data Sharing in Peer-Assessment Systems for Education


Abstract

Fifty years of research have found great potential in peer assessment as a pedagogical approach. With peer assessment, not only do students receive more copious feedback; they also learn to become assessors. In recent decades, more educational peer assessment has been facilitated by online systems. These systems are designed to suit different class settings and student groups, so their designs differ from one another: rating-based or ranking-based, reviews assigned randomly or within fixed groups, anonymous or onymous review, and so on. Although there are many such systems, each with a large number of users, there is a dearth of comparisons between the different designs. This is mainly because the data generated by peer-assessment systems is stored and analyzed separately; there is no data-sharing standard in this research community.

In this work, we focus on data sharing between educational peer-assessment systems. We designed a Peer-Review Markup Language (PRML) as a generic data schema for modeling and sharing the data generated by different educational peer-assessment systems. Based on PRML, a common data warehouse can be built; each system can ETL (Extract, Transform, and Load) its data, contribute it to the warehouse, and share it with other researchers.

Making use of data shared by different peer-assessment systems can help researchers answer more general research questions, e.g., are reviewers more reliable in ranking-based or rating-based peer assessment? To answer this question, we designed algorithms that evaluate assessors' reliability by comparing their ratings or rankings against the global ranks of the artifacts they reviewed. These algorithms are suitable for data from both rating-based and ranking-based peer-assessment systems. Our experiments used more than 15,000 peer assessments from multiple peer-assessment systems.
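The abstract does not give the exact reliability algorithm, so the following is only a minimal sketch of the general idea: score an assessor by the rank correlation (here Kendall's tau, one plausible choice) between their ordering of the artifacts they reviewed and those artifacts' global ranks, converting ratings to ranks first so that rating-based and ranking-based data share one measure.

```python
from itertools import combinations

def kendall_tau(assessor_ranks, global_ranks):
    """Kendall rank correlation between an assessor's ordering of the
    artifacts they reviewed and the global ranks of those artifacts.
    Both arguments map artifact id -> rank (1 = best)."""
    items = list(assessor_ranks)
    concordant = discordant = 0
    for a, b in combinations(items, 2):
        s1 = assessor_ranks[a] - assessor_ranks[b]
        s2 = global_ranks[a] - global_ranks[b]
        if s1 * s2 > 0:
            concordant += 1
        elif s1 * s2 < 0:
            discordant += 1
    n_pairs = len(items) * (len(items) - 1) // 2
    return (concordant - discordant) / n_pairs if n_pairs else 0.0

def ratings_to_ranks(scores):
    """Turn a map of artifact id -> score into artifact id -> rank
    (1 = highest score), so ratings can be compared like rankings."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {artifact: i + 1 for i, artifact in enumerate(ordered)}
```

An assessor whose ordering matches the global ranks exactly gets a reliability of 1.0, a fully reversed ordering gets -1.0; the dissertation's actual reliability measure may differ.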
We found that assessors in ranking-based peer assessment are more reliable than assessors in rating-based peer assessment. Further analysis also showed that assessors in ranking-based assessment tend to assess the more differentiable artifacts correctly, whereas there is no such pattern for rating-based assessors.

Another research question that this shared data can answer is: how does collusion harm the peer-review process? Ideally, if only a small number of students try to "game" the peer-assessment process, the overall validity is not much affected. However, one researcher found from his experience that more students became colluders over the course of a semester: they gave each other high scores or, even worse, gave high scores to every artifact they reviewed. In the worst case, a large number of colluders can make the honest reviewers the outliers, which harms the validity of peer assessment. We defined two different patterns of possible collusion and applied graph-mining algorithms to detect colluders in the data shared with us.
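The two collusion patterns are not spelled out in the abstract; the sketch below assumes they correspond to the two behaviors mentioned (reciprocal high scores, and uniformly high scores to everyone) and uses illustrative thresholds (`high`, `min_reviews`) that are not from the source. Reviews form a directed, score-weighted graph of reviewer-to-author edges; the first check looks for reciprocal high-score edges, the second for reviewers whose every outgoing edge is high.

```python
def find_mutual_colluders(reviews, high=90):
    """reviews: iterable of (reviewer, author, score) edges.
    Return the set of reviewer pairs who gave each other
    scores >= `high` (the reciprocal-high-score pattern)."""
    score = {(r, a): s for r, a, s in reviews}
    pairs = set()
    for (r, a), s in score.items():
        back = score.get((a, r))
        if r != a and s >= high and back is not None and back >= high:
            pairs.add(frozenset((r, a)))
    return pairs

def find_uniform_high_raters(reviews, high=90, min_reviews=3):
    """Return reviewers who gave every artifact they reviewed a score
    >= `high` (the high-scores-to-everyone pattern), requiring at
    least `min_reviews` reviews to avoid flagging small samples."""
    given = {}
    for r, _, s in reviews:
        given.setdefault(r, []).append(s)
    return {r for r, scores in given.items()
            if len(scores) >= min_reviews and min(scores) >= high}
```

The dissertation's graph-mining algorithms are presumably more sophisticated; this only illustrates how the two suspicious patterns can be read off the review graph.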

Bibliographic details

  • Author: Song, Yang
  • Affiliation: North Carolina State University
  • Degree grantor: North Carolina State University
  • Subjects: Computer science; Educational technology; Curriculum development
  • Degree: Ph.D.
  • Year: 2017
  • Pagination: 101 p.
  • Total pages: 101
  • Format: PDF
  • Language: eng
