Information Processing & Management

User simulations for evaluating answers to question series



Abstract

Recently, question series have become one focus of research in question answering. These series consist of individual factoid, list, and "other" questions organized around a central topic, and represent abstractions of user-system dialogs. Existing evaluation methodologies have yet to catch up with this richer task model, as they fail to take into account contextual dependencies and different user behaviors. This paper presents a novel simulation-based methodology for evaluating answers to question series that addresses some of these shortcomings. Using this methodology, we examine two different behavior models: a "QA-styled" user and an "IR-styled" user. Results suggest that an off-the-shelf document retrieval system is competitive with state-of-the-art QA systems on this task. Advantages and limitations of evaluations based on user simulations are also discussed.
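To make the methodology concrete, the sketch below shows what a simulation-based evaluation loop for question series might look like in Python. Everything here is an illustrative assumption rather than the paper's implementation: the abstract does not specify how the two behavior models are operationalized, so the QuestionSeries and run_simulation structure, the credit rules, and the qa_styled_user and ir_styled_user functions are hypothetical stand-ins for the general idea of posing a topic's questions in sequence and letting a simulated user decide how to consume each response.

```python
# Hypothetical sketch of simulation-based evaluation for question series.
# All names, behavior rules, and scoring below are assumptions made for
# illustration; the paper's actual user models and metrics may differ.
from dataclasses import dataclass, field


@dataclass
class Question:
    text: str
    qtype: str  # "factoid", "list", or "other", as in the abstract


@dataclass
class QuestionSeries:
    topic: str  # the central topic shared by every question in the series
    questions: list[Question] = field(default_factory=list)


def run_simulation(series, system, user_model):
    """Pose each question in order; the simulated user decides how much
    of the system's response to consume and what credit to assign."""
    seen = set()        # facts the user has encountered so far (context)
    total_credit = 0.0
    for q in series.questions:
        # Contextual dependency: the system is given the topic and the
        # facts already seen, so later questions build on earlier ones.
        response = system(series.topic, q, seen)
        credit, new_facts = user_model(q, response, seen)
        seen |= new_facts
        total_credit += credit
    return total_credit / max(len(series.questions), 1)


def qa_styled_user(question, response, seen):
    """'QA-styled' user: reads only the top short answer and credits it
    if it is non-empty and not already known (toy correctness rule)."""
    top = response[0] if response else ""
    ok = top != "" and top not in seen
    return (1.0 if ok else 0.0), ({top} if ok else set())


def ir_styled_user(question, response, seen, scan_depth=3):
    """'IR-styled' user: scans the top-ranked documents and harvests any
    new facts, trading more reading effort for broader coverage."""
    new_facts = {doc for doc in response[:scan_depth] if doc not in seen}
    return (1.0 if new_facts else 0.0), new_facts


if __name__ == "__main__":
    # Toy run with a stand-in "retrieval system" that returns one string.
    series = QuestionSeries(
        topic="International Space Station",
        questions=[Question("When was it launched?", "factoid"),
                   Question("Which countries participate?", "list")],
    )
    system = lambda topic, q, seen: [f"text about {topic}: {q.text}"]
    print(run_simulation(series, system, qa_styled_user))
    print(run_simulation(series, system, ir_styled_user))
```

The useful property of this framing is that the answering system and the user model vary independently: holding the simulated user fixed while swapping an off-the-shelf document retrieval system in for a full QA system is the kind of comparison the abstract reports.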

