Annual Meeting of the Association for Computational Linguistics

DIALOGPT: Large-Scale Generative Pre-training for Conversational Response Generation



Abstract

We present a large, tunable neural conversational response generation model, DialoGPT (dialogue generative pre-trained transformer). Trained on 147M conversation-like exchanges extracted from Reddit comment chains spanning 2005 through 2017, DialoGPT extends the Hugging Face PyTorch transformer to attain performance close to that of humans in terms of both automatic and human evaluation in single-turn dialogue settings. We show that conversational systems that leverage DialoGPT generate more relevant, contentful, and context-consistent responses than strong baseline systems. The pre-trained model and training pipeline are publicly released to facilitate research into neural response generation and the development of more intelligent open-domain dialogue systems.
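Since the released checkpoints are distributed through the Hugging Face model hub, a minimal sketch of single-turn response generation with the transformers library is shown below. The checkpoint name microsoft/DialoGPT-medium and the example prompt are illustrative assumptions, not drawn from the abstract itself; small and large variants follow the same pattern.

```python
# Minimal single-turn generation sketch, assuming the Hugging Face
# transformers library and the microsoft/DialoGPT-medium checkpoint
# from the model hub (illustrative; not specified in the abstract).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# DialoGPT treats dialogue turns as text segments separated by the
# end-of-text token, so the user utterance ends with tokenizer.eos_token.
prompt = "Does money buy happiness?"  # hypothetical example prompt
input_ids = tokenizer.encode(prompt + tokenizer.eos_token, return_tensors="pt")

# Decode up to 200 total tokens; GPT-2-style tokenizers define no
# dedicated padding token, so EOS is reused for padding.
output_ids = model.generate(
    input_ids,
    max_length=200,
    pad_token_id=tokenizer.eos_token_id,
)

# Strip the prompt tokens and decode only the generated response.
response = tokenizer.decode(
    output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True
)
print(response)
```

For multi-turn use, the same pattern applies with previous turns concatenated (each terminated by the EOS token) before the current utterance.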