Advances in Science, Technology and Engineering Systems

Retrieving Dialogue History in Deep Neural Networks for Spoken Language Understanding

Abstract

In this paper, we propose a revised version of the semantic decoder for the multi-label classification task in the spoken language understanding (SLU) pilot task of the Dialog State Tracking Challenge 5 (DSTC5). Our model concatenates two deep neural networks, a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN), to detect the semantic meaning of an incoming utterance with the assistance of an algorithm adaptation method. To evaluate the robustness of the proposed models, we conduct comparative experiments on the DSTC5 dialogue datasets. Experimental results show that the proposed models outperform most of the models submitted to DSTC5 in terms of F1-score. Without any manually designed features or delexicalization, our model has proven its effectiveness in tackling the multi-label SLU task using only publicly available pre-trained word vectors. Our model is capable of retrieving the dialogue history, and can thereby build a concise concept structure by exploiting the pragmatic intention as well as the semantic meaning of utterances. The architecture of our semantic decoder has the potential to be applied to a variety of other human-to-human dialogues to achieve SLU.
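
The abstract describes the architecture only at a high level. The PyTorch sketch below is an illustrative reading of it, not the authors' implementation: it assumes a single convolution-and-max-pool branch over the current utterance, a GRU over the token sequence of the dialogue history, and one sigmoid output per label (an algorithm-adaptation treatment of multi-label classification). All layer sizes, hyperparameters, and names such as CnnRnnSluDecoder are invented for illustration.

```python
import torch
import torch.nn as nn

class CnnRnnSluDecoder(nn.Module):
    """Illustrative CNN+RNN semantic decoder for multi-label SLU (not the paper's exact model)."""

    def __init__(self, vocab_size, emb_dim=300, n_filters=100,
                 kernel_size=3, hidden_dim=128, n_labels=80):
        super().__init__()
        # Pre-trained word vectors would be loaded into this embedding table.
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # CNN branch: convolution over the current utterance, max-pooled over time.
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size, padding=1)
        # RNN branch: GRU over the token sequence of the dialogue history.
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        # Concatenated features feed one sigmoid unit per label
        # (algorithm adaptation: each label is scored independently).
        self.out = nn.Linear(n_filters + hidden_dim, n_labels)

    def forward(self, utterance_ids, history_ids):
        # utterance_ids: (batch, utt_len); history_ids: (batch, hist_len)
        utt = self.embed(utterance_ids).transpose(1, 2)          # (B, emb, utt_len)
        cnn_feat = torch.relu(self.conv(utt)).max(dim=2).values  # (B, n_filters)
        hist = self.embed(history_ids)                           # (B, hist_len, emb)
        _, h_n = self.rnn(hist)                                  # (1, B, hidden_dim)
        rnn_feat = h_n.squeeze(0)                                # (B, hidden_dim)
        logits = self.out(torch.cat([cnn_feat, rnn_feat], dim=1))
        return torch.sigmoid(logits)                             # per-label probabilities
```

Training such a sketch would minimise binary cross-entropy between the sigmoid outputs and multi-hot label vectors, e.g. nn.BCELoss()(model(utterance_ids, history_ids), targets).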