Annual Meeting of the Association for Computational Linguistics › Word-order biases in deep-agent emergent communication

Word-order biases in deep-agent emergent communication

Abstract

Sequence-processing neural networks have led to remarkable progress on many NLP tasks. As a consequence, there has been increasing interest in understanding to what extent they process language as humans do. We aim here to uncover which biases such models display with respect to 'natural' word-order constraints. We train models to communicate about paths in a simple gridworld, using miniature languages that reflect or violate various natural language trends, such as the tendency to avoid redundancy or to minimize long-distance dependencies. We study how the controlled characteristics of our miniature languages affect individual learning and their stability across multiple network generations. The results paint a mixed picture. On the one hand, neural networks show a strong tendency to avoid long-distance dependencies. On the other hand, there is no clear preference for the efficient, non-redundant encoding of information that is widely attested in natural language. We thus suggest inoculating a notion of 'effort' into neural networks, as a possible way to make their linguistic behavior more humanlike.
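The contrast between redundant and non-redundant encodings of a gridworld path can be pictured with a toy sketch. The grammars below are illustrative assumptions for exposition only, not the paper's actual miniature languages, and all names (`encode_redundant`, `encode_compact`, `sample_path`) are hypothetical.

```python
import random

# A path is a list of (direction, repeat-count) segments, e.g. "go left 3 times, then up once".
MOVES = ["left", "right", "up", "down"]

def sample_path(n_segments=3, max_len=4, rng=random):
    """Sample a random path as (direction, count) segments."""
    return [(rng.choice(MOVES), rng.randint(1, max_len)) for _ in range(n_segments)]

def encode_redundant(path):
    """Redundant encoding: repeat the direction word once per step.
    ("left", 3) -> "left left left"
    """
    return " ".join(d for d, n in path for _ in range(n))

def encode_compact(path):
    """Non-redundant counter encoding: direction word plus a count marker.
    ("left", 3) -> "left 3"
    """
    return " ".join(f"{d} {n}" for d, n in path)

path = [("left", 3), ("up", 1), ("right", 2)]
print(encode_redundant(path))  # left left left up right right
print(encode_compact(path))    # left 3 up 1 right 2
```

The compact variant mirrors the efficiency pressure attested in natural language; the abstract's finding is that the networks showed no clear preference for it over the redundant variant.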
