Seventh Joint Conference on Lexical and Computational Semantics

Invited Talk: Linguists for Deep Learning; or How I Learned to Stop Worrying and Love Neural Networks

Abstract

The rise of deep learning (DL) might seem initially to mark a low point for linguists hoping to learn from, and contribute to, the field of statistical NLP. In building DL systems, the decisive factors tend to be data, computational resources, and optimization techniques, with domain expertise in a supporting role. Nonetheless, at least for semantics and pragmatics, I argue that DL models are potentially the best computational implementations of linguists' ideas and theories that we've ever seen. At the lexical level, symbolic representations are inevitably incomplete, whereas learned distributed representations have the potential to capture the dense interconnections that exist between words, and DL methods allow us to infuse these representations with information from contexts of use and from structured lexical resources. For semantic composition, previous approaches tended to represent phrases and sentences in partial, idiosyncratic ways; DL models support comprehensive representations and might yield insights into flexible modes of semantic composition that would be unexpected from the point of view of traditional logical theories. And when it comes to pragmatics, DL is arguably what the field has been looking for all along: a flexible set of tools for representing language and context together, and for capturing the nuanced, fallible ways in which language users reason about each other's intentions. Thus, while linguists might find it dispiriting that the day-to-day work of DL involves mainly fund-raising to support hyperparameter tuning on expensive machines, I argue that it is worth the tedium for the insights into language that this can (unexpectedly) deliver.