Annual Meeting of the Association for Computational Linguistics

Generalized Tuning of Distributional Word Vectors for Monolingual and Cross-Lingual Lexical Entailment


Abstract

Lexical entailment (LE; also known as the hyponymy-hypernymy or is-a relation) is a core asymmetric lexical relation that supports tasks like taxonomy induction and text generation. In this work, we propose a simple and effective method for fine-tuning distributional word vectors for LE. Our Generalized Lexical ENtailment model (GLEN) is decoupled from the word embedding model and applicable to any distributional vector space. Yet, unlike existing retrofitting models, it captures a general specialization function, allowing for LE-tuning of the entire distributional space and not only the vectors of words seen in lexical constraints. Coupled with a multilingual embedding space, GLEN seamlessly enables cross-lingual LE detection. We demonstrate the effectiveness of GLEN in graded LE and report large improvements (over 20% in accuracy) over the state of the art in cross-lingual LE detection.
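The key idea in the abstract is that LE specialization is learned as a general function over the embedding space rather than as a retrofit of individual word vectors, so it can also be applied to words never seen in the lexical constraints and, via a shared multilingual space, to other languages. The following is a minimal illustrative sketch of that idea, not the paper's exact model: it assumes a small feed-forward specialization network trained with a margin loss over hyponym-hypernym constraint pairs, and the asymmetric LE score used here is likewise an assumption made for the example.

```python
# Sketch: learn a specialization function f that maps any distributional word
# vector into an LE-tuned space. All names, the loss, and the score are
# illustrative assumptions, not the formulation from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SpecializationFunction(nn.Module):
    """Maps a distributional vector x to an LE-specialized vector f(x)."""
    def __init__(self, dim: int, hidden: int = 300):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim)
        )

    def forward(self, x):
        return self.net(x)

def le_score(u, v):
    # Asymmetric score: should be high when u (hyponym) entails v (hypernym).
    # Combines cosine similarity with a vector-norm gap (an assumed heuristic).
    cos = F.cosine_similarity(u, v, dim=-1)
    norm_gap = v.norm(dim=-1) - u.norm(dim=-1)
    return cos + norm_gap

def margin_loss(f, hypo, hyper, neg, margin=1.0):
    # Push entailing pairs above non-entailing pairs by a fixed margin.
    pos = le_score(f(hypo), f(hyper))
    negs = le_score(f(hypo), f(neg))
    return F.relu(margin - pos + negs).mean()

# Toy usage with random vectors standing in for pretrained embeddings.
dim = 50
f = SpecializationFunction(dim)
opt = torch.optim.Adam(f.parameters(), lr=1e-3)
hypo, hyper, neg = torch.randn(32, dim), torch.randn(32, dim), torch.randn(32, dim)
for _ in range(5):
    opt.zero_grad()
    loss = margin_loss(f, hypo, hyper, neg)
    loss.backward()
    opt.step()

# Because f is a function over the whole space, it can be applied to vectors of
# words absent from the training constraints and, given a multilingual input
# space, to word vectors of other languages as well.
```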

