Visualizing and Understanding Neural Models in NLP

Abstract

While neural networks have been successfully applied to many NLP tasks, the resulting vector-based models are very difficult to interpret. For example, it is not clear how they achieve compositionality: building sentence meaning from the meanings of words and phrases. In this paper we describe strategies for visualizing compositionality in neural models for NLP, inspired by similar work in computer vision. We first plot unit values to visualize the compositionality of negation, intensification, and concessive clauses, which lets us see well-known markedness asymmetries in negation. We then introduce methods for visualizing a unit's salience, the amount it contributes to the final composed meaning, computed from first-order derivatives. Our general-purpose methods may have wide applications for understanding compositionality and other semantic properties of deep networks.
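As a rough illustration of the derivative-based salience mentioned in the abstract, below is a minimal sketch (not the authors' implementation) of scoring word importance from first-order gradients with respect to word embeddings. The tiny PyTorch model, vocabulary, and example sentence are hypothetical and for illustration only.

# Minimal sketch: first-order (gradient-based) word salience.
# The model, vocabulary, and sentence are illustrative assumptions,
# not the architecture or data used in the paper.
import torch
import torch.nn as nn

class TinySentimentModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=16, hidden_dim=32, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, embedded):
        # Takes pre-computed embeddings so we can ask for their gradient.
        _, (h_n, _) = self.lstm(embedded)
        return self.classifier(h_n[-1])

# Toy vocabulary and sentence (hypothetical).
vocab = {"<pad>": 0, "i": 1, "hate": 2, "the": 3, "movie": 4}
tokens = ["i", "hate", "the", "movie"]
ids = torch.tensor([[vocab[t] for t in tokens]])

model = TinySentimentModel(vocab_size=len(vocab))
model.eval()

# Embed the tokens and track gradients on the embedding vectors themselves.
embedded = model.embedding(ids).detach().requires_grad_(True)
logits = model(embedded)
predicted_class = logits.argmax(dim=-1).item()

# Back-propagate the predicted class score to the word embeddings.
score = logits[0, predicted_class]
score.backward()

# Salience of each word = magnitude of the first-order derivative,
# summarized here as the L2 norm over embedding dimensions.
salience = embedded.grad[0].norm(dim=-1)
for token, s in zip(tokens, salience.tolist()):
    print(f"{token:>8s}  salience = {s:.4f}")

Words whose embeddings receive larger gradient magnitudes are read as contributing more to the predicted score; the norm over embedding dimensions is one simple way to collapse the per-dimension derivatives into a single per-word value.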
