Conference on Empirical Methods in Natural Language Processing

A Comparative Study on Regularization Strategies for Embedding-based Neural Networks

Abstract

This paper aims to compare different regularization strategies to address a common phenomenon, severe overfitting, in embedding-based neural networks for NLP. We chose two widely studied neural models and tasks as our testbed. We tried several frequently applied or newly proposed regularization strategies, including penalizing weights (embeddings excluded), penalizing embeddings, re-embedding words, and dropout. We also emphasized incremental hyperparameter tuning and the combination of different regularizations. The results provide a picture of tuning hyperparameters for neural NLP models.
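The abstract names the compared strategies but the page carries no code. Below is a minimal PyTorch sketch, not the paper's actual setup, showing how three of the strategies can be combined as separately tunable terms: an L2 penalty on connection weights with embeddings excluded, a separate L2 penalty on the embedding matrix, and dropout on hidden units. All model sizes, names, and coefficient values are illustrative assumptions; re-embedding words is omitted.

import torch
import torch.nn as nn

class EmbeddingClassifier(nn.Module):
    """A toy embedding-based classifier standing in for the paper's testbed models."""
    def __init__(self, vocab_size=10000, embed_dim=50, hidden_dim=100, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.hidden = nn.Linear(embed_dim, hidden_dim)
        self.dropout = nn.Dropout(p=0.5)   # dropout applied to hidden units
        self.out = nn.Linear(hidden_dim, n_classes)

    def forward(self, token_ids):
        h = self.embed(token_ids).mean(dim=1)   # average embeddings -> sentence vector
        h = torch.tanh(self.hidden(h))
        return self.out(self.dropout(h))

def l2_penalties(model, lambda_w=1e-4, lambda_e=1e-5):
    # Two separate L2 terms so each coefficient can be tuned independently:
    # lambda_w penalizes connection weights (embeddings excluded),
    # lambda_e penalizes the embedding matrix itself.
    weight_term = sum(p.pow(2).sum()
                      for name, p in model.named_parameters()
                      if not name.startswith("embed"))
    embed_term = model.embed.weight.pow(2).sum()
    return lambda_w * weight_term + lambda_e * embed_term

model = EmbeddingClassifier()
loss_fn = nn.CrossEntropyLoss()
token_ids = torch.randint(0, 10000, (32, 20))   # fake batch: 32 sentences, 20 tokens
labels = torch.randint(0, 2, (32,))
loss = loss_fn(model(token_ids), labels) + l2_penalties(model)
loss.backward()

Keeping the penalties as additive terms with their own coefficients is one plausible reading of the abstract's incremental tuning: each coefficient (and the dropout rate) can be adjusted while the others are held fixed, and strategies can be combined by leaving several terms active at once.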
