IEEE Transactions on Neural Networks and Learning Systems

Learning Understandable Neural Networks With Nonnegative Weight Constraints



Abstract

People can understand complex structures if they relate them to more isolated yet understandable concepts. Despite this fact, popular pattern recognition tools, such as decision tree or production rule learners, produce only flat models which do not build intermediate data representations. Neural networks, on the other hand, typically learn hierarchical but opaque models. We show how constraining neurons' weights to be nonnegative improves the interpretability of a network's operation. We analyze the proposed method on large data sets: the MNIST digit recognition data and the Reuters text categorization data. The patterns learned by traditional and constrained networks are contrasted with those learned with principal component analysis and nonnegative matrix factorization.
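The constraint described in the abstract can be sketched with projected gradient descent: after each weight update, negative entries are clipped back to zero, so every learned feature is an additive (and hence more readable) combination of its inputs. The network, data, and hyperparameters below are hypothetical illustrations, not the authors' experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: predict whether the feature sum exceeds a threshold.
X = rng.random((200, 8))
y = (X.sum(axis=1) > 4.0).astype(float)

# One hidden layer; weights start nonnegative (assumed initialization).
W1 = rng.random((8, 4)) * 0.1
b1 = np.zeros(4)
W2 = rng.random((4, 1)) * 0.1
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(500):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2).ravel()

    # Backward pass for the logistic loss.
    d_out = (p - y)[:, None] / len(X)
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0)

    # Gradient step, then projection onto the nonnegative orthant:
    # this is the nonnegative weight constraint.
    W1 = np.maximum(W1 - lr * dW1, 0.0)
    W2 = np.maximum(W2 - lr * dW2, 0.0)
    b1 -= lr * db1
    b2 -= lr * db2
```

Because every weight stays nonnegative, each hidden unit can only accumulate evidence from its inputs, never cancel it, which is what makes the learned parts-based patterns easier to inspect (the same intuition behind nonnegative matrix factorization mentioned in the abstract).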


