
Re-training Deep Neural Networks to Facilitate Boolean Concept Extraction



Abstract

Deep neural networks are accurate predictors, but their decisions are difficult to interpret, which limits their applicability in various fields. Symbolic representations in the form of rule sets are one way to illustrate their behavior as a whole, as well as the hidden concepts they model in the intermediate layers. The main contribution of the paper is to demonstrate how to facilitate rule extraction from a deep neural network by retraining it so as to encourage sparseness in the weight matrices and to make the hidden units either maximally or minimally active. Instead of using datasets that combine the attributes in an unclear manner, we show the effectiveness of the methods on the task of reconstructing predefined Boolean concepts, so that it can later be assessed to what degree the patterns were captured in the rule sets. The evaluation shows that reducing the connectivity of the network in this way significantly assists later rule extraction, and that when the neurons are either minimally or maximally active it suffices to consider one threshold per hidden unit.
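
The retraining idea described in the abstract lends itself to a short illustration. Below is a minimal PyTorch sketch, assuming a two-hidden-layer sigmoid network over binary attributes and binary targets; the penalty coefficients (l1_coef, polar_coef), the a*(1-a) polarization term, and all names are illustrative assumptions, not the paper's exact formulation.

```python
# A minimal sketch (not the authors' code): an L1 penalty on the weight matrices
# encourages sparse connectivity, and a polarization penalty pushes the sigmoid
# hidden activations toward 0 or 1, so that a single threshold per hidden unit
# suffices for later rule extraction.
import torch
import torch.nn as nn


class BooleanMLP(nn.Module):
    """Small feed-forward network over binary input attributes (illustrative)."""

    def __init__(self, n_in: int, n_hidden: int, n_out: int):
        super().__init__()
        self.h1 = nn.Linear(n_in, n_hidden)
        self.h2 = nn.Linear(n_hidden, n_hidden)
        self.out = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        a1 = torch.sigmoid(self.h1(x))
        a2 = torch.sigmoid(self.h2(a1))
        return self.out(a2), (a1, a2)


def retrain_step(model, optimizer, x, y, l1_coef=1e-4, polar_coef=1e-3):
    """One retraining step combining the task loss with both penalties."""
    optimizer.zero_grad()
    logits, hidden = model(x)
    task_loss = nn.functional.binary_cross_entropy_with_logits(logits, y)

    # Sparseness: L1 norm of the weight matrices drives many connections to ~0.
    l1 = sum(p.abs().sum() for name, p in model.named_parameters() if "weight" in name)

    # Polarization: a*(1-a) peaks at a=0.5 and vanishes at 0 and 1, so minimizing
    # it makes each hidden unit either minimally or maximally active.
    polar = sum((a * (1.0 - a)).mean() for a in hidden)

    loss = task_loss + l1_coef * l1 + polar_coef * polar
    loss.backward()
    optimizer.step()
    return loss.item()
```

With such retraining, many weights can be pruned and each hidden unit can be binarized with one threshold, which is what makes the subsequent extraction of Boolean rules tractable.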
