Journal: IEEE Transactions on Neural Networks and Learning Systems

Topology of Learning in Feedforward Neural Networks



Abstract

Understanding how neural networks learn remains one of the central challenges in machine learning research. Starting from random values at the beginning of training, the weights of a neural network evolve in such a way as to be able to perform a variety of tasks, such as classifying images. Here, we study the emergence of structure in the weights by applying methods from topological data analysis. We train simple feedforward neural networks on the MNIST data set and monitor the evolution of the weights. When initialized to zero, the weights follow trajectories that branch off recurrently, thus generating trees that describe the growth of the effective capacity of each layer. When initialized to tiny random values, the weights evolve smoothly along 2-D surfaces. We show that natural coordinates on these learning surfaces correspond to important factors of variation.
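To make the setup concrete, the sketch below (not the authors' code) trains a tiny one-hidden-layer network with plain gradient descent and records the trajectory of each hidden neuron's incoming weight vector over training, which is the kind of point cloud the paper's topological analysis operates on. A synthetic two-class problem stands in for MNIST so the example is self-contained; the network size, learning rate, and step count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for MNIST: 2-D inputs, binary labels (XOR-like).
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)

n_hidden = 8
# "Tiny random values" initialization, the abstract's second regime.
W1 = 0.01 * rng.normal(size=(2, n_hidden))
b1 = np.zeros(n_hidden)
w2 = 0.01 * rng.normal(size=n_hidden)
b2 = 0.0

lr = 0.5
snapshots = []  # one copy of W1 per training step

for step in range(300):
    # Forward pass: tanh hidden layer, sigmoid output.
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))
    # Backward pass for the binary cross-entropy loss.
    dlogit = (p - y) / len(X)
    dw2 = h.T @ dlogit
    db2 = dlogit.sum()
    dh = np.outer(dlogit, w2) * (1 - h**2)
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)
    w2 -= lr * dw2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
    snapshots.append(W1.copy())

# trajectories[i] is the path of hidden unit i through weight space:
# shape (n_hidden, steps, input_dim). Topological-data-analysis tools
# would then be applied to point clouds like these.
trajectories = np.stack(snapshots, axis=0).transpose(2, 0, 1)
```

With zero initialization instead of tiny random values, symmetric hidden units start on identical trajectories and only separate as training breaks the symmetry, which is the branching (tree-like) regime the abstract describes.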
