IEEE Transactions on Neural Networks and Learning Systems

Deep Networks are Effective Encoders of Periodicity


Abstract

We present a comparative theoretical analysis of representation in artificial neural networks with two extreme architectures, a shallow wide network and a deep narrow network, devised to maximally decouple the representational power due to layer width from that due to network depth. We show that, given a specific activation function, models with comparable VC-dimension are required to guarantee zero-error modeling of real functions over a binary input. However, functions that exhibit repeating patterns can be encoded much more efficiently in the deep representation, resulting in a significant reduction in complexity. This paper provides some initial theoretical evidence of when and how depth can be extremely effective.
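The efficiency claim for repeating patterns can be illustrated with a textbook-style depth-separation construction (this is a hedged sketch in the spirit of the abstract, not code from the paper): composing a simple "tent" map, which a two-unit ReLU block can implement exactly, yields a function whose number of oscillations grows exponentially with depth, so a deep narrow network spends O(depth) units where a shallow network would need a number of units proportional to the oscillation count.

```python
import numpy as np

def tent(x):
    # Tent map on [0, 1]; exactly realizable as relu(2x) - relu(4x - 2),
    # i.e. a single two-unit ReLU layer.
    return np.minimum(2 * x, 2 - 2 * x)

def deep_tent(x, depth):
    # Compose the tent map `depth` times: O(depth) units, but the
    # result has 2**(depth - 1) "teeth" on [0, 1].
    for _ in range(depth):
        x = tent(x)
    return x

def count_peaks(y):
    # Count strict local maxima of a sampled signal as a proxy
    # for the number of oscillations.
    return int(np.sum((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:])))

# Dyadic grid so the peaks (at odd multiples of 2**-depth) land
# exactly on sample points.
xs = np.linspace(0.0, 1.0, 2**12 + 1)
for depth in (1, 2, 4, 6):
    print(depth, count_peaks(deep_tent(xs, depth)))
```

A shallow ReLU network needs at least one linear piece per tooth, so matching the depth-6 composition already requires on the order of 32 hidden units, and the gap doubles with each added layer — the exponential advantage the abstract attributes to deep encodings of periodic structure.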
