IEEE Transactions on Information Theory

Global Guarantees for Enforcing Deep Generative Priors by Empirical Risk



Abstract

We examine the theoretical properties of enforcing priors provided by deep generative neural networks via empirical risk minimization. In particular, we consider two models: one in which the task is to invert a generative neural network given access to its last layer, and another in which the task is to invert a generative neural network given only compressive linear observations of its last layer. We establish that in both cases, under suitable regimes of network layer sizes and a randomness assumption on the network weights, the non-convex objective function given by empirical risk minimization has no spurious stationary points. That is, with high probability, at any point away from small neighborhoods around two scalar multiples of the desired solution, there is a descent direction. Hence, there are no local minima, saddle points, or other stationary points outside these neighborhoods. These results constitute the first theoretical guarantees establishing the favorable global geometry of these non-convex optimization problems, and they bridge the gap between the empirical success of enforcing deep generative priors and a rigorous understanding of non-linear inverse problems.
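A minimal sketch of the first model's inversion task: recover a latent code from the last layer y = G(x_true) of a two-layer random ReLU generator by descending the empirical risk. The layer sizes, Gaussian weight scaling, and backtracking step rule below are illustrative assumptions, not the paper's exact setup or proof apparatus.

```python
import numpy as np

# Expansive Gaussian layers mirror the randomness assumption in the abstract.
rng = np.random.default_rng(0)
k, n1, n2 = 5, 50, 200                      # latent, hidden, output dims
W1 = rng.normal(size=(n1, k)) / np.sqrt(k)
W2 = rng.normal(size=(n2, n1)) / np.sqrt(n1)
relu = lambda z: np.maximum(z, 0.0)

def G(x):
    # Two-layer ReLU generator.
    return relu(W2 @ relu(W1 @ x))

x_true = rng.normal(size=k)
y = G(x_true)                               # observed last layer

def risk(x):
    # Empirical risk: squared error between G(x) and the observation.
    r = G(x) - y
    return 0.5 * r @ r

def grad(x):
    # Subgradient of the risk, back-propagated through both ReLUs.
    h1 = W1 @ x
    h2 = W2 @ relu(h1)
    g = W2.T @ ((relu(h2) - y) * (h2 > 0))
    return W1.T @ (g * (h1 > 0))

x = rng.normal(size=k)                      # random initialization
r0 = risk(x)
for _ in range(500):
    g, t = grad(x), 1.0
    # Backtracking line search keeps the risk non-increasing.
    while t > 1e-12 and risk(x - t * g) >= risk(x):
        t *= 0.5
    x = x - t * g
```

Empirically, descent from a random start drives the risk down and tends to land near one of the two scalar multiples of x_true that the analysis identifies as the only benign neighborhoods.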
