JMLR: Workshop and Conference Proceedings

Interaction Matters: A Note on Non-asymptotic Local Convergence of Generative Adversarial Networks



Abstract

Motivated by the pursuit of a systematic computational and algorithmic understanding of Generative Adversarial Networks (GANs), we present a simple yet unified non-asymptotic local convergence theory for smooth two-player games, which subsumes several discrete-time gradient-based saddle point dynamics. The analysis reveals the surprising nature of the off-diagonal interaction term as both a blessing and a curse. On the one hand, this interaction term explains the origin of the slow-down effect in the convergence of Simultaneous Gradient Ascent (SGA) to stable Nash equilibria. On the other hand, for the unstable equilibria, exponential convergence can be proved, thanks to the interaction term, for four modified dynamics proposed to stabilize GAN training: Optimistic Mirror Descent (OMD), Consensus Optimization (CO), Implicit Updates (IU), and Predictive Method (PM). The analysis uncovers the intimate connections among these stabilizing techniques and provides a detailed characterization of the choice of learning rate. As a by-product, we present a new analysis of the OMD dynamics proposed in Daskalakis, Ilyas, Syrgkanis, and Zeng [2017], with improved rates.
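To make the unstable-equilibrium contrast concrete, here is a minimal numerical sketch (not from the paper; the bilinear game f(x, y) = xy, the learning rate eta, and the step count are illustrative assumptions) showing how plain SGA fails near such an equilibrium while the OMD correction of Daskalakis et al. restores convergence:

```python
# A minimal sketch (not from the paper's code): Simultaneous Gradient
# Ascent (SGA) vs. Optimistic Mirror Descent (OMD) on the bilinear game
# f(x, y) = x * y, whose unique equilibrium is (0, 0).  The learning
# rate `eta` and the step count are illustrative assumptions.
from math import hypot

eta, steps = 0.1, 200

def grad(x, y):
    # For f(x, y) = x * y: the x-player descends along df/dx = y,
    # while the y-player ascends along df/dy = x.
    return y, x

# SGA: both players take a plain simultaneous gradient step.
x, y = 1.0, 1.0
for _ in range(steps):
    gx, gy = grad(x, y)
    x, y = x - eta * gx, y + eta * gy
print(f"SGA distance to equilibrium: {hypot(x, y):.3f}")  # grows: iterates spiral outward

# OMD: step with twice the current gradient minus the previous one.
x, y = 1.0, 1.0
gx_prev, gy_prev = grad(x, y)
for _ in range(steps):
    gx, gy = grad(x, y)
    x, gx_prev = x - 2 * eta * gx + eta * gx_prev, gx
    y, gy_prev = y + 2 * eta * gy - eta * gy_prev, gy
print(f"OMD distance to equilibrium: {hypot(x, y):.3f}")  # shrinks toward 0
```

On this game, SGA's iterates spiral outward (its linear update map has spectral radius sqrt(1 + eta^2) > 1), whereas the extra term from the previous gradient in OMD pulls the iterates back toward (0, 0), illustrating the abstract's point that the off-diagonal interaction term is what the stabilized dynamics exploit.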


