International Conference on Machine Learning

NetGAN without GAN: From Random Walks to Low-Rank Approximations



Abstract

A graph generative model takes a graph as input and is supposed to generate new graphs that "look like" the input graph. While most classical models focus on a few hand-selected graph statistics and are too simplistic to reproduce real-world graphs, NetGAN recently emerged as an attractive alternative: by training a GAN to learn the random walk distribution of the input graph, the algorithm is able to reproduce a large number of important network patterns simultaneously, without explicitly specifying any of them. In this paper, we investigate the implicit bias of NetGAN. We find that the root of its generalization properties does not lie in the GAN architecture, but in an inconspicuous low-rank approximation of the logits of the random walk transition matrix. Step by step we can strip NetGAN of all unnecessary parts, including the GAN, and obtain a highly simplified reformulation that achieves comparable generalization results, but is orders of magnitude faster and easier to adapt. Since this reformulation is much simpler on the conceptual side, we can reveal the implicit inductive bias of the algorithm - an important step towards increasing the interpretability, transparency and acceptance of machine learning systems.
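The simplified pipeline the abstract alludes to can be illustrated with a minimal sketch: form the random-walk transition matrix of the input graph, take its element-wise logits, replace them with a low-rank approximation, and map the result back to a row-stochastic matrix. The sketch below assumes truncated SVD as the low-rank step and the element-wise log of transition probabilities as the "logits"; the function name `low_rank_transition` and its parameters are illustrative and not taken from the paper.

```python
import numpy as np

def low_rank_transition(adjacency: np.ndarray, rank: int = 16,
                        eps: float = 1e-12) -> np.ndarray:
    """Row-stochastic matrix built from a rank-`rank` approximation of the
    log transition probabilities ("logits") of the random walk.
    Hypothetical sketch of the idea described in the abstract."""
    # Random-walk transition matrix: normalize each row of the adjacency matrix.
    degrees = adjacency.sum(axis=1, keepdims=True)
    transition = adjacency / np.maximum(degrees, eps)

    # Element-wise logits (log-probabilities), with eps to avoid log(0).
    logits = np.log(transition + eps)

    # Truncated SVD keeps only the top-`rank` singular directions.
    u, s, vt = np.linalg.svd(logits, full_matrices=False)
    logits_lr = (u[:, :rank] * s[:rank]) @ vt[:rank, :]

    # Row-wise softmax maps the low-rank logits back to probabilities.
    logits_lr -= logits_lr.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits_lr)
    return probs / probs.sum(axis=1, keepdims=True)

# Example: a small ring graph with 6 nodes.
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
P = low_rank_transition(A, rank=3)
print(P.round(3))
```

New graphs could then be obtained by sampling edges or random walks from the resulting row-stochastic matrix, mirroring in spirit how NetGAN assembles a graph from generated walks.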
