International Conference on Artificial Neural Networks

Embedding Complexity of Learned Representations in Neural Networks



Abstract

In classification tasks, the set of training examples for each class can be viewed as a limited sampling from an ideal infinite manifold of all sensible representatives of that class. A layered artificial neural network trained for such a task can then be interpreted as a stack of continuous transformations that gradually mold these complex manifolds from the original input space into simpler, mutually dissimilar internal representations on successive hidden layers - the so-called manifold disentanglement hypothesis. This, in turn, enables the final classification to be made in a linear fashion. We propose to assess the extent of this separation effect by introducing a class of measures based on the embedding complexity of the internal representations, with the KL divergence of t-distributed stochastic neighbour embedding (t-SNE) emerging as the most suitable measure. Finally, we support the disentanglement hypothesis by measuring embedding complexity, classification accuracy, and the relation between them on a sample of image classification datasets.
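The measure described in the abstract can be sketched as follows: embed a layer's activations with t-SNE and read off the final KL divergence of the embedding as its "embedding complexity". This is a minimal illustration assuming scikit-learn's `TSNE` (whose fitted `kl_divergence_` attribute exposes exactly this quantity); the `embedding_complexity` helper and the synthetic two-blob data are illustrative stand-ins, not the paper's code, which would use real hidden-layer activations.

```python
import numpy as np
from sklearn.manifold import TSNE

def embedding_complexity(activations, perplexity=10, seed=0):
    """Return the final KL divergence of a 2-D t-SNE embedding of `activations`.

    Under the manifold disentanglement hypothesis, a lower value suggests the
    class manifolds at this layer are easier to embed, i.e. more disentangled.
    """
    tsne = TSNE(n_components=2, perplexity=perplexity,
                random_state=seed, init="random")
    tsne.fit_transform(activations)
    return tsne.kl_divergence_

rng = np.random.default_rng(0)
# Two well-separated Gaussian blobs in 16 dimensions: a crude stand-in for
# two class manifolds as they might appear on a late hidden layer.
blobs = np.vstack([rng.normal(0.0, 1.0, size=(40, 16)),
                   rng.normal(8.0, 1.0, size=(40, 16))])
print(embedding_complexity(blobs))
```

In the paper's setting one would compute this per hidden layer and compare values across depth, expecting the complexity to decrease toward the output.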
