Neural Processing Letters

Information-Theoretic Competitive Learning with Inverse Euclidean Distance Output Units


Abstract

In this paper, we propose a new information-theoretic competitive learning method. We first construct the learning method for single-layered networks, and then extend it to supervised multi-layered networks. Competitive unit outputs are computed as the inverse of the Euclidean distance between input patterns and connection weights: the smaller the distance, the stronger the competitive unit output. In realizing competition, neither the winner-take-all algorithm nor lateral inhibition is used. Instead, the new method is based upon maximizing the mutual information between input patterns and competitive units. In maximizing mutual information, the entropy of the competitive units is increased as much as possible. This means that all competitive units must be used equally in our framework, so no underutilized or dead neurons are generated. When using multi-layered networks, we can improve noise-tolerance performance by unifying information maximization and minimization. We applied our method with single-layered networks to a simple artificial data problem and an actual road classification problem. In both cases, experimental results confirmed that the new method can produce final solutions almost independently of initial conditions, and that classification performance is significantly improved. We then used multi-layered networks and applied them to a character recognition problem and a political data analysis. In these problems, we could show that noise-tolerance performance was improved by decreasing the information content of input patterns to certain points.
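The core mechanism described in the abstract, inverse-Euclidean-distance activations normalized into firing probabilities, with the weights adapted to maximize the mutual information between input patterns and competitive units, can be sketched as follows. This is a minimal illustration under assumed details, not the authors' implementation: the function names, the finite-difference gradient ascent, and all hyperparameters are assumptions made for the example.

```python
# Minimal sketch of information-theoretic competitive learning with
# inverse-Euclidean-distance output units (illustrative only).
import numpy as np

def inverse_distance_outputs(X, W, eps=1e-8):
    """Competitive unit activations: inverse Euclidean distance between
    each input pattern (row of X) and each weight vector (row of W)."""
    dists = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)
    return 1.0 / (dists + eps)          # smaller distance -> stronger output

def firing_probabilities(X, W):
    """Normalize activations so p[s, j] approximates p(unit j | pattern s)."""
    act = inverse_distance_outputs(X, W)
    return act / act.sum(axis=1, keepdims=True)

def mutual_information(p):
    """I = H(units) - H(units | patterns), patterns assumed equiprobable."""
    p_j = p.mean(axis=0)                               # marginal p(j)
    h_units = -np.sum(p_j * np.log(p_j + 1e-12))       # entropy of competitive units
    h_cond = -np.mean(np.sum(p * np.log(p + 1e-12), axis=1))
    return h_units - h_cond

# Toy usage: maximize mutual information by numerical gradient ascent on W.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))        # 50 two-dimensional input patterns
W = rng.normal(size=(4, 2))         # 4 competitive units
lr, delta = 0.05, 1e-4
for _ in range(200):
    grad = np.zeros_like(W)
    for idx in np.ndindex(W.shape):  # finite-difference gradient (for clarity, not speed)
        Wp = W.copy(); Wp[idx] += delta
        Wm = W.copy(); Wm[idx] -= delta
        grad[idx] = (mutual_information(firing_probabilities(X, Wp))
                     - mutual_information(firing_probabilities(X, Wm))) / (2 * delta)
    W += lr * grad
print("final mutual information:", mutual_information(firing_probabilities(X, W)))
```

Maximizing the entropy term H(units) pushes the units toward equal average use (avoiding dead neurons), while minimizing the conditional entropy makes each input pattern fire one unit strongly, which is how the method realizes competition without winner-take-all or lateral inhibition.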

