PLoS Computational Biology

Neural Modularity Helps Organisms Evolve to Learn New Skills without Forgetting Old Skills


Abstract

A long-standing goal in artificial intelligence is creating agents that can learn a variety of different skills for different problems. In the artificial intelligence subfield of neural networks, a barrier to that goal is that when agents learn a new skill they typically do so by losing previously acquired skills, a problem called catastrophic forgetting. That occurs because, to learn the new task, neural learning algorithms change connections that encode previously acquired skills. How networks are organized critically affects their learning dynamics. In this paper, we test whether catastrophic forgetting can be reduced by evolving modular neural networks. Modularity intuitively should reduce learning interference between tasks by separating functionality into physically distinct modules in which learning can be selectively turned on or off. Modularity can further improve learning by having a reinforcement learning module separate from sensory processing modules, allowing learning to happen only in response to a positive or negative reward. In this paper, learning takes place via neuromodulation, which allows agents to selectively change the rate of learning for each neural connection based on environmental stimuli (e.g. to alter learning in specific locations based on the task at hand). To produce modularity, we evolve neural networks with a cost for neural connections. We show that this connection cost technique causes modularity, confirming a previous result, and that such sparsely connected, modular networks have higher overall performance because they learn new skills faster while retaining old skills more and because they have a separate reinforcement learning module. 
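The two mechanisms the abstract describes — per-connection neuromodulated learning and a connection cost applied during evolution — can be illustrated with a minimal sketch. This is an assumed, simplified formulation (a reward-gated Hebbian update and a sparsity-penalized fitness), not the paper's actual equations; all function and variable names are illustrative.

```python
import numpy as np

def neuromodulated_hebbian_step(weights, pre, post, modulation, lr=0.1):
    """Update each connection by a Hebbian term gated by a per-connection
    modulation signal (e.g. driven by reward). A modulation of 0 freezes a
    connection, so learning can be confined to one module while the
    connections encoding old skills stay untouched."""
    hebb = np.outer(post, pre)  # Hebbian correlation of post- and pre-synaptic activity
    return weights + lr * modulation * hebb

def fitness_with_connection_cost(task_performance, weights, cost=0.01):
    """Evolutionary fitness penalized by the number of nonzero connections,
    the pressure that drives networks toward sparse, modular wiring."""
    return task_performance - cost * np.count_nonzero(weights)

# Demonstration: freeze all connections except those into the first unit.
rng = np.random.default_rng(0)
w = rng.normal(size=(3, 4))
pre = rng.normal(size=4)
post = rng.normal(size=3)

mod = np.zeros((3, 4))   # modulation 0: no learning anywhere...
mod[0, :] = 1.0          # ...except the first unit's incoming connections

w_new = neuromodulated_hebbian_step(w, pre, post, mod)

assert np.allclose(w_new[1:], w[1:])    # frozen connections retain old skills
assert not np.allclose(w_new[0], w[0])  # modulated connections learn
```

The gating matrix `mod` is what selective neuromodulation provides: learning happens only where (and when) the modulatory signal is nonzero, which is why a separate reinforcement learning module can restrict plasticity to reward-relevant moments.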
Our results suggest (1) that encouraging modularity in neural networks may help us overcome the long-standing barrier of networks that cannot learn new skills without forgetting old ones, and (2) that one benefit of the modularity ubiquitous in the brains of natural animals might be to alleviate the problem of catastrophic forgetting.
