Connection Science

Self-refreshing memory in artificial neural networks: learning temporal sequences without catastrophic forgetting


Abstract

While humans forget gradually, highly distributed connectionist networks forget catastrophically: newly learned information often completely erases previously learned information. This is not only implausible cognitively but disastrous practically. Yet it is difficult in connectionist cognitive modelling to avoid highly distributed neural networks, if only because of their ability to generalize. A realistic and effective system that solves the problem of catastrophic interference in the sequential learning of 'static' (i.e. non-temporally ordered) patterns has been proposed recently (Robins 1995, Connection Science, 7: 123-146; Robins 1996, Connection Science, 8: 259-275; Ans and Rousset 1997, CR Academie des Sciences Paris, Life Sciences, 320: 989-997; French 1997, Connection Science, 9: 353-379; French 1999, Trends in Cognitive Sciences, 3: 128-135; Ans and Rousset 2000, Connection Science, 12: 1-19). The basic principle is to interleave the learning of new external patterns with internally generated 'pseudopatterns' (generated from random activation) that reflect the previously learned information. To be credible, however, this self-refreshing mechanism for static learning must also encompass our human ability to learn many temporal sequences of patterns serially without catastrophic forgetting; in the real world, temporal sequence learning is arguably more important than static pattern learning. In this paper, we develop a dual-network architecture in which self-generated pseudopatterns reflect (non-temporally) all the sequences of temporally ordered items previously learned. Using these pseudopatterns, several self-refreshing mechanisms that eliminate catastrophic forgetting in sequence learning are described, and their efficiency is demonstrated through simulations. Finally, an experiment is presented that evidences a close similarity between human and simulated behaviour.
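The pseudopattern principle described above can be sketched in a few lines of NumPy. This is a minimal single-network illustration of pseudorehearsal, not the paper's dual-network architecture; every name in it (the `MLP` class, the `pseudopatterns` helper, the random task data) is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MLP:
    """Minimal one-hidden-layer network trained by full-batch backprop."""
    def __init__(self, n_in, n_hid, n_out):
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hid))
        self.W2 = rng.normal(0.0, 0.5, (n_hid, n_out))

    def forward(self, X):
        H = sigmoid(X @ self.W1)
        return H, sigmoid(H @ self.W2)

    def mse(self, X, Y):
        return float(np.mean((self.forward(X)[1] - Y) ** 2))

    def train(self, X, Y, epochs=3000, lr=0.1):
        for _ in range(epochs):
            H, Yhat = self.forward(X)
            d2 = (Yhat - Y) * Yhat * (1.0 - Yhat)   # output-layer delta
            d1 = (d2 @ self.W2.T) * H * (1.0 - H)   # hidden-layer delta
            self.W2 -= lr * H.T @ d2
            self.W1 -= lr * X.T @ d1

def pseudopatterns(net, n, n_in):
    """Push random binary inputs through the trained net; the resulting
    input/output pairs approximate what the net currently knows,
    without access to the original training data."""
    Xp = rng.integers(0, 2, (n, n_in)).astype(float)
    return Xp, net.forward(Xp)[1]

# Sequential learning of two static pattern sets, with self-refreshing.
n_in = n_out = 8
A_x = rng.integers(0, 2, (5, n_in)).astype(float)
A_y = rng.integers(0, 2, (5, n_out)).astype(float)
B_x = rng.integers(0, 2, (5, n_in)).astype(float)
B_y = rng.integers(0, 2, (5, n_out)).astype(float)

net = MLP(n_in, 16, n_out)
err_before = net.mse(A_x, A_y)
net.train(A_x, A_y)                      # learn task A first
err_after_A = net.mse(A_x, A_y)

Xp, Yp = pseudopatterns(net, 32, n_in)   # snapshot of A as pseudopatterns
net.train(np.vstack([B_x, Xp]),          # learn B interleaved with the
          np.vstack([B_y, Yp]))          # pseudopatterns standing in for A
err_A_retained = net.mse(A_x, A_y)
```

The key design point is that the pseudopatterns transport previously acquired knowledge into the new training set without ever storing the original exemplars, which is what makes the mechanism a plausible model of memory consolidation rather than a replay buffer.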
