
Overcoming catastrophic forgetting in neural networks


Abstract

The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Until now neural networks have not been capable of this and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks that they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate our approach is scalable and effective by solving a set of classification tasks based on a hand-written digit dataset and by learning several Atari 2600 games sequentially.
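The mechanism the abstract describes is elastic weight consolidation (EWC): after training on a task A, learning on a new task B is penalized for moving weights that the diagonal of the Fisher information marks as important for A, via the loss L(θ) = L_B(θ) + Σ_i (λ/2) F_i (θ_i − θ*_{A,i})². Below is a minimal PyTorch sketch of that penalty under stated assumptions; it is not the authors' released implementation, and the names `fisher_diagonal`, `ewc_penalty`, and `lam` are illustrative.

```python
import torch

def fisher_diagonal(model, data_loader, loss_fn):
    """Approximate the diagonal of the Fisher information for the task just
    learned by averaging squared gradients over its data (empirical Fisher).
    Assumes every parameter receives a gradient on each batch."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    for x, y in data_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            fisher[n] += p.grad.detach() ** 2
    return {n: f / len(data_loader) for n, f in fisher.items()}

def ewc_penalty(model, fisher, star_params, lam):
    """Quadratic penalty (lam/2) * sum_i F_i * (theta_i - theta*_i)^2 that
    selectively slows learning on weights important for the old task."""
    penalty = 0.0
    for n, p in model.named_parameters():
        penalty = penalty + (fisher[n] * (p - star_params[n]) ** 2).sum()
    return 0.5 * lam * penalty

# After task A: record fisher_A = fisher_diagonal(...) and
# theta_star_A = {n: p.detach().clone() for n, p in model.named_parameters()}.
# While training task B, add the penalty to task B's loss:
# loss = loss_fn(model(x), y) + ewc_penalty(model, fisher_A, theta_star_A, lam)
```

The penalty anchors each weight to its post-task-A value with a stiffness proportional to its estimated importance, which is why learning is slowed selectively rather than frozen outright.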
