Philosophical Transactions of the Royal Society of London, Series B. Biological Sciences

Integration of new information in memory: new insights from a complementary learning systems perspective


Abstract

According to complementary learning systems theory, integrating new memories into the neocortex of the brain without interfering with what is already known depends on a gradual learning process, interleaving new items with previously learned items. However, empirical studies show that information consistent with prior knowledge can sometimes be integrated very quickly. We use artificial neural networks with properties like those we attribute to the neocortex to develop an understanding of the role of consistency with prior knowledge in putatively neocortex-like learning systems, providing new insights into when integration will be fast or slow and how integration might be made more efficient when the items to be learned are hierarchically structured. The work relies on deep linear networks that capture the qualitative aspects of the learning dynamics of the more complex nonlinear networks used in previous work. The time course of learning in these networks can be linked to the hierarchical structure in the training data, captured mathematically as a set of dimensions that correspond to the branches in the hierarchy. In this context, a new item to be learned can be characterized as having aspects that project onto previously known dimensions, and others that require adding a new branch/dimension. The projection onto the known dimensions can be learned rapidly without interleaving, but learning the new dimension requires gradual interleaved learning. When a new item only overlaps with items within one branch of a hierarchy, interleaving can focus on the previously known items within this branch, resulting in faster integration with less interleaving overall. The discussion considers how the brain might exploit these facts to make learning more efficient and highlights predictions about what aspects of new information might be hard or easy to learn.
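The learning dynamics described above can be illustrated with a toy simulation. This is not the authors' code; it is a minimal sketch in the spirit of deep linear network analyses, using an invented four-item, two-branch hierarchical dataset and illustrative hyperparameters. The SVD modes of the input-output correlation matrix correspond to levels of the hierarchy, and gradient descent learns the strong (shared) modes before the weak (item-specific) ones:

```python
import numpy as np

# Toy hierarchical dataset: 4 items, 7 features. Rows of Y encode the
# hierarchy: one feature shared by all items, one per branch (A: items
# 1-2, B: items 3-4), and one per individual item.
X = np.eye(4)                     # one-hot item inputs
Y = np.array([
    [1, 1, 1, 1],                 # shared by all items
    [1, 1, 0, 0],                 # branch A
    [0, 0, 1, 1],                 # branch B
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
], dtype=float)

# SVD of the input-output correlation matrix (= Y, since X is identity).
# Each mode maps onto a branch of the hierarchy; shared structure
# carries the largest singular value.
U, S, Vt = np.linalg.svd(Y, full_matrices=False)

# Deep (two-layer) linear network, small random init, full-batch
# gradient descent on squared error.
rng = np.random.default_rng(0)
h, lr, eps = 16, 0.05, 0.05
W1 = eps * rng.standard_normal((h, 4))
W2 = eps * rng.standard_normal((7, h))

first_half_step = {}              # step at which each mode is 50% learned
for step in range(4000):
    err = W2 @ W1 @ X - Y
    # Tuple assignment so both gradients use the pre-update weights.
    W2, W1 = W2 - lr * err @ (W1 @ X).T, W1 - lr * W2.T @ err @ X.T
    modes = U.T @ (W2 @ W1) @ Vt.T        # per-mode learned strength
    for i in range(4):
        if i not in first_half_step and abs(modes[i, i]) > 0.5 * S[i]:
            first_half_step[i] = step

mse = np.mean((W2 @ W1 @ X - Y) ** 2)
print("singular values:", np.round(S, 2))
print("steps to half-learn each mode:", first_half_step)
print("final mse:", mse)
```

Mode 0 (the dimension shared across branches) crosses the half-learned threshold well before the item-specific modes, matching the claim that projections onto strong, already-established dimensions are acquired quickly while weak new dimensions require prolonged training.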
