International Conference on Artificial Neural Networks

Marginal Replay vs Conditional Replay for Continual Learning



Abstract

We present a new replay-based method for continual classification learning that we term "conditional replay": it generates samples and labels together by sampling from a distribution conditioned on the class. We compare conditional replay to another replay-based continual learning paradigm (which we term "marginal replay") that generates samples independently of their class and assigns labels in a separate step. The main advantage of conditional replay is that labels for generated samples need not be inferred, which reduces the margin for error in complex continual classification learning tasks. We demonstrate the effectiveness of this approach using novel and standard benchmarks constructed from MNIST and FashionMNIST data, and compare to the regularization-based elastic weight consolidation (EWC) method [17, 34].
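The distinction the abstract draws can be sketched in a few lines. The snippet below is a toy illustration, not the paper's implementation: a one-dimensional Gaussian per class stands in for a class-conditional generative model, and a threshold function stands in for the previously trained classifier. Conditional replay draws (sample, label) pairs jointly; marginal replay draws samples first and must infer labels afterward, which is where label noise can enter. All function and variable names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical class-conditional generator: one Gaussian per class,
# standing in for a conditional GAN/VAE trained on earlier tasks.
CLASS_MEANS = {0: -2.0, 1: 2.0}


def conditional_replay(n_per_class):
    """Conditional replay: sample (x, y) jointly, conditioned on the class.

    Labels are exact by construction -- no inference step is needed.
    """
    xs, ys = [], []
    for label, mu in CLASS_MEANS.items():
        xs.append(rng.normal(mu, 1.0, size=n_per_class))
        ys.append(np.full(n_per_class, label))
    return np.concatenate(xs), np.concatenate(ys)


def marginal_replay(n, classifier):
    """Marginal replay: sample x from the class-marginal distribution,
    then assign labels in a separate step using a classifier.

    Any mistakes the classifier makes become label noise in the replay set.
    """
    components = rng.integers(0, 2, size=n)          # true class is unknown to us
    x = rng.normal([CLASS_MEANS[c] for c in components], 1.0)
    y_hat = classifier(x)                            # labels must be inferred
    return x, y_hat


def simple_classifier(x):
    # Threshold rule standing in for the network trained on previous tasks.
    return (x > 0).astype(int)


x_cond, y_cond = conditional_replay(100)
x_marg, y_marg = marginal_replay(200, simple_classifier)
```

In the conditional branch the labels are correct by construction; in the marginal branch the quality of the replayed labels is bounded by the accuracy of `simple_classifier`, which mirrors the error-reduction argument made in the abstract.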


