IEEE International Conference on Big Data

Multi-View, Generative, Transfer Learning for Distributed Time Series Classification



Abstract

In this paper, we propose an effective multi-view, generative, transfer learning framework for multivariate time-series data. While generative models have been demonstrated effective for several machine learning tasks, their application to time-series classification problems remains underexplored. The need for further exploration is especially pressing when data are large, annotations are imbalanced or scarce, or data are distributed and fragmented. Recent advances in computer vision attempt to use synthesized samples with system-generated annotations to overcome the lack or imbalance of annotated data. However, in multi-view problem settings, view mismatches between the synthetic data and real data pose additional challenges to harnessing new annotated data collections. The proposed method makes important contributions toward facilitating knowledge sharing while simultaneously ensuring an effective solution for domain-specific, fine-level categorization. We propose a principled way to perform view adaptation in a cross-view learning environment, wherein pairwise view similarity is identified by a smaller subset of source samples that closely resemble the target data patterns. This approach integrates generative models within a deep classification framework to minimize the gap between source and target data. More precisely, we design category-specific conditional generative models that update the source generator to transform source features so that they appear as target features, while simultaneously tuning the associated discriminative model to distinguish these features. During each learning iteration, the source generator is conditioned on a source training set represented as target-like features. This appearance transformation is performed via a target generator learned specifically for per-category, target-specific customization.
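The adaptation step described above, transforming source features so they appear as target features per category, can be illustrated with a much-simplified stand-in: rather than a learned conditional generator, the sketch below aligns per-category feature statistics (mean and scale) of the source to the target. All function and variable names here are illustrative, not from the paper; the actual method uses a learned neural generator.

```python
import math
from collections import defaultdict

def _stats(rows):
    # Per-dimension mean and standard deviation of a list of feature vectors.
    dims = len(rows[0])
    means = [sum(r[d] for r in rows) / len(rows) for d in range(dims)]
    stds = []
    for d in range(dims):
        var = sum((r[d] - means[d]) ** 2 for r in rows) / len(rows)
        stds.append(math.sqrt(var) or 1.0)  # guard against zero spread
    return means, stds

def adapt_source_to_target(source, target):
    """Map source vectors into target-like feature space, per category.

    source/target: dict mapping category -> list of feature vectors.
    Toy stand-in for the category-conditional generator: z-score each
    source feature, then rescale with the target category's statistics.
    """
    adapted = defaultdict(list)
    for cat, rows in source.items():
        s_mean, s_std = _stats(rows)
        t_mean, t_std = _stats(target[cat])
        for r in rows:
            adapted[cat].append([
                (x - sm) / ss * ts + tm
                for x, sm, ss, tm, ts in zip(r, s_mean, s_std, t_mean, t_std)
            ])
    return dict(adapted)
```

With a source category centered far from the target's, the adapted features land around the target statistics, which is the property the learned generator optimizes adversarially in the full framework.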
Afterward, a smaller source training set, whose samples closely resemble target patterns in terms of the corresponding generative and discriminative losses, is used to fine-tune the source classification model parameters. Experiments show that, compared to existing approaches, our proposed multi-view, generative, transfer learning framework improves time-series classification performance by around 4% on the UCI multi-view activity recognition dataset, while also showing a robust, generalized representation capacity in classifying several large-scale, multi-view light curve collections.
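The fine-tuning step reduces to ranking source samples by how target-like they are and keeping only the closest fraction. In the sketch below, squared distance to the target centroid serves as a proxy for the paper's combined generative and discriminative losses; the function name and the `keep_frac` parameter are illustrative assumptions, not the paper's API.

```python
def select_target_like_sources(source_rows, target_rows, keep_frac=0.25):
    """Return the source samples that most closely resemble the target.

    source_rows/target_rows: lists of equal-length feature vectors.
    Squared distance to the target centroid stands in for the combined
    generative and discriminative loss used in the actual framework.
    """
    dims = len(target_rows[0])
    centroid = [sum(r[d] for r in target_rows) / len(target_rows)
                for d in range(dims)]

    def proxy_loss(row):
        return sum((x - c) ** 2 for x, c in zip(row, centroid))

    ranked = sorted(source_rows, key=proxy_loss)
    keep = max(1, int(len(ranked) * keep_frac))
    # The returned subset is what fine-tunes the source classifier.
    return ranked[:keep]
```

Keeping only the low-loss subset is what lets the classifier specialize toward the target domain without being dragged back by dissimilar source samples.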
