Multimedia Tools and Applications

Multi-modal egocentric activity recognition using multi-kernel learning


Abstract

Existing methods for egocentric activity recognition are mostly based on extracting motion characteristics from videos. On the other hand, the ubiquity of wearable sensors allows information to be acquired from many different sources. Although this growing sensor diversity calls for adaptive fusion, most studies use pre-determined weights for each source. In addition, only a limited number of studies make use of optical, audio, and wearable sensors together. In this work, we propose a new framework that adaptively weights the visual, audio, and sensor features according to their discriminative abilities. For that purpose, multi-kernel learning (MKL) is used to fuse the multi-modal features, so that feature and kernel selection/weighting and recognition are performed concurrently. Audio-visual information is used in association with data acquired from wearable sensors, since the modalities capture different aspects of activities and help build better models. The proposed framework can be used with different modalities to improve recognition accuracy and can easily be extended with additional sensors. The results show that using multi-modal features with MKL outperforms existing methods.
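The kernel-fusion idea described in the abstract can be illustrated with a minimal sketch. The sketch below assumes pre-extracted per-modality feature matrices (visual, audio, wearable-sensor), RBF base kernels, and a uniform weight initialization standing in for the modality weights an MKL solver would actually learn; the modality dimensions, gamma values, and class count are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of weighted multi-kernel fusion for multi-modal
# activity recognition. Uniform weights are a placeholder for the
# adaptively learned MKL weights; all sizes below are hypothetical.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def combined_kernel(feats_a, feats_b, weights, gammas):
    """Weighted sum of per-modality RBF kernels: K = sum_m w_m * K_m."""
    K = np.zeros((feats_a[0].shape[0], feats_b[0].shape[0]))
    for Xa, Xb, w, g in zip(feats_a, feats_b, weights, gammas):
        K += w * rbf_kernel(Xa, Xb, gamma=g)
    return K

# Hypothetical pre-extracted features: one matrix per modality,
# with rows aligned across modalities (same clips, same order).
rng = np.random.default_rng(0)
n_train, n_test = 200, 50
dims = (128, 64, 32)  # e.g. visual, audio, sensor feature sizes
train_feats = [rng.normal(size=(n_train, d)) for d in dims]
test_feats = [rng.normal(size=(n_test, d)) for d in dims]
y_train = rng.integers(0, 5, size=n_train)  # 5 activity classes

weights = np.ones(len(dims)) / len(dims)   # uniform start; MKL learns these
gammas = [1.0 / X.shape[1] for X in train_feats]

K_train = combined_kernel(train_feats, train_feats, weights, gammas)
K_test = combined_kernel(test_feats, train_feats, weights, gammas)

# An SVM over the fused (precomputed) kernel performs recognition;
# in full MKL, weight learning and SVM training alternate or are joint.
clf = SVC(kernel="precomputed").fit(K_train, y_train)
pred = clf.predict(K_test)
```

Because each modality contributes through its own base kernel, adding a new sensor only requires appending one more feature matrix, weight, and gamma, which mirrors the extensibility claim in the abstract.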
