IEEE Transactions on Neural Networks and Learning Systems

Transductive Zero-Shot Action Recognition via Visually Connected Graph Convolutional Networks



Abstract

With the explosive growth of action categories, zero-shot action recognition aims to extend a well-trained model to novel/unseen classes. To bridge the large knowledge gap between seen and unseen classes, in this brief, we associate unseen actions with seen categories in a visually connected graph, and knowledge is then transferred from the visual feature space to the semantic space via grouped attention graph convolutional networks (GAGCNs). In particular, we extract visual features for all actions and build a visually connected graph that attaches seen actions to visually similar unseen categories. Moreover, the proposed grouped attention mechanism exploits the hierarchical knowledge in the graph, so that the GAGCN can propagate visual-semantic connections from seen actions to unseen ones. We extensively evaluate the proposed method on three data sets: HMDB51, UCF101, and NTU RGB+D. Experimental results show that the GAGCN outperforms state-of-the-art methods.
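The graph-construction and propagation steps the abstract outlines can be illustrated with a minimal NumPy sketch. This is an assumption-laden simplification, not the authors' GAGCN: class-level visual prototypes, the cosine similarity measure, the neighbour count `k`, and the weight matrix are all hypothetical, and the grouped attention mechanism is replaced here by plain GCN propagation (Kipf and Welling style) to show only how unseen classes can inherit information from visually similar seen classes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: class-level visual prototypes (e.g. mean-pooled
# video features) for 5 seen and 3 unseen action classes, 16-dim each.
n_seen, n_unseen, d = 5, 3, 16
protos = rng.normal(size=(n_seen + n_unseen, d))

def cosine_sim(X):
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return Xn @ Xn.T

# Visually connected graph: link every class to its k most visually
# similar classes; self-loops are added during normalization below.
k = 2
sim = cosine_sim(protos)
np.fill_diagonal(sim, -np.inf)          # exclude self when picking neighbours
A = np.zeros_like(sim)
for i in range(len(sim)):
    nbrs = np.argsort(sim[i])[-k:]      # top-k visual neighbours
    A[i, nbrs] = 1.0
A = np.maximum(A, A.T)                  # make the graph undirected

# Standard GCN normalization: A_hat = D^{-1/2} (A + I) D^{-1/2}
A_tilde = A + np.eye(len(A))
d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
A_hat = A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

# One propagation layer: each unseen class's representation now mixes in
# the features of its visually similar seen neighbours.
W = rng.normal(size=(d, d)) * 0.1
H = np.maximum(A_hat @ protos @ W, 0.0)   # ReLU activation
print(H.shape)  # (8, 16)
```

In this sketch the top-k neighbour selection plays the role of "attaching seen actions to visually similar unseen categories"; the real GAGCN additionally weights edges with grouped attention over the graph's hierarchical structure.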
机译:随着行动类别的爆炸性增长,零射击动作识别旨在将训练有素的模型扩展到小说/看不见的课程。为了弥合所见和看不见的课程之间的大知识差距,在此简介中,我们在视觉上将看不见的行为与视觉连接图中的看法相关联,然后通过分组的注意力图卷积从视觉特征空间转移到语义空间的知识。网络(GAGCNS)。特别是,我们提取所有操作的可视特征,并且建立了视觉上连接的图形,以将看到的动作附加到视觉上类似的未操作类别。此外,所提出的分组注意力机制利用图中的分层知识,使得GAGCN使得从看法传播到看不见者的视觉语义连接。我们在三个数据集中广泛评估了所提出的方法:HMDB51,UCF101和NTU RGB + D.实验结果表明,GAGCN优于最先进的方法。


