...
International Journal of Communication Systems

Web video classification with visual and contextual semantics

Abstract

On the social Web, the volume of video content, whether originating from wireless devices or previously received from media servers, has increased enormously in recent years. This astounding growth of Web videos has stimulated researchers to propose new strategies for organizing them into their respective categories. Because of complex ontology and large variations in the content and quality of Web videos, it is difficult to obtain sufficient, precisely labeled training data, which hinders automatic video classification. In this paper, we propose a novel content- and context-based Web video classification framework that draws external support from category discriminative terms (CDTs) and a semantic relatedness measure (SRM). Specifically, a three-step framework is proposed. First, content-based video classification is proposed, in which high-level concept detectors are put to twofold use. Category classifiers induced from VIREO-374 detectors are first trained to classify Web videos; then the high-confidence concept detectors of each video are mapped to the CDTs through an SRM-assisted semantic content fusion function to further boost the category classifiers, which intuitively provides a more robust measure for Web video classification. Second, context-based video classification is proposed, in which contextual information is likewise put to twofold use: cosine similarity and then semantic similarity are measured between the text features of each video and the CDTs through a vector space model (VSM) and an SRM-assisted semantic context fusion function, respectively. Finally, the classification results from content and context are fused to compensate for each other's shortcomings, which enhances video classification performance. Experiments on a large-scale video dataset validate the effectiveness of the proposed solution.
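The VSM cosine-similarity step and the final content/context fusion described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes plain term-frequency vectors and a simple weighted-sum late fusion, and all names (`video_tokens`, `cdt_animals`, `alpha`) are illustrative.

```python
from collections import Counter
from math import sqrt

def cosine_similarity(tokens_a, tokens_b):
    """Cosine similarity between term-frequency vectors of two token lists."""
    va, vb = Counter(tokens_a), Counter(tokens_b)
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    norm_a = sqrt(sum(c * c for c in va.values()))
    norm_b = sqrt(sum(c * c for c in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def fuse_scores(content_score, context_score, alpha=0.5):
    """Weighted late fusion of content- and context-based category scores."""
    return alpha * content_score + (1 - alpha) * context_score

# Hypothetical example: a video's textual metadata vs. one category's CDTs.
video_tokens = "funny cat playing with a yarn ball".split()
cdt_animals = "cat dog pet animal playing".split()
context_score = cosine_similarity(video_tokens, cdt_animals)
final_score = fuse_scores(content_score=0.7, context_score=context_score)
```

In the paper's framework this cosine score would be only the first of the two context measures (semantic similarity via SRM being the second), and the fusion weight would be tuned rather than fixed at 0.5.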
