Asia-Pacific Signal and Information Processing Association Annual Summit and Conference

No-Reference Video Quality Assessment based on Convolutional Neural Network and Human Temporal Behavior

Abstract

A high-performance video quality assessment (VQA) algorithm is essential for delivering high-quality video to viewers. However, because the nonlinear perceptual mapping between a video's distortion level and its subjective quality score is not precisely defined, accurately predicting video quality remains difficult. In this paper, we propose a deep learning scheme named Deep Blind Video Quality Assessment that achieves a more accurate and reliable video quality predictor by considering spatial and temporal cues that have not been considered before. We use a CNN to extract spatial cues from each video and propose new hand-crafted features for temporal cues. Experiments show that the proposed model outperforms other state-of-the-art no-reference (NR) VQA models and that the hand-crafted temporal features are highly effective for VQA.
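
The abstract does not detail the network, but the general recipe it describes — per-frame CNN spatial features fused with hand-crafted temporal features and regressed to a quality score — can be sketched as follows. This is a minimal illustration, not the authors' architecture: the ResNet-18 backbone, the frame-difference statistics used as temporal cues, the average pooling over time, and all layer sizes are assumptions made for the example.

```python
# Minimal NR-VQA sketch: CNN spatial features + hand-crafted temporal features.
# All design choices below (backbone, temporal statistics, layer sizes) are
# illustrative assumptions, not the paper's exact model.
import torch
import torch.nn as nn
import torchvision.models as models


def temporal_features(frames: torch.Tensor) -> torch.Tensor:
    """Hand-crafted temporal cues from a clip of shape (T, C, H, W).

    Frame-difference statistics (mean and std of absolute luminance change)
    stand in for the paper's temporal features here.
    """
    gray = frames.mean(dim=1)                      # (T, H, W) rough luminance
    diff = (gray[1:] - gray[:-1]).abs()            # (T-1, H, W) motion energy
    return torch.stack([diff.mean(), diff.std()])  # (2,)


class BlindVQANet(nn.Module):
    """Fuses CNN spatial cues with temporal cues and regresses a quality score."""

    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)   # spatial feature extractor (untrained)
        backbone.fc = nn.Identity()                # keep the 512-d pooled features
        self.backbone = backbone
        self.regressor = nn.Sequential(            # maps fused cues to one score
            nn.Linear(512 + 2, 128), nn.ReLU(), nn.Linear(128, 1)
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (T, 3, H, W) — one video clip, frames treated as a batch
        spatial = self.backbone(frames).mean(dim=0)  # average-pool over time -> (512,)
        temporal = temporal_features(frames)         # (2,)
        fused = torch.cat([spatial, temporal])       # (514,)
        return self.regressor(fused)                 # predicted quality score


if __name__ == "__main__":
    clip = torch.rand(16, 3, 224, 224)               # dummy 16-frame clip
    score = BlindVQANet()(clip)
    print(score.item())
```

In practice the regressor would be trained against subjective quality scores (e.g. MOS labels) from a VQA dataset; the sketch only shows how the two kinds of cues can be combined in a single predictor.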
