Conference on Stereoscopic Displays and Virtual Reality Systems

Predictive coding of depth images across multiple views



Abstract

A 3D video stream is typically obtained from a set of synchronized cameras that simultaneously capture the same scene (multiview video). This technology enables applications such as free-viewpoint video, which allows the viewer to select a preferred viewpoint, or 3D TV, where the depth of the scene can be perceived using a special display. Because the user-selected view does not always correspond to a camera position, it may be necessary to synthesize a virtual camera view. To synthesize such a virtual view, we have adopted a depth image-based rendering technique that employs one depth map for each camera. Consequently, remote rendering of the 3D video requires a compression technique for both texture and depth data. This paper presents a predictive-coding algorithm for the compression of depth images across multiple views. The presented algorithm provides (a) improved coding efficiency for depth images over block-based motion-compensation encoders (H.264), and (b) random access to different views for fast rendering. The proposed depth-prediction technique works by synthesizing/computing the depth of 3D points based on the reference depth image. The attractiveness of the depth-prediction algorithm is that the prediction of depth data avoids an independent transmission of depth for each view, while simplifying view interpolation by synthesizing depth images for arbitrary viewpoints. We present experimental results for several multiview depth sequences, which show a quality improvement of up to 1.8 dB compared to H.264 compression.
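The depth prediction described above, synthesizing the depth of 3D points in a target view from a reference depth image, can be illustrated with a forward-warping sketch. This is a generic depth-image-based rendering step, not the paper's actual algorithm or parameters: it assumes a pinhole camera model with known intrinsics `K` and a known relative pose `(R, t)` between the reference and target cameras, and resolves occlusions with a simple z-buffer.

```python
import numpy as np

def warp_depth(depth_ref, K, R, t):
    """Forward-warp a reference depth map into a target view.

    Back-projects each reference pixel to a 3D point, applies the
    relative pose (R, t), reprojects with intrinsics K, and keeps the
    nearest depth per target pixel (z-buffering handles occlusions).
    Unfilled pixels remain np.inf and would need hole filling.
    """
    h, w = depth_ref.shape
    K_inv = np.linalg.inv(K)
    # Homogeneous pixel coordinates (u, v, 1) for every reference pixel.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u.ravel(), v.ravel(), np.ones(h * w)])
    # Back-project: X = depth * K^-1 [u v 1]^T, then move to the target frame.
    X_ref = (K_inv @ pix) * depth_ref.ravel()
    X_tgt = R @ X_ref + t[:, None]
    # Project into the target image plane.
    proj = K @ X_tgt
    z = proj[2]
    valid = z > 0  # discard points behind the target camera
    ut = np.round(proj[0, valid] / z[valid]).astype(int)
    vt = np.round(proj[1, valid] / z[valid]).astype(int)
    zt = z[valid]
    pred = np.full((h, w), np.inf)
    inside = (ut >= 0) & (ut < w) & (vt >= 0) & (vt < h)
    for x, y, d in zip(ut[inside], vt[inside], zt[inside]):
        if d < pred[y, x]:  # keep the nearest surface (z-buffer test)
            pred[y, x] = d
    return pred
```

In a predictive-coding setting, such a warped depth map would serve as the prediction for the target view, so only the residual (and hole regions) needs to be encoded rather than an independent depth image per view.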
