IEEE Transactions on Multimedia

Robust Semi-Automatic Depth Map Generation in Unconstrained Images and Video Sequences for 2D to Stereoscopic 3D Conversion



Abstract

We describe a system for robustly estimating synthetic depth maps in unconstrained images and videos, for semi-automatic conversion into stereoscopic 3D. Currently, this conversion is either fully automatic or done manually by rotoscopers. Automatic conversion is the least labor-intensive but makes user intervention and error correction difficult; manual conversion is the most accurate but time-consuming and costly. A semi-automatic method blends the merits of both, allowing faster yet accurate conversion. The user draws strokes on the image, or over several keyframes for video, corresponding to rough depth estimates. The remaining depths are then determined automatically, producing depth maps from which stereoscopic 3D content is generated, with Depth Image Based Rendering synthesizing the artificial views. Depth map estimation can be cast as a multi-label segmentation problem in which each class is a depth. For video, the user labels only the first frame, and the strokes are propagated using computer vision techniques. We combine the merits of two well-established segmentation algorithms, Graph Cuts and Random Walks: the diffusion of Random Walks combined with the edge-preserving behavior of Graph Cuts yields good results. Compared to a similar framework, we generate good-quality content that is more suitable for perception.
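The abstract names a hybrid of Graph Cuts and Random Walks but does not spell out the combination. As a minimal sketch of just the Random Walks half, following Grady's classic random walker formulation, the code below treats user strokes as seeds on a Gaussian-weighted image lattice and solves one sparse Laplacian system per depth label; the function name, the `beta` parameter, and the seed encoding are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.sparse import coo_matrix, diags
from scipy.sparse.linalg import spsolve

def random_walker_depth(image, seeds, depths, beta=90.0):
    """Propagate sparse stroke depths over an image with the Random
    Walker algorithm (Grady, 2006) -- a sketch of one half of the
    paper's Graph Cuts / Random Walks hybrid.

    image  : (H, W) grayscale intensities in [0, 1]
    seeds  : (H, W) int array, -1 where unlabeled, else an index into
             `depths` marking a user stroke
    depths : one depth value per stroke label
    """
    depths = np.asarray(depths, dtype=float)
    H, W = image.shape
    n = H * W
    idx = np.arange(n).reshape(H, W)
    g = image.astype(float).ravel()

    # 4-connected lattice edges with Gaussian intensity weights.
    pi = np.concatenate([idx[:, :-1].ravel(), idx[:-1, :].ravel()])
    pj = np.concatenate([idx[:, 1:].ravel(), idx[1:, :].ravel()])
    w = np.exp(-beta * (g[pi] - g[pj]) ** 2) + 1e-6

    # Combinatorial Laplacian L = D - A (symmetric adjacency).
    A = coo_matrix((np.r_[w, w], (np.r_[pi, pj], np.r_[pj, pi])),
                   shape=(n, n)).tocsr()
    L = (diags(np.asarray(A.sum(axis=1)).ravel()) - A).tocsr()

    labels = seeds.ravel()
    seeded = labels >= 0
    Lu = L[~seeded][:, ~seeded]   # Laplacian block over unlabeled pixels
    B = L[~seeded][:, seeded]     # coupling to the seeded pixels

    # One sparse solve per depth label: each solution is the probability
    # that a random walker started at a pixel first reaches a seed of
    # that label.
    probs = np.column_stack([
        spsolve(Lu, -B @ (labels[seeded] == k).astype(float))
        for k in range(len(depths))
    ])

    depth = np.empty(n)
    depth[seeded] = depths[labels[seeded]]
    depth[~seeded] = depths[np.argmax(probs, axis=1)]
    return depth.reshape(H, W)
```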
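For video, the abstract says strokes from the first frame are "propagated using computer vision techniques" without naming them. One plausible, hypothetical stand-in is dense optical flow; the sketch below carries stroke labels to the next frame with OpenCV's Farneback flow. The function name and parameters are illustrative, not the paper's method.

```python
import numpy as np
import cv2

def propagate_strokes(prev_gray, next_gray, seeds):
    """Carry stroke labels from one frame to the next using dense
    Farneback optical flow (an assumed propagation scheme).

    seeds : (H, W) int array, -1 where unlabeled, else a depth label
    """
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    H, W = seeds.shape
    ys, xs = np.nonzero(seeds >= 0)           # stroke pixel coordinates
    dx, dy = flow[ys, xs, 0], flow[ys, xs, 1]
    nx = np.clip(np.round(xs + dx).astype(int), 0, W - 1)
    ny = np.clip(np.round(ys + dy).astype(int), 0, H - 1)

    out = np.full_like(seeds, -1)
    out[ny, nx] = seeds[ys, xs]               # labels follow the motion
    return out
```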

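The final step the abstract mentions is Depth Image Based Rendering (DIBR) for synthesizing the two artificial views. A minimal sketch of that idea, assuming a normalized depth map and a hypothetical `max_disp` parameter, shifts each pixel horizontally in proportion to its depth; real DIBR must additionally handle depth ordering and fill disocclusions, which is omitted here.

```python
import numpy as np

def render_stereo_pair(image, depth, max_disp=16):
    """Forward-warp one view into a left/right pair (simplified DIBR).

    image    : (H, W, 3) frame
    depth    : (H, W) floats in [0, 1], larger = closer to the viewer
    max_disp : assumed maximum disparity in pixels at depth 1.0
    """
    H, W = depth.shape
    half = (depth * max_disp / 2).astype(int)
    cols = np.arange(W)

    left = np.zeros_like(image)
    right = np.zeros_like(image)
    for y in range(H):
        # Opposite horizontal shifts for the two virtual eyes; clipping
        # keeps warped coordinates inside the frame. Pixels mapping to
        # the same column are resolved by write order here, whereas
        # proper DIBR warps in depth order so nearer pixels win.
        left[y, np.clip(cols + half[y], 0, W - 1)] = image[y]
        right[y, np.clip(cols - half[y], 0, W - 1)] = image[y]

    # Holes (black pixels) remain where the shift exposed background;
    # production pipelines fill them by inpainting or extrapolation.
    return left, right
```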