Applied Optics

Simultaneous reconstruction of multiple depth images without off-focus points in integral imaging using a graphics processing unit

Abstract

The reconstruction of multiple depth images with a ray back-propagation algorithm in three-dimensional (3D) computational integral imaging is computationally burdensome. Further, a reconstructed depth image consists of focus and off-focus areas. Focus areas are 3D points on an object surface located at the reconstructed depth, while off-focus areas are 3D points in free space that do not belong to any object surface. If not removed, off-focus areas adversely affect high-level analysis of a 3D object, including classification, recognition, and tracking. Here, we use a graphics processing unit (GPU), which supports parallel processing across many processors, to reconstruct multiple depth images simultaneously using a lookup table that stores the shift values along the x and y directions for each elemental image over a given depth range. Moreover, each 3D point on a depth image can be evaluated by analyzing the statistical variance of its corresponding samples, which are captured in the two-dimensional (2D) elemental images; these variances are used to classify depth-image pixels as focus or off-focus points. The measurement of focus and off-focus points across the multiple depth images is likewise implemented in parallel on the GPU. The proposed method assumes that the 3D object is not occluded during the capture stage of the integral imaging process. Experimental results demonstrate that the method removes off-focus points from the reconstructed depth images, and that performing this removal on a GPU greatly improves overall computational speed compared with a CPU.
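
The abstract describes the algorithm only at a high level, so the following is a minimal NumPy sketch of the general idea: a precomputed lookup table of per-depth, per-elemental-image shifts, shift-and-average back-propagation, and variance-based classification of focus versus off-focus points. The simplified pinhole shift model, the parameter names (pitch_px, gap, depths, var_threshold), the wrap-around np.roll shift, and the fixed variance threshold are illustrative assumptions rather than the paper's actual implementation; in the GPU version each output pixel of each depth image would be handled by its own thread.

import numpy as np

def build_shift_lut(n_lens_y, n_lens_x, pitch_px, gap, depths):
    """Lookup table of integer (dy, dx) shifts for every depth and elemental image."""
    lut = np.zeros((len(depths), n_lens_y, n_lens_x, 2), dtype=np.int32)
    for d, z in enumerate(depths):
        for j in range(n_lens_y):
            for i in range(n_lens_x):
                # Simplified pinhole model (assumption): the shift grows with the
                # lens index and shrinks with the reconstruction depth z.
                lut[d, j, i, 0] = int(round(j * pitch_px * gap / z))
                lut[d, j, i, 1] = int(round(i * pitch_px * gap / z))
    return lut

def reconstruct_with_mask(elemental, lut, var_threshold=50.0):
    """elemental: (n_lens_y, n_lens_x, H, W) grayscale elemental images.
    Returns per-depth reconstructions and boolean focus masks (True = focus point)."""
    n_depth = lut.shape[0]
    n_lens_y, n_lens_x, H, W = elemental.shape
    recon = np.zeros((n_depth, H, W))
    masks = np.zeros((n_depth, H, W), dtype=bool)
    for d in range(n_depth):
        # Gather, for every output pixel, the corresponding sample from each
        # elemental image (np.roll wraps at the borders; a real implementation
        # would pad or crop instead).
        samples = np.zeros((n_lens_y * n_lens_x, H, W))
        k = 0
        for j in range(n_lens_y):
            for i in range(n_lens_x):
                dy, dx = lut[d, j, i]
                samples[k] = np.roll(elemental[j, i], shift=(dy, dx), axis=(0, 1))
                k += 1
        recon[d] = samples.mean(axis=0)       # back-propagated depth image
        variance = samples.var(axis=0)        # spread of the corresponding samples
        masks[d] = variance < var_threshold   # consistent samples -> focus point
    return recon, masks

In this sketch, every (depth, pixel) pair is independent, which is what makes the reconstruction and the variance-based masking map naturally onto parallel GPU threads, as the paper exploits.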
