Motion segmentation from clustering of sparse point features using spatially constrained mixture models.


Abstract

Motion is one of the strongest cues available for segmentation. While motion segmentation finds wide-ranging applications in object detection, tracking, surveillance, robotics, image and video compression, scene reconstruction, video editing, and so on, it faces various challenges: accurate motion recovery from noisy data, the varying complexity of the models required to describe the computed image motion, the dynamic nature of the scene, which may include a large number of independently moving objects undergoing occlusions, and the need to make high-level decisions while dealing with long image sequences. With sparse point features as the pivotal element, this thesis presents three distinct approaches that address some of the above-mentioned motion segmentation challenges.

The first part deals with the detection and tracking of sparse point features in image sequences. A framework is proposed in which point features are tracked jointly; traditionally, sparse features have been tracked independently of one another. Combining ideas from Lucas-Kanade and Horn-Schunck, this thesis presents a technique in which the estimated motion of a feature is influenced by the motion of neighboring features. The joint feature tracking algorithm improves on the standard Lucas-Kanade tracker, especially when tracking features in untextured regions.

The second part concerns motion segmentation using sparse point feature trajectories. The approach uses a spatially constrained mixture-model framework and a greedy EM algorithm to group point features. In contrast to previous work, the algorithm is incremental in nature and allows an arbitrary number of objects traveling at different relative speeds to be segmented, eliminating the need to initialize the number of groups explicitly.
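The joint tracking scheme from the first part can be illustrated with a small numerical sketch. This is not the thesis code; the function name, the Jacobi-style iteration, and the synthetic gradient matrices are illustrative assumptions. Each feature solves its own Lucas-Kanade normal equations, regularized toward the current motion estimates of its neighbors (a Horn-Schunck-like smoothness term):

```python
import numpy as np

def joint_track(A, b, neighbors, lam=1.0, iters=200):
    """Jointly estimate 2-D displacements d_i for n features.

    Minimizes   sum_i ||A_i d_i - b_i||^2
              + lam * sum_i sum_{j in N(i)} ||d_i - d_j||^2
    by fixed-point iteration: each feature's Lucas-Kanade data term
    (gradient matrix A_i, mismatch vector b_i) is regularized toward
    the current estimates of its neighbors.
    """
    n = len(b)
    d = np.zeros((n, 2))
    for _ in range(iters):
        d_new = np.empty_like(d)
        for i, nb in enumerate(neighbors):
            # Normal equations with the smoothness term folded in:
            # (A_i^T A_i + lam*|N(i)|*I) d_i = A_i^T b_i + lam * sum_j d_j
            lhs = A[i].T @ A[i] + lam * len(nb) * np.eye(2)
            rhs = A[i].T @ b[i] + lam * d[nb].sum(axis=0)
            d_new[i] = np.linalg.solve(lhs, rhs)
        d = d_new
    return d

# Feature 0 is well textured; feature 1 sits in a nearly untextured patch
# (tiny gradient matrix), so on its own its motion would be ill-conditioned.
A = np.array([np.eye(2), 1e-3 * np.eye(2)])
b = np.array([[2.0, 0.0], [0.0, 0.0]])
d = joint_track(A, b, neighbors=[[1], [0]])
print(d)  # both displacements converge near (2, 0): the textured neighbor drags feature 1 along
```

The key design point, as in the abstract, is that the smoothness term makes the system solvable even where the per-feature gradient matrix alone is rank deficient.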
The primary parameter of the algorithm is the amount of evidence that must accumulate before features are grouped. A statistical goodness-of-fit test monitors the change in a group's motion parameters over time in order to update the reference frame automatically. The approach runs in real time and segments various challenging sequences, captured from both still and moving cameras, that contain multiple independently moving objects and motion blur.

The third part of the thesis deals with the use of specialized models for motion segmentation. Articulated human motion is chosen as a representative example that requires a complex model to describe accurately. A motion-based approach to segmentation, tracking, and pose estimation of articulated bodies is presented. The human body is represented by the trajectories of a number of sparse points. A novel motion descriptor encodes the spatial relationships of the motion vectors representing the various parts of the person and can discriminate between articulated and non-articulated motions, as well as between various poses and view angles. Furthermore, a nearest-neighbor search over labeled training data, consisting of the human gait cycle in multiple views, finds the closest motion descriptor, and the resulting distance is fed to a hidden Markov model defined over multiple poses and viewpoints to obtain temporally consistent pose estimates. Experimental results on various sequences of walking subjects at multiple viewpoints and scales demonstrate the effectiveness of the approach. In particular, the purely motion-based approach is able to track people in night-time sequences, even when appearance-based cues are unavailable.

Finally, an application of image segmentation is presented in the context of iris segmentation. The iris is a widely used biometric for recognition and is known to be highly accurate when the segmentation of the iris region is near perfect.
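The temporal smoothing step in the third part, nearest-neighbor descriptor distances fed into an HMM over poses and viewpoints, amounts to a standard Viterbi decode. The sketch below is illustrative only: negated descriptor distances stand in for emission log-likelihoods, and the uniform transition model and the toy distance matrix are assumptions, not the thesis's trained model.

```python
import numpy as np

def viterbi(log_emit, log_trans, log_init):
    """Most likely state sequence of an HMM.

    log_emit:  (T, S) per-frame emission log-likelihoods
    log_trans: (S, S) transition log-probabilities, row = from-state
    log_init:  (S,)   initial state log-probabilities
    """
    T, S = log_emit.shape
    dp = log_init + log_emit[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        # scores[i, j]: best path ending in state i, then transition i -> j
        scores = dp[:, None] + log_trans
        back[t] = scores.argmax(axis=0)
        dp = scores.max(axis=0) + log_emit[t]
    path = [int(dp.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy example: 3 gait-phase states, one descriptor distance per state per frame.
dist = np.array([[0.2, 3.0, 3.0],
                 [3.0, 0.2, 3.0],
                 [3.0, 3.0, 0.2]])
states = viterbi(-dist, np.log(np.full((3, 3), 1 / 3)), np.log(np.full(3, 1 / 3)))
print(states)  # [0, 1, 2]
```

With a non-uniform, "sticky" transition matrix, the same decode suppresses isolated frames whose nearest-neighbor match is an outlier, which is what yields the temporally consistent pose estimates described above.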
Non-ideal situations arise when the iris is occluded by eyelashes or eyelids, or when the overall quality of the segmented iris is degraded by illumination changes or by out-of-plane rotation of the eye. The proposed iris segmentation approach combines the appearance and the geometry of the eye to segment iris regions from non-ideal images. The image is modeled as a Markov random field, and a graph-cuts-based energy minimization algorithm labels each pixel as eyelash, pupil, iris, or background using texture and image intensity information. The iris shape is modeled as an ellipse and is used to refine the pixel-based segmentation. The results indicate the effectiveness of the segmentation algorithm in handling non-ideal iris images.
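A minimal sketch of the MRF pixel-labeling idea follows, with an important caveat: the thesis minimizes the energy with graph cuts, whereas this illustration substitutes ICM (iterated conditional modes), a much simpler coordinate-descent minimizer, to stay dependency-free. The intensity-only unary term, the class means, `beta`, and the two-class toy image are all assumptions for illustration.

```python
import numpy as np

def icm_label(img, means, beta=2.0, iters=5):
    """Pixel labeling under an MRF energy.

    Unary term: (intensity - class mean)^2.
    Pairwise term: Potts model, cost beta per disagreeing 4-neighbor.
    Minimized greedily with ICM: sweep the pixels, setting each label
    to the locally cheapest choice given its neighbors' current labels.
    """
    H, W = img.shape
    L = len(means)
    # Initialize from the unary term alone (nearest class mean).
    labels = np.abs(img[..., None] - means).argmin(axis=-1)
    for _ in range(iters):
        for y in range(H):
            for x in range(W):
                costs = (img[y, x] - means) ** 2
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W:
                        costs = costs + beta * (np.arange(L) != labels[ny, nx])
                labels[y, x] = int(costs.argmin())
    return labels

# Two-class toy image: dark left half, bright right half, one noisy pixel.
img = np.full((6, 6), 0.1)
img[:, 3:] = 0.9
img[2, 1] = 0.9                        # isolated outlier in the dark region
out = icm_label(img, means=np.array([0.1, 0.9]))
print(out)  # the smoothness term relabels the outlier to match its neighbors
```

The same energy structure extends to the four labels used in the thesis (eyelash, pupil, iris, background) by enlarging `means` to per-class appearance models; graph cuts would then find a stronger minimum of the same energy than ICM does.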