
Decomposing Matrices, Tensors, and Images


Abstract

In this thesis we apply techniques from algebraic geometry to problems arising from optimization and statistics. In particular, we consider data that takes the form of a matrix, a tensor, or an image, and we study how to decompose it so as to find additional and seemingly hidden information about its origin and formation. We show that the practical uses of such decompositions are complemented by appealing algebraic and geometric structure.

In Chapter 2 of this thesis we focus on matrix-shaped data. The singular value decomposition, which lies at the core of modern algorithms and can be found efficiently, is not always enough to capture the structure of the data. Oftentimes the matrix at hand, as well as the elements of its decomposition, are required to have a certain positivity structure, and we need to design algorithms and theory to exploit this structure. Statistical mixture models, for instance, are based on finding a nonnegative decomposition of a nonnegative matrix. We study the algebraic and geometric properties of such decompositions in Section 2.1. Another type of decomposition of a nonnegative matrix, which is useful in convex optimization as well as quantum information theory, is the positive semidefinite decomposition. Here we require the elements of the decomposition to be positive semidefinite matrices of a given size. We explore this notion in Section 2.2. One of the most appealing properties of a nonnegative matrix is that we can think of it in terms of a pair of nested polyhedra. We rely on this geometric interpretation when studying nonnegative and positive semidefinite decompositions.

In Chapters 3 and 4 we turn our attention to data in the shape of a tensor. It is even more crucial in this case than in the matrix case to find a decomposition, not only because it provides hidden information about the data, but also because it allows us to store the tensor more concisely. However, one of the biggest obstacles in the field is that finding a decomposition of a general tensor is NP-hard. Inspired by the spectral theorem and the singular value decomposition for matrices, we study tensors whose decomposition consists of elements with an orthogonality structure. We call such tensors orthogonally decomposable, or odeco. One of their best properties is that, like matrices, odeco tensors can be decomposed efficiently. In Chapter 3 we study the spectral properties of such tensors. We give a formula for their eigenvectors and singular vector tuples. We note that computing these for a general tensor is hard both algebraically and computationally. In Chapter 4 we study the variety of orthogonally decomposable tensors, and we give polynomial equations that cut it out. We do this by showing that a tensor is orthogonally decomposable if and only if a certain algebra that arises from it is associative, yet another appealing property of odeco tensors. Despite all of these appealing properties, odeco tensors constitute a very low-dimensional variety. This is why in Section 4.2 we conclude our study of tensors by generalizing the notion of orthogonally decomposable tensors to that of frame decomposable tensors, which now cover the space of all tensors.

In Chapter 5 we study super-resolution imaging. The aim here is, given a low-resolution blurred image, to increase the resolution and remove the blur. This is achieved by decomposing the image into a sum of simpler images, one for each point source of light. We encode the locations of the point sources of light and their intensities in a discrete measure, and propose a convex optimization problem in the space of measures to find this unknown measure. We show that, in the absence of noise and in the case of a one-dimensional image, the global optimum of this optimization problem recovers the true locations.
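As a concrete illustration of the nonnegative matrix decompositions discussed for Chapter 2, the following Python sketch (not taken from the thesis; the test matrix, target rank, and iteration count are illustrative choices) factors a nonnegative matrix M into nonnegative factors W and H using the classical multiplicative-update heuristic:

import numpy as np

def nonnegative_factorization(M, r, iters=500, seed=0):
    """Approximate a nonnegative matrix M by W @ H with W, H >= 0,
    using the classical Lee-Seung multiplicative updates.
    Illustrative sketch only, not an algorithm from the thesis."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    W = rng.random((m, r)) + 1e-3   # random nonnegative initialization
    H = rng.random((r, n)) + 1e-3
    eps = 1e-12                     # guard against division by zero
    for _ in range(iters):
        H *= (W.T @ M) / (W.T @ W @ H + eps)
        W *= (M @ H.T) / (W @ H @ H.T + eps)
    return W, H

# A 4x4 nonnegative matrix that admits an exact rank-2 nonnegative factorization.
M = np.array([[1.0, 2.0, 1.0, 0.0],
              [2.0, 4.0, 2.0, 0.0],
              [0.0, 1.0, 1.0, 1.0],
              [1.0, 3.0, 2.0, 1.0]])
W, H = nonnegative_factorization(M, r=2)
print(np.round(W @ H, 2))  # approximately reconstructs M

The thesis studies the algebraic and geometric structure of such factorizations rather than this particular update rule.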
机译:在本文中,我们将代数几何技术应用于优化和统计问题。特别是,我们考虑采用矩阵,张量或图像形式的数据,并研究如何分解它,以便找到有关其起源和形成的其他看似隐藏的信息。我们证明了这种分解的实际应用是由吸引人的代数和几何结构所补充的。在本论文的第二章中,我们着重于矩阵形状的数据。奇异值分解是现代算法的核心,可以高效地找到,它并不总是足以捕获数据的结构。通常,要求手头的矩阵及其分解元素具有一定的正性结构,我们需要设计算法和理论来利用这种结构。例如,统计混合模型基于发现非负矩阵的非负分解。我们将在第2.1节中研究此类分解的代数和几何性质。非负矩阵分解的另一种类型是正半定分解,它可用于凸优化和量子信息论中。在这里,我们要求分解的元素是给定大小的正半定矩阵。我们将在2.2节中探讨这一概念。非负矩阵最吸引人的特性之一是我们可以从一对嵌套多面体的角度来思考它。在研究非负和正半定分解时,我们依赖于这种几何解释。在第3章和第4章中,我们将注意力转向张量形状的数据。在这种情况下,比在矩阵情况下找到分解更为关键,这不仅是因为它提供了有关数据的隐藏信息,而且还因为它允许我们更简洁地存储张量。但是,该领域的最大障碍之一是发现普通张量的分解是NP难的。受谱定理和矩阵奇异值分解的启发,我们研究了张量,其分解由具有正交结构的元素组成。我们称这种张量为正交可分解的,即odeco。它们最好的特性之一是,像矩阵一样,odeco张量可以被有效地分解。在第三章中,我们研究了这种张量的光谱特性。我们给出它们的特征向量和奇异向量元组的公式。我们注意到,对于一般张量而言,计算这些困难在代数和计算上都是困难的。在第4章中,我们研究了正交可分解张量的各种形式,并给出了多项式方程式将其切掉。我们这样做是通过证明张量在且仅当由它产生的给定代数具有缔合性时才可正交分解,而代数张量的又一个吸引人的特性。尽管具有所有这些吸引人的特性,但odeco张量却构成了非常低维的变化。这就是为什么在第4.2节中通过将正交可分解张量的概念推广到框架可分解张量的概念来结束对张量的研究的原因,框架可分解张量现在涵盖了所有张量的空间。在第5章中,我们研究了超分辨率成像。在给定低分辨率模糊图像的情况下,此处的目的是提高分辨率并消除模糊。这是通过将图像分解为更简单的图像的总和而获得的,每个点光源一个。我们以离散量度对点光源的位置及其强度进行编码,并在量度空间中提出凸优化问题以找到此未知量度。我们表明,在没有噪声的情况下,在一维图像的情况下,此优化问题的全局最优值可以恢复真实位置。
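For reference, the orthogonally decomposable (odeco) tensors of Chapters 3 and 4 can be stated, in the symmetric order-three case, by the following standard formula (generic notation, not necessarily the thesis's):

% A symmetric tensor T in (R^n)^{\otimes 3} is odeco if
\[
  T \;=\; \sum_{i=1}^{k} \lambda_i \, v_i \otimes v_i \otimes v_i ,
  \qquad \langle v_i , v_j \rangle = \delta_{ij} ,
\]
% in which case each v_i in the decomposition is an eigenvector of T:
\[
  T(v_i, v_i, \cdot) \;=\; \lambda_i v_i .
\]

Unlike general tensors, for which computing eigenvectors is hard, the vectors v_i above are among the eigenvectors of an odeco tensor and can be read off from the decomposition; Chapter 3 describes the full set of eigenvectors.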

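The convex program over measures mentioned for Chapter 5 is typically of the following form (a generic total-mass minimization over nonnegative measures, stated here as an assumption; the exact formulation in the thesis may differ):

\[
  \min_{\mu \,\ge\, 0} \; \mu(\mathbb{R})
  \quad \text{subject to} \quad
  \int \psi_k(t) \, d\mu(t) \;=\; y_k , \qquad k = 1, \dots, m ,
\]
% where y_1, ..., y_m are the observed low-resolution measurements,
% the \psi_k encode the blur (point-spread function), and the target
% measure is \mu^\star = \sum_j c_j \delta_{t_j}, with t_j the source
% locations and c_j > 0 their intensities.
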
Bibliographic details

  • Author: Robeva, Elina Mihaylova
  • Author affiliation: University of California, Berkeley
  • Degree-granting institution: University of California, Berkeley
  • Subjects: Mathematics; Computer science; Statistics
  • Degree: Ph.D.
  • Year: 2016
  • Pages: 195 p.
  • Total pages: 195
  • Original format: PDF
  • Language: English (eng)

