
Random projection methods for stochastic convex minimization.



Abstract

The first focus of this thesis is solving a stochastic convex minimization problem over an arbitrary family of nonempty, closed, and convex sets. The problem has random features: the gradient or subgradient of the objective function carries stochastic errors, and the number of constraint sets can be large or even infinite. The constraint sets may not be known a priori; they are revealed through random realizations, or chosen at random from a collection of constraint sets over the horizon, as in online learning. Traditional projection algorithms for minimization require projecting onto the complete collection of constraints at once, or onto a subset of them selected by a predefined rule. In practical applications, however, either not all constraints are known a priori, or, even if they are, projecting onto their intersection may be computationally prohibitive. We propose a two-step gradient/subgradient iterative method with random projections. In the first step, a noisy gradient/subgradient step is taken before the random constraint set realization is observed; this yields an intermediate point obtained without regard to feasibility violation. Once the set realization is revealed, or chosen from the collection of constraint sets, the feasibility violation of the intermediate point is corrected by projection. We show that projecting onto a random subcollection of the constraint sets with a diminishing stepsize is sufficient for the iterates to converge to the solution set almost surely. Convergence of the algorithm under constant and nondiminishing, nonsummable stepsizes is also proved, up to an error bound.

As a first set of experiments, we test the performance of the algorithm on a dynamic control system. We study three versions of the problem, with correlated unknown-but-bounded additive noise, uncorrelated unknown-but-bounded additive noise, and uncorrelated bounded output and peak input additive noise, all under a fully known system description. This is essentially a robust least squares estimation problem in which state parameters are recovered from corrupted input and output data. We reformulate the linear least squares estimation problem as a stochastic convex minimization problem and then solve it with the two-step random projection algorithm. Although the problem has an infinite number of constraints, one for each realization of the error term within a bounded set, the algorithm visits only a finite subset of them and converges to the solution set. We also prove existence of a solution and provide equivalent minimization formulations or upper bounds for these three types of robust least squares problems. A standard subgradient algorithm is used to gauge the performance of our method, and the implementation results are comparable to those found in the literature.

Our next focus is a stochastic convex feasibility problem. We explore an algorithmic approach to both consistent and inconsistent convex feasibility problems over closed convex uncertain sets. Our attention centers on the uncertain nature of the sets and on finding a feasible point using a random subcollection of them. The sets we consider may carry uncertainty due to inaccurate or imprecise spatial, spectral, or stochastic information and confidence levels. For this objective we consider a stochastic optimization problem of minimizing an expected weighted proximity function over a collection of closed, convex sets.
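To make the two-step scheme concrete, the following Python sketch shows one way such an iteration could look on a toy problem. The instance, the 1/k stepsize, and the halfspace constraints are illustrative assumptions, not the thesis's experiments: each iteration takes a noisy gradient step on one randomly sampled least squares term, then corrects feasibility by projecting onto a single randomly drawn constraint set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy instance (not from the thesis): minimize E[(a_i^T x - b_i)^2]
# over the intersection of many halfspaces {x : c_j^T x <= d_j}, touching only
# one random data sample and one random constraint set per iteration.
n, m_data, m_sets = 5, 200, 50
A = rng.normal(size=(m_data, n))
x_true = rng.normal(size=n)
b = A @ x_true + 0.1 * rng.normal(size=m_data)   # noisy measurements
C = rng.normal(size=(m_sets, n))                 # halfspace normals c_j
d = C @ x_true + 1.0                             # offsets chosen so x_true is feasible

def proj_halfspace(x, c, d_j):
    """Euclidean projection of x onto the halfspace {y : c^T y <= d_j}."""
    viol = c @ x - d_j
    return x if viol <= 0 else x - (viol / (c @ c)) * c

x = np.zeros(n)
for k in range(1, 20001):
    alpha = 1.0 / k                      # diminishing stepsize
    i = rng.integers(m_data)             # random sample gives a noisy gradient
    g = 2.0 * A[i] * (A[i] @ x - b[i])
    y = x - alpha * g                    # step 1: noisy gradient step
    j = rng.integers(m_sets)             # step 2: correct feasibility w.r.t. one
    x = proj_halfspace(y, C[j], d[j])    #         randomly revealed constraint set

print("max constraint violation:", float(np.max(C @ x - d)))
print("distance to x_true:", float(np.linalg.norm(x - x_true)))
```

The point of the sketch is only that no iteration ever touches the full collection of constraints; feasibility is enforced one randomly selected set at a time.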
We show that the proposed algorithm converges to a point in the solution set when that set is nonempty. In the case of an inconsistent feasibility problem, i.e., when the intersection of the closed convex constraint sets is empty, the algorithm minimizes the proximity function. Projecting onto a subcollection of sets can be viewed as lying somewhere between a randomized implementation of the alternating projection method and the parallel projection method, but our method is not deterministic: it uses random projections onto sets that carry additive bounded noise, and each realization within the bounded disturbances has an equal chance of occurrence. The conventional approach to set-theoretic estimation problems provides solutions that conform to constraint sets known a priori or observed, but it fails to account for the fact that sets built on a priori or observed data may carry disturbances or erroneously predicted statistical information, which can render the sets inconsistent. Attributes of the original signal, such as amplitude bounds, region of support, and band-limitedness, that are used to build sets in estimation problems may not be accurate. Additionally, sets built from moments, spectral properties, distributions, and bound information rest on predicted stochastic estimates; overly conservative confidence bounds or statistical assumptions may cause inconsistencies, as can noise perturbations in measurements or random variations in the impulse response of a system. Our algorithm projects onto a subcollection of sets, some of which carry a random realization of noise. The implementation results show that the algorithm converges to the solution asymptotically even though it projects onto only a random subcollection of sets at each iteration. All in all, this thesis presents iterative methods for stochastic convex minimization problems and stochastic convex set intersection problems, proves their almost sure convergence, and demonstrates their performance in numerical experiments.
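As a rough illustration of the feasibility part, the sketch below uses a hypothetical toy setup (Euclidean balls with noisily observed centers, uniform weights, a 1/k stepsize) rather than the thesis's experiments. It minimizes an expected weighted proximity function by stepping along the gradient contributed by one randomly drawn, noisily observed set per iteration; if the sets happen not to intersect, the iterate settles near a minimizer of the proximity function instead of a common point.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sketch: closed convex sets are balls B_i whose centers are seen
# with bounded additive noise; minimize f(x) = 0.5 * sum_i w_i * dist(x, B_i)^2
# using one randomly drawn set per iteration.
dim, n_sets = 2, 6
centers = rng.normal(scale=3.0, size=(n_sets, dim))
radii = np.full(n_sets, 1.0)
weights = np.full(n_sets, 1.0 / n_sets)

def proj_ball(x, c, r):
    """Euclidean projection of x onto the ball {y : ||y - c|| <= r}."""
    gap = np.linalg.norm(x - c)
    return x if gap <= r else c + r * (x - c) / gap

x = np.zeros(dim)
for k in range(1, 50001):
    alpha = 1.0 / k
    i = rng.integers(n_sets)
    noisy_center = centers[i] + rng.uniform(-0.05, 0.05, size=dim)  # bounded set noise
    # gradient of 0.5 * dist(x, B_i)^2 at x is x - proj_{B_i}(x)
    x = x - alpha * weights[i] * (x - proj_ball(x, noisy_center, radii[i]))

prox = 0.5 * sum(w * np.linalg.norm(x - proj_ball(x, c, r)) ** 2
                 for w, c, r in zip(weights, centers, radii))
print("weighted proximity value at the final iterate:", float(prox))
```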

Bibliographic record

  • Author: Tursun, Umit Deniz.
  • Affiliation: University of Illinois at Urbana-Champaign.
  • Degree-granting institution: University of Illinois at Urbana-Champaign.
  • Subjects: Industrial engineering; Applied mathematics.
  • Degree: Ph.D.
  • Year: 2013
  • Pages: 103 p.
  • Total pages: 103
  • Format: PDF
  • Language: English (eng)
