IEEE Transactions on Neural Networks and Learning Systems

Efficient Recovery of Low-Rank Matrix via Double Nonconvex Nonsmooth Rank Minimization

Abstract

The efficient recovery of low-rank matrices has recently attracted rapidly increasing attention in computer vision and machine learning. The popular convex approach to rank minimization is nuclear norm-based minimization (NNM), which usually leads to a biased solution, since NNM tends to overshrink the rank components and treats each rank component equally. To address this issue, a number of nonconvex nonsmooth rank (NNR) relaxations have been widely exploited. Different from these convex and nonconvex rank substitutes, this paper first introduces a general and flexible rank relaxation function, named the weighted NNR relaxation function, which is derived from the initial double NNR (DNNR) relaxations; that is, a DNNR relaxation function acts on a nonconvex singular value function (SVF). To solve the DNNR minimization problem, an iteratively reweighted SVF optimization algorithm with a continuation technique is devised, in which the weighting vector is defined by computing supergradient values; the closed-form solution of each subproblem can then be obtained efficiently by a general proximal operator, provided that the elements of the weighting vector are in nondecreasing order. We next prove that the objective function values decrease monotonically and that any limit point of the generated subsequence is a critical point. Combining the Kurdyka-Łojasiewicz property with some mild assumptions, we further establish a global convergence guarantee. As an application to the matrix completion problem, experimental results on both synthetic and real-world data show that our method is competitive with several state-of-the-art convex and nonconvex matrix completion methods.
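
The abstract describes the algorithmic pattern only in words. As a concrete illustration, the following is a minimal NumPy sketch of a generic iteratively reweighted singular-value-thresholding scheme of this kind, assuming a log-det surrogate as a stand-in concave penalty and a geometric decay of the threshold as the continuation step; the names (logdet_supergradient, weighted_svt, complete_matrix) and the parameters (eps, tau0, decay, n_iters) are illustrative assumptions, not the paper's DNNR formulation.

```python
import numpy as np

def logdet_supergradient(sigma, eps=1e-2):
    # Supergradient of the concave log-det surrogate g(s) = log(s + eps)
    # (an assumed stand-in penalty). Larger singular values get smaller
    # weights, so on the nonincreasing spectrum returned by SVD the weight
    # vector is nondecreasing, which is the order condition the abstract
    # states for the closed-form proximal step.
    return 1.0 / (sigma + eps)

def weighted_svt(Y, weights, tau):
    # Proximal operator of the weighted nuclear norm: shrink the i-th
    # singular value by tau * weights[i] (closed form when the weights
    # are in nondecreasing order).
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - tau * weights, 0.0)
    return (U * s_shrunk) @ Vt

def complete_matrix(M, mask, tau0=1.0, decay=0.95, tau_min=1e-3, n_iters=200):
    # Iteratively reweighted scheme: recompute the weights from the
    # supergradient at the current singular values, take one weighted-SVT
    # proximal step, re-impose the observed entries, and shrink the
    # threshold geometrically (a simple continuation strategy).
    X = np.where(mask, M, 0.0)
    tau = tau0
    for _ in range(n_iters):
        sigma = np.linalg.svd(X, compute_uv=False)
        w = logdet_supergradient(sigma)
        X = weighted_svt(X, w, tau)
        X = np.where(mask, M, X)          # keep observed entries fixed
        tau = max(tau * decay, tau_min)   # continuation on the threshold
    return X

# Toy usage: recover a rank-5 matrix from roughly 50% observed entries.
rng = np.random.default_rng(0)
L = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 50))
mask = rng.random((50, 50)) < 0.5
X_hat = complete_matrix(L, mask)
print(np.linalg.norm((X_hat - L)[~mask]) / np.linalg.norm(L[~mask]))
```

The design point mirrored here is the one the abstract highlights: the supergradient of a concave penalty assigns smaller weights to larger singular values, so the weight vector is nondecreasing on the sorted spectrum and the weighted-shrinkage subproblem retains a closed-form solution.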
