IEEE Transactions on Information Theory

A Dirty Model for Multiple Sparse Regression


Abstract

The task of sparse linear regression consists of finding an unknown sparse vector from linear measurements. Solving this task even under "high-dimensional" settings, where the number of samples is fewer than the number of variables, is now known to be possible via methods such as the LASSO. We consider the multiple sparse linear regression problem, where the task consists of recovering several related sparse vectors at once. A simple approach to this task would involve solving independent sparse linear regression problems, but a natural question is whether one can reduce the overall number of samples required by leveraging partial sharing of the support sets, or nonzero patterns, of the signal vectors. A line of recent research has studied the use of $\ell_{1}/\ell_{q}$ norm block-regularizations with $q > 1$ for such problems. However, depending on the level of sharing, these could actually perform worse in sample complexity when compared to solving each problem independently. We present a new "adaptive" method for multiple sparse linear regression that can leverage support and parameter overlap when it exists, but not pay a penalty when it does not. We show how to achieve this using a very simple idea: decompose the parameters into two components and regularize these differently. We show, theoretically and empirically, that our method strictly and noticeably outperforms both the $\ell_{1}$ and $\ell_{1}/\ell_{q}$ methods, over the entire range of possible overlaps (except at boundary cases, where we match the best method), even under high-dimensional scaling.

