IEEE Transactions on Information Theory

Fast and Sample-Efficient Federated Low Rank Matrix Recovery From Column-Wise Linear and Quadratic Projections



Abstract

We study the following lesser-known low rank (LR) recovery problem: recover an $n \times q$ rank-$r$ matrix, $\boldsymbol{X}^{\ast} = [\boldsymbol{x}^{\ast}_{1}, \boldsymbol{x}^{\ast}_{2}, \ldots, \boldsymbol{x}^{\ast}_{q}]$, with $r \ll \min(n,q)$, from $m$ independent linear projections of each of its $q$ columns, i.e., from $\boldsymbol{y}_{k} := \boldsymbol{A}_{k} \boldsymbol{x}^{\ast}_{k}$, $k \in [q]$, where $\boldsymbol{y}_{k}$ is an $m$-length vector with $m < n$. The matrices $\boldsymbol{A}_{k}$ are known and mutually independent for different $k$. We introduce a novel gradient descent (GD) based solution called AltGD-Min. We show that, if the $\boldsymbol{A}_{k}$s are i.i.d. with i.i.d. Gaussian entries, and if the right singular vectors of $\boldsymbol{X}^{\ast}$ satisfy the incoherence assumption, then $\epsilon$-accurate recovery of $\boldsymbol{X}^{\ast}$ is possible with order $(n+q) r^{2} \log(1/\epsilon)$ total samples and order $mqnr \log(1/\epsilon)$ time. Compared with existing work, this is the fastest solution. For $\epsilon < r^{1/4}$, it also has the best sample complexity. A simple extension of AltGD-Min also provably solves LR Phase Retrieval, which is a magnitude-only generalization of the above problem. AltGD-Min factorizes the unknown $\boldsymbol{X}$ as $\boldsymbol{X} = \boldsymbol{U}\boldsymbol{B}$, where $\boldsymbol{U}$ and $\boldsymbol{B}$ are matrices with $r$ columns and $r$ rows, respectively. It alternates between a (projected) GD step for updating $\boldsymbol{U}$ and a minimization step for updating $\boldsymbol{B}$. Each of its iterations is as fast as that of regular projected GD because the minimization over $\boldsymbol{B}$ decouples column-wise. At the same time, we can prove exponential error decay for AltGD-Min, which we are unable to prove for projected GD. Finally, it can also be efficiently federated, with a communication cost of only $nr$ per node, instead of $nq$ for projected GD.
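The alternating structure the abstract describes can be sketched in a few lines of NumPy: factor the estimate as $\boldsymbol{X} = \boldsymbol{U}\boldsymbol{B}$, update each column $\boldsymbol{b}_k$ by an independent least-squares fit (this is the step that decouples column-wise), and update $\boldsymbol{U}$ by one gradient step followed by a QR re-orthonormalization. This is a minimal illustrative sketch, not the paper's exact algorithm: the step size, iteration count, and the un-truncated spectral initialization used below are heuristic assumptions for a small synthetic instance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small synthetic instance (sizes chosen for illustration only)
n, q, r, m = 20, 30, 2, 40

# Ground-truth rank-r matrix X* = U* B*
Ustar = np.linalg.qr(rng.standard_normal((n, r)))[0]
Bstar = rng.standard_normal((r, q))
Xstar = Ustar @ Bstar

# Column-wise measurements y_k = A_k x*_k with i.i.d. Gaussian A_k
A = rng.standard_normal((q, m, n))
Y = np.einsum('kmn,nk->mk', A, Xstar)  # Y[:, k] = A_k @ Xstar[:, k]

# Spectral-style initialization: top-r left singular vectors of
# (1/m) [A_1^T y_1, ..., A_q^T y_q]  (a simplified, un-truncated variant)
X0 = np.stack([A[k].T @ Y[:, k] for k in range(q)], axis=1) / m
U = np.linalg.svd(X0, full_matrices=False)[0][:, :r]

eta = 0.3 / (m * q)  # heuristic step size, not the paper's tuned choice
for _ in range(100):
    # Minimization step: b_k = argmin_b ||y_k - A_k U b||^2,
    # a separate small least-squares problem per column (decoupled)
    B = np.stack(
        [np.linalg.lstsq(A[k] @ U, Y[:, k], rcond=None)[0] for k in range(q)],
        axis=1,
    )
    # Projected GD step on U: one gradient step, then QR to restore
    # orthonormal columns
    grad = sum(
        A[k].T @ (A[k] @ U @ B[:, k] - Y[:, k])[:, None] @ B[:, k][None, :]
        for k in range(q)
    )
    U = np.linalg.qr(U - eta * grad)[0]

Xhat = U @ B
err = np.linalg.norm(Xhat - Xstar) / np.linalg.norm(Xstar)
```

Note how a federated variant falls out of this structure: each node holding $(\boldsymbol{A}_k, \boldsymbol{y}_k)$ can compute its own $\boldsymbol{b}_k$ and its $n \times r$ gradient term locally, so only the $nr$-sized quantities (not the full $n \times q$ estimate) need to be exchanged.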
