IEEE Transactions on Neural Networks and Learning Systems

A Divide-and-Conquer Method for Scalable Robust Multitask Learning



Abstract

Multitask learning (MTL) aims at improving the generalization performance of multiple tasks by exploiting the shared factors among them. An important line of research in MTL is the robust MTL (RMTL) methods, which use trace-norm regularization to capture task relatedness via a low-rank structure. The existing algorithms for the RMTL optimization problems rely on the accelerated proximal gradient (APG) scheme, which needs repeated full singular value decomposition (SVD) operations. However, the time complexity of a full SVD is O(min(m, d) md) for an RMTL problem with m tasks and d features, which becomes unaffordable in real-world MTL applications that often have a large number of tasks and high-dimensional features. In this paper, we propose a scalable solution for large-scale RMTL, with either the least squares loss or the squared hinge loss, by a divide-and-conquer method. The proposed method divides the original RMTL problem into several size-reduced subproblems, solves these cheaper subproblems in parallel by any base algorithm (e.g., APG) for RMTL, and then combines the results to obtain the final solution. Our theoretical analysis indicates that, with high probability, the recovery errors of the proposed divide-and-conquer algorithm are bounded by those of the base algorithm. Furthermore, to solve the subproblems with the least squares loss or the squared hinge loss, we propose two efficient base algorithms, both based on the linearized alternating direction method. Experimental results demonstrate that, with little loss of accuracy, our method is substantially faster than the state-of-the-art APG algorithms for RMTL.
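The SVD bottleneck the abstract refers to arises because APG-style solvers for trace-norm-regularized problems repeatedly evaluate the proximal operator of the trace norm, which soft-thresholds the singular values of the weight matrix. A minimal NumPy sketch of that step (the function name `svt` is ours, not from the paper):

```python
import numpy as np

def svt(W, tau):
    """Proximal operator of the trace norm: singular value thresholding.

    Requires a full SVD of the d x m matrix W on every call, which is
    the O(min(m, d) * m * d) per-iteration cost that makes plain APG
    expensive for many tasks (m) and high-dimensional features (d).
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s = np.maximum(s - tau, 0.0)  # shrink singular values toward zero
    return (U * s) @ Vt           # low-rank reconstruction
```

Thresholding drives small singular values exactly to zero, which is how trace-norm regularization induces the low-rank structure that captures task relatedness.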
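The divide-and-conquer scheme itself can be pictured as follows. This is a hypothetical sketch under our own simplifying assumption that subproblems are formed by randomly partitioning the task columns; the paper's actual splitting and combination rules, and the names `dc_rmtl` and `base_solver`, are illustrative only:

```python
import numpy as np

def dc_rmtl(W, k, base_solver, seed=None):
    """Divide-and-conquer sketch: split the m task columns of the d x m
    problem into k groups, solve each size-reduced subproblem with any
    base RMTL algorithm (e.g., APG), and stitch the columns back together.
    """
    rng = np.random.default_rng(seed)
    d, m = W.shape
    perm = rng.permutation(m)          # random assignment of tasks to groups
    out = np.zeros_like(W)
    for cols in np.array_split(perm, k):
        # Subproblems are independent, so this loop is embarrassingly parallel.
        out[:, cols] = base_solver(W[:, cols])
    return out
```

Each subproblem involves only about m/k tasks, so the per-subproblem SVD cost drops accordingly, which is the source of the speedup the experiments report.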
