Journal: Abstract and Applied Analysis

Least Square Regularized Regression for Multitask Learning



Abstract

The study of multitask learning algorithms is an important research topic. This paper proposes a least-square regularized regression algorithm for multitask learning whose hypothesis space is the union of a sequence of Hilbert spaces. The algorithm consists of two steps: selecting the optimal Hilbert space and then searching for the optimal function within it. We assume that the distributions of the different tasks are related by a set of transformations under which every Hilbert space in the hypothesis space is norm invariant. We prove that, under this assumption, the optimal prediction function of every task lies in the same Hilbert space. Based on this result, a pivotal error decomposition is established, which allows samples of related tasks to be used to bound the excess error of the target task. We obtain an upper bound on the sample error of the related tasks and, based on this bound, derive potentially faster learning rates than those of single-task learning algorithms.
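The two-step structure described in the abstract can be illustrated with a minimal sketch. This is not the paper's exact algorithm: here the candidate Hilbert spaces are assumed to be Gaussian-kernel reproducing kernel Hilbert spaces indexed by the kernel width, step 1 selects a space by pooled held-out error over the related tasks, and step 2 fits the least-square regularized regressor for the target task in the selected space. All function and parameter names are illustrative.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma):
    # Gram matrix of the Gaussian kernel; each width sigma indexes
    # one candidate RKHS in the union forming the hypothesis space.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_rls(X, y, sigma, lam):
    # Least-square regularized regression in the chosen RKHS:
    # minimize (1/m) sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2.
    # The minimizer is f = sum_i alpha_i K(x_i, .) with
    # alpha = (K + lam * m * I)^{-1} y.
    m = len(y)
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * m * np.eye(m), y)

def predict(X_train, alpha, sigma, X_test):
    return gaussian_kernel(X_test, X_train, sigma) @ alpha

def two_step_mtl(tasks, target, sigmas, lam):
    # Step 1: select the candidate Hilbert space using pooled
    # held-out error over the related tasks (split each task's
    # sample in half: fit on one half, evaluate on the other).
    def heldout_error(sigma):
        err = 0.0
        for X, y in tasks:
            half = len(y) // 2
            alpha = fit_rls(X[:half], y[:half], sigma, lam)
            pred = predict(X[:half], alpha, sigma, X[half:])
            err += np.mean((pred - y[half:]) ** 2)
        return err

    best_sigma = min(sigmas, key=heldout_error)
    # Step 2: search for the optimal function for the target task
    # inside the selected space.
    X_t, y_t = target
    return best_sigma, fit_rls(X_t, y_t, best_sigma, lam)
```

The split into a space-selection step driven by the related tasks and a fitting step on the target task mirrors the abstract's premise that, under the norm-invariance assumption, the optimal predictors of all tasks live in the same Hilbert space, so related-task samples carry information about which space to select.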
