
Multi-Task Learning via Structured Regularization: Formulations, Algorithms, and Applications.



Abstract

Multi-task learning (MTL) aims to improve the generalization performance of the resulting classifiers by learning multiple related tasks simultaneously. Specifically, MTL exploits intrinsic task relatedness, through which informative domain knowledge from each task can be shared across tasks and thus facilitate the learning of each individual task. Sharing domain knowledge among tasks is particularly desirable when there are many related tasks but only limited training data available for each.

Modeling the relationship among multiple tasks is critical to the generalization performance of MTL algorithms. In this dissertation, I propose a series of MTL approaches that assume the tasks are intrinsically related via a shared low-dimensional feature space. The proposed approaches address different scenarios and settings; each is formulated as a mathematical optimization problem that minimizes the empirical loss regularized by a different structure. For every proposed MTL formulation, I develop an optimization algorithm that finds its globally optimal solution efficiently. For certain approaches, I also conduct theoretical analysis, deriving conditions for recovering the globally optimal solution as well as performance bounds. To demonstrate practical performance, I apply the proposed approaches to two real-world applications: (1) automated annotation of Drosophila gene-expression pattern images, and (2) categorization of Yahoo web pages. The experimental results demonstrate the efficiency and effectiveness of the proposed algorithms.
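The abstract's formulation — minimizing a summed empirical loss subject to a regularizer that couples the tasks through a shared low-dimensional feature space — can be illustrated with a generic baseline. The sketch below is not the dissertation's specific method; it is a standard trace-norm (nuclear-norm) regularized multi-task least-squares model, solved by proximal gradient descent with singular value thresholding. All function names and parameter choices here are illustrative assumptions.

```python
import numpy as np

def svt(W, tau):
    """Singular value thresholding: the proximal operator of
    tau * ||W||_*, which shrinks W toward low rank."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def mtl_trace_norm(Xs, ys, lam=0.1, iters=200):
    """Minimize  sum_t 0.5*||X_t w_t - y_t||^2 + lam*||W||_*
    by proximal gradient; column t of W is the weight vector of task t.
    The trace norm encourages the task weight vectors to lie in a
    shared low-dimensional subspace."""
    d, T = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, T))
    # Conservative step size from the largest per-task Lipschitz constant.
    L = max(np.linalg.norm(X, 2) ** 2 for X in Xs)
    step = 1.0 / L
    for _ in range(iters):
        # Gradient of the smooth loss, one column per task.
        G = np.column_stack([X.T @ (X @ W[:, t] - y)
                             for t, (X, y) in enumerate(zip(Xs, ys))])
        # Gradient step, then the nuclear-norm proximal step.
        W = svt(W - step * G, step * lam)
    return W
```

In a typical run, tasks whose true weight vectors share a low-rank structure are recovered jointly: each task's solution borrows statistical strength from the others through the rank penalty, which is the knowledge-sharing mechanism the abstract describes.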

Record details

  • Author

    Chen, Jianhui.

  • Author affiliation

    Arizona State University.

  • Degree-granting institution: Arizona State University.
  • Subject: Computer Science.
  • Degree: Ph.D.
  • Year: 2011
  • Pages: 143 p.
  • Total pages: 143
  • Format: PDF
  • Language: English
