A Kernel Approach to Multi-Task Learning with Task-Specific Kernels


    Abstract: Several kernel-based methods for multi-task learning have been proposed, which leverage relations among tasks as regularization to enhance the overall learning accuracy. These methods assume that all tasks share the same kernel, which could limit their applications because in practice different tasks may need different kernels. The main challenge of introducing multiple kernels into multiple tasks is that models from different reproducing kernel Hilbert spaces (RKHSs) are not comparable, making it difficult to exploit relations among tasks. This paper addresses the challenge by formalizing the problem in the square integrable space (SIS). Specifically, it proposes a kernel-based method which makes use of a regularization term defined in SIS to represent task relations. We prove a new representer theorem for the proposed approach in SIS. We further derive a practical method for solving the learning problem and conduct a consistency analysis of the method. We discuss the relationship between our method and an existing method. We also give an SVM (support vector machine)-based implementation of our method for multi-label classification. Experiments on an artificial example and two real-world datasets show that the proposed method performs better than the existing method.
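    The core idea in the abstract — coupling tasks that live in different RKHSs through a regularizer defined in a common square integrable space — can be illustrated with a minimal sketch. The code below is not the authors' algorithm; it is an illustrative simplification under stated assumptions: two kernel ridge regressors with task-specific RBF kernels, coupled by penalizing the squared difference of their predictions on a shared set of anchor points (a finite-sample surrogate for an L2-distance regularizer, used here precisely because the two RKHS norms are not directly comparable). The function names and the choice of two tasks are for illustration only.

    ```python
    import numpy as np

    def rbf_kernel(X, Y, gamma):
        # Gaussian RBF kernel matrix: K[i, j] = exp(-gamma * ||X[i] - Y[j]||^2)
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def fit_two_tasks(X1, y1, X2, y2, Z, gamma1, gamma2, lam=0.1, mu=1.0):
        """Jointly fit two kernel ridge regressors with task-specific RBF
        kernels (different gamma per task). The tasks are coupled by a
        penalty mu * ||f1(Z) - f2(Z)||^2 on shared anchor points Z, a
        finite-sample stand-in for an L2-space regularizer on f1 - f2.
        Returns the dual coefficient vectors (a1, a2) of the two tasks.
        """
        K1 = rbf_kernel(X1, X1, gamma1)
        K2 = rbf_kernel(X2, X2, gamma2)
        P1 = rbf_kernel(Z, X1, gamma1)   # task-1 predictions on anchors: P1 @ a1
        P2 = rbf_kernel(Z, X2, gamma2)   # task-2 predictions on anchors: P2 @ a2
        # Setting the gradient of
        #   sum_t ||K_t a_t - y_t||^2 + lam * a_t' K_t a_t
        #   + mu * ||P1 a1 - P2 a2||^2
        # to zero yields one block-structured linear system in (a1, a2).
        A = np.block([
            [K1 @ K1 + lam * K1 + mu * P1.T @ P1, -mu * P1.T @ P2],
            [-mu * P2.T @ P1, K2 @ K2 + lam * K2 + mu * P2.T @ P2],
        ])
        b = np.concatenate([K1 @ y1, K2 @ y2])
        a = np.linalg.solve(A, b)
        return a[: len(y1)], a[len(y1):]
    ```

    With mu = 0 the system decouples into two independent kernel ridge regressions; increasing mu pulls the two task functions together on the anchor set, which is the kind of cross-task information sharing the paper's regularizer is designed to provide.
    
    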

     
