

Released

Conference Paper

Bayesian online multi-task learning using regularization networks

MPS-Authors
Dinuzzo, F. (http://pubman.mpdl.mpg.de/cone/persons/resource/persons83886)
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society

Locator
There are no locators available
Fulltext (public)
There are no public fulltexts available
Supplementary Material (public)
There is no public supplementary material available
Citation

Pillonetto, G., Dinuzzo, F., & De Nicolao, G. (2008). Bayesian online multi-task learning using regularization networks. In 2008 American Control Conference (ACC 2008) (pp. 4517-4522). Piscataway, NJ, USA: IEEE Service Center.


Cite as: http://hdl.handle.net/11858/00-001M-0000-0013-C8F3-0
Abstract
Recently, standard single-task kernel methods have been extended to multi-task learning within the framework of regularization. Experimental results have shown that such an approach can perform much better than single-task techniques, especially when few examples per task are available. A possible drawback, however, is computational complexity: when using regularization networks, for instance, complexity scales as the cube of the overall number of data points associated with all the tasks. In this paper, an efficient computational scheme is derived for a widely applied class of multi-task kernels. More precisely, a quadratic loss is assumed and the multi-task kernel is the sum of a common term and a task-specific one. The proposed algorithm performs online learning, recursively updating the estimates as new data become available. The learning problem is formulated in a Bayesian setting, and the optimal estimates are obtained by solving a sequence of subproblems that involve projections of random variables onto suitable subspaces. The algorithm is tested on a simulated data set.
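
To make the kernel structure described in the abstract concrete, the sketch below shows a multi-task kernel built as the sum of a common term and a task-specific term, plugged into a naive online regularization-network / Gaussian-process update. This is only an illustration of the problem setup, not the paper's efficient recursive algorithm: the naive refit below has exactly the cubic cost in the total number of examples that the paper sets out to avoid. The RBF form, length scales, and noise level are illustrative assumptions.

```python
import numpy as np

def rbf(x, y, length_scale):
    """Squared-exponential kernel on vectors x, y (illustrative choice)."""
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * length_scale ** 2))

def multi_task_kernel(task_i, x_i, task_j, x_j, ls_common=1.0, ls_task=0.5):
    """Common term shared by all tasks plus a term active only within a task."""
    k = rbf(x_i, x_j, ls_common)
    if task_i == task_j:
        k += rbf(x_i, x_j, ls_task)
    return k

class NaiveOnlineMultiTaskRN:
    """Stores all observed (task, x, y) triples and refits after each new example.

    Cost grows cubically with the total number of examples across tasks, which
    is the bottleneck the paper's recursive Bayesian scheme is designed to avoid."""

    def __init__(self, noise_var=0.1):
        self.noise_var = noise_var
        self.tasks, self.xs, self.ys = [], [], []
        self.alpha = None  # dual coefficients of the regularization network

    def update(self, task, x, y):
        self.tasks.append(task)
        self.xs.append(np.atleast_1d(np.asarray(x, dtype=float)))
        self.ys.append(float(y))
        n = len(self.ys)
        K = np.empty((n, n))
        for a in range(n):
            for b in range(n):
                K[a, b] = multi_task_kernel(self.tasks[a], self.xs[a],
                                            self.tasks[b], self.xs[b])
        # Posterior-mean coefficients under a quadratic loss (GP regression view).
        self.alpha = np.linalg.solve(K + self.noise_var * np.eye(n),
                                     np.asarray(self.ys))

    def predict(self, task, x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        k_star = np.array([multi_task_kernel(task, x, t, xi)
                           for t, xi in zip(self.tasks, self.xs)])
        return float(k_star @ self.alpha)

# Example: two related tasks observed online on simulated data.
model = NaiveOnlineMultiTaskRN(noise_var=0.05)
rng = np.random.default_rng(0)
for step in range(30):
    task = step % 2
    x = rng.uniform(-3, 3)
    y = np.sin(x) + 0.2 * task * np.cos(2 * x) + 0.05 * rng.standard_normal()
    model.update(task, x, y)
print("prediction for task 0 at x=1.0:", model.predict(0, 1.0))
```

The point of the paper is to obtain the same kind of estimates recursively, via projections onto suitable subspaces, without rebuilding and re-solving the full kernel system at every step as this sketch does.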