
Released

Conference Paper

Unifying Divergence Minimization and Statistical Inference Via Convex Duality

MPS-Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons83782

Altun, Y.
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society

Locator
There are no locators available
Fulltext (public)
There are no public fulltexts available
Supplementary Material (public)
There is no public supplementary material available
Citation

Altun, Y. (2006). Unifying Divergence Minimization and Statistical Inference Via Convex Duality. Learning Theory: 19th Annual Conference on Learning Theory (COLT 2006), 139-153.


Cite as: http://hdl.handle.net/11858/00-001M-0000-0013-D143-A
Abstract
In this paper we unify divergence minimization and statistical inference by means of convex duality. In the process of doing so, we prove that, as a special case, the dual of approximate maximum entropy estimation is maximum a posteriori estimation. Moreover, our treatment leads to stability and convergence bounds for many statistical learning problems. Finally, we show how an algorithm by Zhang can be used to solve this class of optimization problems efficiently.
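
As a guide to the abstract, the following is a minimal sketch of the kind of duality it refers to, in notation chosen here for illustration (feature map \phi, empirical feature mean \tilde{\phi}, tolerance \epsilon, reference measure p_0, dual norm \|\cdot\|_*) rather than taken from the paper itself:

\begin{align*}
\text{(approximate maximum entropy)} \quad
  & \min_{p} \; \mathrm{KL}(p \,\|\, p_0)
    \quad \text{s.t.} \quad \bigl\| \mathbb{E}_{p}[\phi(x)] - \tilde{\phi} \bigr\| \le \epsilon, \\
\text{(convex dual)} \quad
  & \max_{\theta} \; \langle \tilde{\phi}, \theta \rangle
    - \log \int \exp\bigl( \langle \phi(x), \theta \rangle \bigr) \, dp_0(x)
    - \epsilon \, \| \theta \|_{*}.
\end{align*}

Under these assumptions the dual is a norm-penalized log-likelihood: the term \epsilon \|\theta\|_* plays the role of a negative log-prior, which is how a maximum a posteriori reading arises, and setting \epsilon = 0 recovers the classical maximum entropy / maximum likelihood duality for exponential families.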