Released

Conference Paper

Fast Newton-type Methods for the Least Squares Nonnegative Matrix Approximation Problem

Citation

Kim, D., Sra, S., & Dhillon, I. (2007). Fast Newton-type Methods for the Least Squares Nonnegative Matrix Approximation Problem. In C. Apte, D. Skillicorn, B. Liu, & S. Parthasarathy (Eds.), 2007 SIAM International Conference on Data Mining (pp. 343-354). Pittsburgh, PA, USA: Society for Industrial and Applied Mathematics.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-CE1F-F
Abstract
Nonnegative matrix approximation (NNMA) is an effective matrix decomposition technique that has proven useful for a wide variety of applications, ranging from document analysis and image processing to bioinformatics. A few algorithms for NNMA exist, for example, Lee & Seung's multiplicative updates, alternating least squares, and certain gradient descent based procedures. All of these procedures suffer from either slow convergence, numerical instabilities, or, at worst, theoretical unsoundness. In this paper we present new and improved algorithms for the least-squares NNMA problem, which are not only theoretically well-founded, but also overcome many of the deficiencies of other methods. In particular, we use non-diagonal gradient scaling to obtain rapid convergence. Our methods provide numerical results superior to both Lee & Seung's method as well as to the alternating least squares (ALS) heuristic, which is known to work well in some situations but has no theoretical guarantees (Berry et al. 2006). Our approach extends naturally to include regularization and box-constraints, without sacrificing convergence guarantees. We present experimental results on both synthetic and real-world datasets to demonstrate the superiority of our methods, in terms of better approximations as well as efficiency.
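For context, the baseline the abstract compares against — Lee & Seung's multiplicative updates for the least-squares NNMA objective min ||A − WH||_F² with W, H ≥ 0 — can be sketched as follows. This is a minimal NumPy illustration of that standard baseline, not the Newton-type method the paper proposes; the function name and the `eps` safeguard are this sketch's own choices.

```python
import numpy as np

def nnma_multiplicative(A, k, iters=200, eps=1e-9, seed=0):
    """Lee & Seung's multiplicative updates for least-squares NNMA:
    find nonnegative W (m x k) and H (k x n) minimizing ||A - W H||_F^2.
    This is the baseline method, not the paper's Newton-type algorithm."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Nonnegative random initialization keeps all iterates nonnegative,
    # since the updates only multiply by nonnegative ratios.
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(iters):
        # H <- H * (W^T A) / (W^T W H); eps guards against division by zero
        H *= (W.T @ A) / (W.T @ W @ H + eps)
        # W <- W * (A H^T) / (W H H^T)
        W *= (A @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

The multiplicative form is what makes the method attractive (no step-size tuning, nonnegativity preserved automatically) but also what causes the slow convergence and numerical issues the abstract mentions: entries that hit zero can never recover, and progress stalls near the boundary of the feasible set.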