

Report (Released)

Large-scale Matrix Factorization with Distributed Stochastic Gradient Descent

MPS-Authors

Gemulla, Rainer
Databases and Information Systems, MPI for Informatics, Max Planck Society
http://pubman.mpdl.mpg.de/cone/persons/resource/persons44484

Citation

Gemulla, R., Haas, P. J., Nijkamp, E., & Sismanis, Y. (2011). Large-scale Matrix Factorization with Distributed Stochastic Gradient Descent (Local-ID: C1256DBF005F876D-5B618B1FF070E981C125784D0044B0D1-gemulla11). San Jose, CA: IBM Research Division. Retrieved from http://www.almaden.ibm.com/cs/people/peterh/dsgdTechRep.pdf.


Cite as: http://hdl.handle.net/11858/00-001M-0000-0010-147F-E
Abstract
As Web 2.0 and enterprise-cloud applications have proliferated, data mining algorithms increasingly need to be (re)designed to handle web-scale datasets. For this reason, low-rank matrix factorization has received a lot of attention in recent years, since it is fundamental to a variety of mining tasks, such as topic detection and collaborative filtering, that are increasingly being applied to massive datasets. We provide a novel algorithm to approximately factor large matrices with millions of rows, millions of columns, and billions of nonzero elements. Our approach rests on stochastic gradient descent (SGD), an iterative stochastic optimization algorithm; the idea is to exploit the special structure of the matrix factorization problem to develop a new "stratified" SGD variant that can be fully distributed and run on web-scale datasets using, e.g., MapReduce. The resulting distributed SGD factorization algorithm, called DSGD, provides good speed-up and handles a wide variety of matrix factorizations. We establish convergence properties of DSGD using results from stochastic approximation theory and regenerative process theory, and also describe the practical techniques used to optimize performance in our DSGD implementation. Experiments suggest that DSGD converges significantly faster and has better scalability properties than alternative algorithms.
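
For intuition, here is a minimal single-process sketch of the stratification idea described in the abstract: the nonzero entries of V are partitioned into a d x d grid of blocks, and each "stratum" is a set of d blocks that share no rows or columns, so the blocks of one stratum could be processed in parallel (e.g., by MapReduce workers). This is not the paper's implementation; the function name, the plain squared-error loss without regularization, the fixed step size eps, and the fixed diagonal stratum schedule are all simplifying assumptions.

import numpy as np

def dsgd_factorize(V, rank=10, d=4, epochs=20, eps=0.01, seed=0):
    """Approximately factor sparse V (a dict {(i, j): value}) as W @ H.

    Hypothetical sketch: a serial stand-in for DSGD's parallel strata.
    """
    rng = np.random.default_rng(seed)
    m = 1 + max(i for i, _ in V)
    n = 1 + max(j for _, j in V)
    W = rng.normal(scale=0.1, size=(m, rank))
    H = rng.normal(scale=0.1, size=(rank, n))

    # Assign each nonzero entry to its cell in the d x d grid of blocks.
    blocks = {}
    for (i, j), v in V.items():
        blocks.setdefault((i * d // m, j * d // n), []).append((i, j, v))

    for _ in range(epochs):
        # One epoch = d subepochs; each stratum here is a diagonal of the
        # block grid, so its d blocks touch disjoint rows of W and disjoint
        # columns of H and can be updated independently of one another.
        for s in range(d):
            stratum = [(b, (b + s) % d) for b in range(d)]
            for cell in stratum:  # in DSGD these blocks run in parallel
                for i, j, v in blocks.get(cell, []):
                    e = v - W[i] @ H[:, j]          # residual on one entry
                    w_old = W[i].copy()
                    W[i] += eps * 2 * e * H[:, j]   # SGD step on row factor
                    H[:, j] += eps * 2 * e * w_old  # SGD step on column factor
    return W, H

# Toy usage: a handful of nonzeros; the reconstruction of a training entry
# should move toward its observed value as epochs accumulate.
V = {(0, 0): 5.0, (0, 2): 3.0, (1, 1): 4.0, (2, 0): 1.0, (3, 3): 2.0}
W, H = dsgd_factorize(V, rank=2, d=2, epochs=200, eps=0.02)
print(round(float(W[0] @ H[:, 0]), 2))  # roughly 5.0 on this toy input

The serial loop over the blocks of a stratum is exactly the part that DSGD distributes: because no two blocks in a stratum share a row of W or a column of H, their SGD updates never conflict, which is what lets the algorithm scale to matrices with billions of nonzeros.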