  Large-scale Matrix Factorization with Distributed Stochastic Gradient Descent

Gemulla, R., Haas, P. J., Nijkamp, E., & Sismanis, Y. (2011). Large-scale Matrix Factorization with Distributed Stochastic Gradient Descent (Local-ID: C1256DBF005F876D-5B618B1FF070E981C125784D0044B0D1-gemulla11). San Jose, CA: IBM Research Division. Retrieved from http://www.almaden.ibm.com/cs/people/peterh/dsgdTechRep.pdf.


Files

dsgdTechRep.pdf (Any fulltext), 494KB
 
File Permalink: -
Name: dsgdTechRep.pdf
Description: -
OA-Status:
Visibility: Private
MIME-Type / Checksum: application/pdf
Technical Metadata:
Copyright Date: -
Copyright Info: -
License: -

Creators

Creators:
Gemulla, Rainer (1), Author
Haas, Peter J. (2), Author
Nijkamp, Erik (2), Author
Sismanis, Yannis (2), Author
Affiliations:
(1) Databases and Information Systems, MPI for Informatics, Max Planck Society, ou_24018
(2) External Organizations, ou_persistent22

Content

Free keywords: -
Abstract: As Web 2.0 and enterprise-cloud applications have proliferated, data mining algorithms increasingly need to be (re)designed to handle web-scale datasets. For this reason, low-rank matrix factorization has received a lot of attention in recent years, since it is fundamental to a variety of mining tasks, such as topic detection and collaborative filtering, that are increasingly being applied to massive datasets. We provide a novel algorithm to approximately factor large matrices with millions of rows, millions of columns, and billions of nonzero elements. Our approach rests on stochastic gradient descent (SGD), an iterative stochastic optimization algorithm; the idea is to exploit the special structure of the matrix factorization problem to develop a new "stratified" SGD variant that can be fully distributed and run on web-scale datasets using, e.g., MapReduce. The resulting distributed SGD factorization algorithm, called DSGD, provides good speed-up and handles a wide variety of matrix factorizations. We establish convergence properties of DSGD using results from stochastic approximation theory and regenerative process theory, and also describe the practical techniques used to optimize performance in our DSGD implementation. Experiments suggest that DSGD converges significantly faster and has better scalability properties than alternative algorithms.
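
As a rough, single-machine illustration of the stratified approach described in the abstract, the Python sketch below partitions the input matrix into d x d blocks and sweeps over d strata, each consisting of d blocks that share no rows or columns (so they could be processed in parallel). All names and parameters here (d, rank, step, reg, the toy data) are assumptions made for the example, not the paper's DSGD implementation, which distributes each stratum's blocks across workers, e.g. via MapReduce.

    import numpy as np

    # Illustrative sketch only: emulates the stratified-SGD blocking idea
    # sequentially on one machine; not the authors' DSGD code.

    def sgd_on_block(V, W, H, rows, cols, step=0.01, reg=0.05):
        # Plain SGD over the entries of one block V[rows, cols].
        for i in rows:
            for j in cols:
                wi, hj = W[i].copy(), H[:, j].copy()
                err = V[i, j] - wi @ hj
                W[i]    += step * (err * hj - reg * wi)
                H[:, j] += step * (err * wi - reg * hj)

    def dsgd_epoch(V, W, H, d=4):
        # One epoch: d strata, each made of d blocks with disjoint row and
        # column ranges, so the blocks of a stratum are mutually independent.
        row_parts = np.array_split(np.arange(V.shape[0]), d)
        col_parts = np.array_split(np.arange(V.shape[1]), d)
        for s in range(d):                     # pick a stratum
            for b in range(d):                 # its d independent blocks
                sgd_on_block(V, W, H, row_parts[b], col_parts[(b + s) % d])

    # Toy usage: approximately factor a small dense matrix as W @ H.
    rng = np.random.default_rng(0)
    V = rng.random((40, 30))
    rank = 5
    W, H = rng.random((40, rank)), rng.random((rank, 30))
    for _ in range(20):
        dsgd_epoch(V, W, H)
    print(np.linalg.norm(V - W @ H))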

Details

Language(s): eng - English
Dates: 2011
Publication Status: Published online
Pages: -
Publishing info: San Jose, CA : IBM Research Division
Table of Contents: -
Rev. Type: -
Identifiers: eDoc: 618949
URI: http://www.almaden.ibm.com/cs/people/peterh/dsgdTechRep.pdf
Other: Local-ID: C1256DBF005F876D-5B618B1FF070E981C125784D0044B0D1-gemulla11
Degree: -

Source 1

Title: IBM Research Report
Source Genre: Series
Creator(s):
Affiliations:
Publ. Info: -
Pages: -
Volume / Issue: RJ10481
Sequence Number: -
Start / End Page: -
Identifier: -