

Released

Conference Paper

A Weakly Supervised Model for Sentence-level Semantic Orientation Analysis with Multiple Experts

MPS-Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons45231

Qu,  Lizhen
Databases and Information Systems, MPI for Informatics, Max Planck Society;
International Max Planck Research School, MPI for Informatics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons44484

Gemulla,  Rainer
Databases and Information Systems, MPI for Informatics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons45720

Weikum,  Gerhard
Databases and Information Systems, MPI for Informatics, Max Planck Society;

Citation

Qu, L., Gemulla, R., & Weikum, G. (2012). A Weakly Supervised Model for Sentence-level Semantic Orientation Analysis with Multiple Experts. In 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (pp. 149-159). Stroudsburg, PA: ACL.


Cite as: http://hdl.handle.net/11858/00-001M-0000-0014-5FAF-0
Abstract
We propose the weakly supervised Multi-Experts Model (MEM) for analyzing the semantic orientation of opinions expressed in natural language reviews. In contrast to most prior work, MEM predicts both opinion polarity and opinion strength at the level of individual sentences; such fine-grained analysis helps to better understand why users like or dislike the entity under review. A key challenge in this setting is that it is hard to obtain sentence-level training data for both polarity and strength. For this reason, MEM is weakly supervised: It starts with potentially noisy indicators obtained from coarse-grained training data (i.e., document-level ratings), a small set of diverse base predictors, and, if available, small amounts of fine-grained training data. We integrate these noisy indicators into a unified probabilistic framework using ideas from ensemble learning and graph-based semi-supervised learning. Our experiments indicate that MEM outperforms state-of-the-art methods by a significant margin.
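To illustrate the general idea described in the abstract — combining noisy sentence-level scores from multiple base predictors via graph-based semi-supervised smoothing — here is a minimal, hypothetical sketch. It is not the authors' actual MEM implementation: the `propagate` function, the averaging of predictors, and the toy similarity matrix are all illustrative assumptions, using plain label propagation over a sentence-similarity graph.

```python
import numpy as np

def propagate(base_scores, similarity, alpha=0.5, iters=50):
    """Smooth noisy per-sentence polarity scores over a similarity graph.

    base_scores: (n,) initial scores, e.g. averaged from base predictors
    similarity:  (n, n) symmetric non-negative sentence-similarity matrix
    alpha:       weight on the graph-smoothed term vs. the initial scores
    """
    # Row-normalize the similarity matrix into a transition matrix
    W = similarity / similarity.sum(axis=1, keepdims=True)
    f = base_scores.copy()
    for _ in range(iters):
        # Mix each sentence's neighbors' scores with its original evidence
        f = alpha * (W @ f) + (1 - alpha) * base_scores
    return f

# Three sentences; two hypothetical base predictors emit noisy
# polarity scores in [-1, 1] (positive = favorable opinion)
predictor_a = np.array([0.8, -0.2, -0.9])
predictor_b = np.array([0.6, 0.1, -0.7])
base = (predictor_a + predictor_b) / 2  # simple ensemble average

# Sentences 2 and 3 are similar to each other; sentence 1 stands alone
sim = np.array([[1.0, 0.1, 0.1],
                [0.1, 1.0, 0.8],
                [0.1, 0.8, 1.0]])

scores = propagate(base, sim)
```

After propagation, the ambiguous second sentence (base score near zero) is pulled toward the negative score of its similar neighbor, which is the qualitative effect a graph-based semi-supervised component contributes to an ensemble of weak predictors.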