Human Computing and Crowdsourcing Methods for Knowledge Acquisition

Kondreddi, S. K. (2014). Human Computing and Crowdsourcing Methods for Knowledge Acquisition. PhD Thesis, Universität des Saarlandes, Saarbrücken.


Basic

Resource type: Thesis

Files

Related URLs

URL:
http://scidok.sulb.uni-saarland.de/volltexte/2014/5794/ (Full text (general))
Description:
-
OA-Status:
Green

Creators

Creator:
Kondreddi, Sarath Kumar1, 2, Author
Triantafillou, Peter1, Referee
Berberich, Klaus1, Advisor
Affiliations:
1Databases and Information Systems, MPI for Informatics, Max Planck Society, ou_24018
2International Max Planck Research School, MPI for Informatics, Max Planck Society, Campus E1 4, 66123 Saarbrücken, DE, ou_1116551

Content

Keywords: -
Abstract: Ambiguity, complexity, and diversity in natural language textual expressions
are major hindrances to automated knowledge extraction. As a result,
state-of-the-art methods for extracting entities and relationships from
unstructured data make incorrect extractions or produce noise. With the advent
of human computing, computationally hard tasks have been addressed through
human inputs. While text-based knowledge acquisition can benefit from this
approach, humans alone cannot bear the burden of extracting knowledge from the
vast textual resources that exist today. Even making payments for crowdsourced
acquisition can quickly become prohibitively expensive.
In this thesis we present principled methods that effectively garner human
computing inputs for improving the extraction of knowledge-base facts from
natural language texts. Our methods complement automatic extraction techniques
with human computing to reap the benefits of both while overcoming each other's
limitations. We present the architecture and implementation of HIGGINS, a
system that combines an information extraction (IE) engine with a human
computing (HC) engine to produce high quality facts. The IE engine combines
statistics derived from large Web corpora with semantic resources like WordNet
and ConceptNet to construct a large dictionary of entity and relational
phrases. It employs specifically designed statistical language models for
phrase relatedness to come up with questions and relevant candidate answers
that are presented to human workers. Through extensive experiments we establish
the superiority of this approach in extracting relation-centric facts from
text. In our experiments we extract facts about fictitious characters in
narrative text, where the issues of diversity and complexity in expressing
relations are far more pronounced. Finally, we also demonstrate how interesting
human computing games can be designed for knowledge acquisition tasks.
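The abstract mentions statistical language models for phrase relatedness that drive the generation of questions and candidate answers. As a rough illustration only, and not the model actually used in the thesis, phrase relatedness can be sketched as pointwise mutual information (PMI) over phrase co-occurrence counts; the function names and the toy corpus below are invented for the example.

```python
import math
from collections import Counter
from itertools import combinations

def phrase_relatedness(sentences):
    """Build a toy PMI-based relatedness score from co-occurrence counts.

    Each sentence is a list of already-extracted phrases. Returns a
    function score(a, b) giving the PMI of phrases a and b.
    """
    phrase_counts = Counter()
    pair_counts = Counter()
    total = 0
    for phrases in sentences:
        total += 1
        unique = sorted(set(phrases))
        for p in unique:
            phrase_counts[p] += 1
        for a, b in combinations(unique, 2):
            pair_counts[(a, b)] += 1

    def score(a, b):
        a, b = sorted((a, b))
        joint = pair_counts.get((a, b), 0)
        if joint == 0:
            return float("-inf")  # phrases never co-occur
        p_ab = joint / total
        p_a = phrase_counts[a] / total
        p_b = phrase_counts[b] / total
        return math.log(p_ab / (p_a * p_b))

    return score

# Rank candidate answers for a question about "Frodo" by relatedness,
# mimicking how co-occurrence statistics could surface candidates.
corpus = [
    ["Frodo", "carries", "the Ring"],
    ["Frodo", "carries", "the Ring"],
    ["Frodo", "travels with", "Sam"],
    ["Gollum", "covets", "the Ring"],
]
score = phrase_relatedness(corpus)
candidates = ["the Ring", "Sam"]
ranked = sorted(candidates, key=lambda c: score("Frodo", c), reverse=True)
```

Here "Sam" outranks "the Ring" because, although "the Ring" co-occurs with "Frodo" more often, its high overall frequency lowers its PMI; this frequency correction is the usual motivation for PMI-style relatedness measures.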

Details

Language: eng - English
Dates: 2014-05-06; 2014-05-12; 2014
Publication status: Published
Pages: 116 p.
Publishing info: Saarbrücken : Universität des Saarlandes
Table of contents: -
Review method: -
Identifiers: BibTex Citekey: Kondreddi2014b
URN: urn:nbn:de:bsz:291-scidok-57948
DOI: 10.22028/D291-26564
Other: hdl:20.500.11880/26620
Degree: PhD

Event

Legal Case

Project information

Source