LIL Reading Group
Revision as of 18:22, 6 December 2010
This is the wiki page for an informal NLP reading group. Although one person officially leads each session, meetings are structured as open discussions.
We meet in CSE 624 on Thursdays at 4:30. We also have a mailing list, lil-group.
{| class="wikitable"
! Date !! Title !! Authors !! Link !! Leader !! Venue
|-
|11/18/2010||Generalized Expectation Criteria for Bootstrapping Extractors using Record-Text Alignment||Kedar Bellare, Andrew McCallum||http://www.cs.umass.edu/~kedarb/papers/dbie_ge_align.pdf||Raphael||EMNLP-09
|-
| colspan="6" | ABSTRACT: Traditionally, machine learning approaches for information extraction require human annotated data that can be costly and time-consuming to produce. However, in many cases, there already exists a database (DB) with schema related to the desired output, and records related to the expected input text. We present a conditional random field (CRF) that aligns tokens of a given DB record and its realization in text. The CRF model is trained using only the available DB and unlabeled text with generalized expectation criteria. An annotation of the text induced from inferred alignments is used to train an information extractor. We evaluate our method on a citation extraction task in which alignments between DBLP database records and citation texts are used to train an extractor. Experimental results demonstrate an error reduction of 35% over a previous state-of-the-art method that uses heuristic alignments.
|-
|12/2/2010||Posterior Regularization for Structured Latent Variable Models||Kuzman Ganchev, João Graça, Jennifer Gillenwater, Ben Taskar||http://www.seas.upenn.edu/~taskar/pubs/pr_jmlr10.pdf||Yoav||JMLR-10
|-
| colspan="6" | ABSTRACT: We present posterior regularization, a probabilistic framework for structured, weakly supervised learning. Our framework efficiently incorporates indirect supervision via constraints on posterior distributions of probabilistic models with latent variables. Posterior regularization separates model complexity from the complexity of structural constraints it is desired to satisfy. By directly imposing decomposable regularization on the posterior moments of latent variables during learning, we retain the computational efficiency of the unconstrained model while ensuring desired constraints hold in expectation. We present an efficient algorithm for learning with posterior regularization and illustrate its versatility on a diverse set of structural constraints such as bijectivity, symmetry and group sparsity in several large scale experiments, including multi-view learning, cross-lingual dependency grammar induction, unsupervised part-of-speech induction, and bitext word alignment.
|-
|12/9/2010||Better Alignments = Better Translations?||Kuzman Ganchev, João Graça, Ben Taskar||http://www.seas.upenn.edu/~taskar/pubs/acl08.pdf||Mark||ACL-08
|-
| colspan="6" | ABSTRACT: Automatic word alignment is a key step in training statistical machine translation systems. Despite much recent work on word alignment methods, alignment accuracy increases often produce little or no improvements in machine translation quality. In this work we analyze a recently proposed agreement-constrained EM algorithm for unsupervised alignment models. We attempt to tease apart the effects that this simple but effective modification has on alignment precision and recall trade-offs, and how rare and common words are affected across several language pairs. We propose and extensively evaluate a simple method for using alignment models to produce alignments better-suited for phrase-based MT systems, and show significant gains (as measured by BLEU score) in end-to-end translation systems for six language pairs used in recent MT competitions.
|}
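The core computational step behind the posterior regularization paper above is a KL projection of the model posterior onto a constraint set. A minimal sketch of that projection for a single inequality constraint, assuming a toy discrete posterior (our illustration, not the paper's code):

```python
import math

# Posterior regularization E-step (sketch): project the model posterior
# p(z|x) onto Q = {q : E_q[phi(z)] <= b} for one constraint feature phi.
# The KL projection has the closed form
#     q(z) proportional to p(z) * exp(-lam * phi(z)),  lam >= 0,
# where the multiplier lam is found by projected gradient ascent on the dual.

def project_posterior(p, phi, b, lr=0.1, steps=2000):
    """Return q minimizing KL(q || p) subject to E_q[phi] <= b."""
    lam = 0.0
    for _ in range(steps):
        w = [pi * math.exp(-lam * fi) for pi, fi in zip(p, phi)]
        z = sum(w)
        q = [wi / z for wi in w]
        grad = sum(qi * fi for qi, fi in zip(q, phi)) - b  # E_q[phi] - b
        lam = max(0.0, lam + lr * grad)  # dual ascent, keep lam >= 0
    w = [pi * math.exp(-lam * fi) for pi, fi in zip(p, phi)]
    z = sum(w)
    return [wi / z for wi in w]

p = [0.7, 0.2, 0.1]    # toy model posterior over three latent states
phi = [1.0, 0.0, 0.0]  # constraint feature: indicator of state 0
q = project_posterior(p, phi, b=0.5)
# E_q[phi] is pulled down from 0.7 toward the bound 0.5
```

The projected q then replaces the model posterior in the E-step, so the M-step keeps its usual form while the constraint holds in expectation.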
{| class="wikitable"
! Title !! Authors !! Link !! Venue
|-
|Discriminative Learning over Constrained Latent Representations||Ming-Wei Chang, Dan Goldwasser, Dan Roth, Vivek Srikumar||http://l2r.cs.uiuc.edu/~danr/Papers/CGRS10.pdf||NAACL-10
|-
|On the Use of Virtual Evidence in Conditional Random Fields||Xiao Li||http://research.microsoft.com/apps/pubs/default.aspx?id=81061||EMNLP-09
|-
|Prototype-driven learning for sequence models||Aria Haghighi, Dan Klein||http://portal.acm.org/citation.cfm?id=1220876||NAACL-06
|-
|Generalized Expectation Criteria||Andrew McCallum, Gideon Mann, Gregory Druck||http://www.cs.umass.edu/~mccallum/papers/ge08note.pdf||Technical Report-07
|-
|Learning from Labeled Features using Generalized Expectation Criteria||Gregory Druck, Gideon Mann, Andrew McCallum||http://www.cs.umass.edu/~mccallum/papers/druck08sigir.pdf||SIGIR-08
|}
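Several of the papers above use generalized expectation (GE) criteria. A toy sketch of the squared-error form of one GE term, assuming a two-label log-linear model and a made-up target distribution (our illustration, not the papers' code):

```python
import math

# Generalized expectation criterion (sketch): a term added to the training
# objective that penalizes the distance between the model's expected label
# distribution, averaged over instances containing a labeled feature, and
# a target distribution supplied as weak supervision.

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def ge_penalty(score_rows, target):
    """Squared distance between the mean model label distribution and the
    target expectation (one GE constraint, squared-error form)."""
    n = len(score_rows)
    mean = [0.0] * len(target)
    for scores in score_rows:
        probs = softmax(scores)
        mean = [m + pr / n for m, pr in zip(mean, probs)]
    return sum((m - t) ** 2 for m, t in zip(mean, target))

# With an untrained model that scores both labels equally, the model's
# expectation is [0.5, 0.5]; against a target of [0.9, 0.1] (the label a
# practitioner associates with the feature), the penalty is
# (0.5 - 0.9)**2 + (0.5 - 0.1)**2 = 0.32.
rows = [[0.0, 0.0] for _ in range(20)]  # 20 instances, 2 labels, zero scores
print(ge_penalty(rows, [0.9, 0.1]))     # 0.32, up to float rounding
```

In the papers the penalty is combined with (or replaces) the likelihood, and its gradient with respect to the model parameters drives the expectations toward the target.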