LIL Reading Group
This is the wiki page for an informal NLP reading group. While one person will officially lead each session, the group will be structured as a discussion.
We will be meeting in CSE 624 on Thursdays at 4:30. We also have a mailing list XXX.
|11/18/2010||Generalized Expectation Criteria for Bootstrapping Extractors using Record-Text Alignment||Kedar Bellare, Andrew McCallum||http://www.cs.umass.edu/~kedarb/papers/dbie_ge_align.pdf||Luke or Raphael||EMNLP-09|
ABSTRACT: Traditionally, machine learning approaches for information extraction require human annotated data that can be costly and time-consuming to produce. However, in many cases, there already exists a database (DB) with schema related to the desired output, and records related to the expected input text. We present a conditional random field (CRF) that aligns tokens of a given DB record and its realization in text. The CRF model is trained using only the available DB and unlabeled text with generalized expectation criteria. An annotation of the text induced from inferred alignments is used to train an information extractor. We evaluate our method on a citation extraction task in which alignments between DBLP database records and citation texts are used to train an extractor. Experimental results demonstrate an error reduction of 35% over a previous state-of-the-art method that uses heuristic alignments.
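To make the "generalized expectation criteria" idea above concrete before the meeting, here is a minimal sketch (with hypothetical toy data, not the paper's alignment model): a GE criterion scores a model by how far its expected label distribution, computed over unlabeled instances containing a designated feature, falls from a target distribution supplied as weak supervision.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ge_penalty(weights, X, feature_idx, target):
    """Squared-distance GE term: ||target - E_model[label | feature present]||^2."""
    mask = X[:, feature_idx] > 0            # unlabeled instances with the feature
    probs = softmax(X[mask] @ weights)      # model posteriors p(y | x)
    model_expectation = probs.mean(axis=0)  # expected label distribution
    return np.sum((target - model_expectation) ** 2)

# hypothetical unlabeled data: 4 instances, 3 binary features, 2 labels
X = np.array([[1., 0., 1.],
              [1., 1., 0.],
              [0., 1., 1.],
              [1., 0., 0.]])
W = np.zeros((3, 2))                        # untrained model: uniform posteriors
target = np.array([0.9, 0.1])               # prior belief: feature 0 mostly signals label 0

# uniform posteriors give expectation [0.5, 0.5],
# so the penalty is (0.9-0.5)^2 + (0.1-0.5)^2 = 0.32
print(round(ge_penalty(W, X, 0, target), 2))  # → 0.32
```

In training, this term would be added to the likelihood objective and driven toward zero by gradient descent, pulling the model's expectations toward the labeled-feature priors without any per-instance annotation.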
|12/2/2010||Posterior Regularization for Structured Latent Variable Models||Kuzman Ganchev, João Graça, Jennifer Gillenwater, Ben Taskar||http://www.seas.upenn.edu/~taskar/pubs/pr_jmlr10.pdf||Yoav||Journal of Machine Learning Research-2010|
ABSTRACT: We present posterior regularization, a probabilistic framework for structured, weakly supervised learning. Our framework efficiently incorporates indirect supervision via constraints on posterior distributions of probabilistic models with latent variables. Posterior regularization separates model complexity from the complexity of structural constraints it is desired to satisfy. By directly imposing decomposable regularization on the posterior moments of latent variables during learning, we retain the computational efficiency of the unconstrained model while ensuring desired constraints hold in expectation. We present an efficient algorithm for learning with posterior regularization and illustrate its versatility on a diverse set of structural constraints such as bijectivity, symmetry and group sparsity in several large scale experiments, including multi-view learning, cross-lingual dependency grammar induction, unsupervised part-of-speech induction, and bitext word alignment.
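The core computational step in posterior regularization is a KL-projection of the model posterior onto the constraint set. A minimal sketch with hypothetical numbers (a single expectation constraint over one discrete variable, not the paper's structured setting): the projection of p onto {q : E_q[f] >= b} has the form q(y) ∝ p(y)·exp(λ·f(y)), with λ >= 0 found by a one-dimensional dual search.

```python
import numpy as np

def project(p, f, b, lo=0.0, hi=50.0, iters=100):
    """KL-project posterior p onto {q : E_q[f] >= b} by bisection on the dual variable."""
    def tilt(lam):
        w = p * np.exp(lam * f)
        return w / w.sum()
    if p @ f >= b:                  # constraint already satisfied: lambda = 0
        return p
    for _ in range(iters):          # E_q[f] is monotone increasing in lambda
        lam = (lo + hi) / 2
        if tilt(lam) @ f < b:
            lo = lam
        else:
            hi = lam
    return tilt((lo + hi) / 2)

p = np.array([0.2, 0.8])            # model posterior over two latent labels
f = np.array([1.0, 0.0])            # feature: indicator of label 0
q = project(p, f, b=0.5)            # constraint: label 0 must get >= 0.5 mass
print(np.round(q, 3))               # → [0.5 0.5]
```

In the full algorithm this projection replaces the E-step of EM, so the M-step fits the model to constrained rather than raw posteriors; the constraints then only need to hold in expectation, as the abstract notes.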
|Discriminative Learning over Constrained Latent Representations||Ming-Wei Chang and Dan Goldwasser and Dan Roth and Vivek Srikumar||http://l2r.cs.uiuc.edu/~danr/Papers/CGRS10.pdf||NAACL-10|
|On the Use of Virtual Evidence in Conditional Random Fields||Xiao Li||http://research.microsoft.com/apps/pubs/default.aspx?id=81061||EMNLP-09|
|Prototype-driven learning for sequence models||Aria Haghighi, Dan Klein||http://portal.acm.org/citation.cfm?id=1220876||NAACL-06|
|Generalized Expectation Criteria||Andrew McCallum, Gideon Mann, Gregory Druck||http://www.cs.umass.edu/~mccallum/papers/ge08note.pdf||Technical Report-07|
|Learning from Labeled Features using Generalized Expectation Criteria||Gregory Druck, Gideon Mann, Andrew McCallum||http://www.cs.umass.edu/~mccallum/papers/druck08sigir.pdf||SIGIR-08|