A tutorial on dual decomposition and Lagrangian relaxation for inference in natural language processing

From MaRDI portal

DOI: 10.1613/JAIR.3680
zbMATH Open: 1280.68272
arXiv: 1405.5208
OpenAlex: W2132678463
MaRDI QID: Q3143576


Authors: Alexander M. Rush, Michael Collins


Publication date: 3 December 2012

Published in: Journal of Artificial Intelligence Research

Abstract: Dual decomposition, and more generally Lagrangian relaxation, is a classical method for combinatorial optimization; it has recently been applied to several inference problems in natural language processing (NLP). This tutorial gives an overview of the technique. We describe example algorithms, formal guarantees for the method, and practical issues in implementing the algorithms. While our examples are predominantly drawn from the NLP literature, the material should be of general relevance to inference problems in machine learning. A central theme of this tutorial is that Lagrangian relaxation is naturally applied in conjunction with a broad class of combinatorial algorithms, allowing inference in models that go significantly beyond previous work on Lagrangian relaxation for inference in graphical models.
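The technique summarized in the abstract can be illustrated with a minimal sketch, not taken from the paper itself: two subproblems each maximize a linear objective over binary vectors, and a projected-subgradient update on Lagrange multipliers `u` drives them toward agreement. All names (`argmax_linear`, `dual_decompose`) and the toy scores are hypothetical; real applications replace the componentwise oracle with a combinatorial algorithm such as dynamic programming for parsing or tagging.

```python
# Toy sketch of dual decomposition via subgradient ascent on the dual.
# Problem (hypothetical): find a binary vector y maximizing f.y + g.y,
# decomposed into two subproblems that are coupled only through
# Lagrange multipliers u enforcing agreement.

def argmax_linear(scores):
    """Exact oracle for a linear objective over {0,1}^n (stands in for
    a combinatorial algorithm in real applications)."""
    return [1 if s > 0 else 0 for s in scores]

def dual_decompose(f, g, step=0.5, iters=100):
    n = len(f)
    u = [0.0] * n                                   # Lagrange multipliers
    for _ in range(n and iters):
        y = argmax_linear([f[i] + u[i] for i in range(n)])  # subproblem 1
        z = argmax_linear([g[i] - u[i] for i in range(n)])  # subproblem 2
        if y == z:
            return y        # agreement certifies an exact joint optimum
        # Subgradient step on the dual: penalize disagreement.
        u = [u[i] - step * (y[i] - z[i]) for i in range(n)]
    return y                # fallback: may not have converged

f = [2.0, -3.0, 1.0]
g = [-1.0, 1.0, 0.5]
print(dual_decompose(f, g))   # -> [1, 0, 1], the argmax of f + g
```

When the two solvers agree, the common solution is provably optimal for the joint objective, which is the formal guarantee the tutorial develops; when they do not agree within the iteration budget, the dual value still upper-bounds the optimum.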


Full work available at URL: https://arxiv.org/abs/1405.5208



Cited In (3)





