Fundamentals of parameterized complexity (Q383833)
From MaRDI portal
Full work available at URL: https://doi.org/10.1007/978-1-4471-5559-1
OpenAlex ID: W2116087731
scientific article
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Fundamentals of parameterized complexity | scientific article | |
Statements
Fundamentals of parameterized complexity (English)
6 December 2013
The standard way to investigate the efficiency of an algorithm \(A\) that solves a computational problem \(P\) is worst-case analysis: one considers the input length, typically denoted by \(n\), as the sole parameter and measures the time spent by \(A\) as a function of \(n\), defined as the maximum running time of \(A\) over all inputs of length \(n\). This one-dimensional approach is often too coarse, and the defining idea of parameterized complexity is to slice the input space more finely, taking into account other parameters besides the input length. The classic example is Vertex Cover, which can be decided in time \(2^k \cdot n^{O(1)}\) when parameterized by the size \(k\) of the sought cover, so the combinatorial explosion is confined to the parameter (a minimal sketch of such an algorithm is given after this review). The authors of this book, Rodney Downey and Michael Fellows, are widely credited with founding the field through a series of papers in the 1990s and, especially, their book [Parameterized complexity. Berlin: Springer (1998; Zbl 0914.68076)]. The field is now mature and dynamic, with deep and meaningful results. It should be said from the outset that the present book is not a revised edition of the 1998 book; it is a new book that conveys a detailed picture of parameterized complexity as it stands today.

The book is structured in nine parts. Part 1 (Chapters 1--2) presents preliminaries from classical complexity theory and the basic definitions. Part 2 (Chapters 3--9) covers the most important techniques in parameterized algorithmics, including bounded search trees, kernelization, iterative compression, measure-and-conquer, greedy localization, color coding, randomized divide-and-conquer, and others. Part 3 (Chapters 10--16) is dedicated to another family of techniques for designing parameterized algorithms, based on structural properties primarily of graphs, but also of other objects such as matroids or polynomials. The generic approach is to use so-called width metrics, which quantify how close the input is to having a certain property, such as being a tree or a complete graph. The book presents treewidth, branchwidth, cliquewidth, and other width metrics. This part also covers structural properties with positive ramifications for algorithm design that are defined in terms of finite automata (Chapter 12) or based on logical characterizations (the most important example being Courcelle's theorem for graph properties expressible in monadic second-order logic). Part 4 (Chapters 17--19) presents the Graph Minor Theorem (with a proof sketch), well-quasi-orderings, the obstruction principle, and their implications for algorithm design.

Part 5 (Chapters 20--28) covers the complexity of parameterized computational problems, i.e., ways in which one can give evidence that a given problem is hard. The basic class of hard parameterized problems is the class of \(\mathrm{W}[1]\)-complete problems, which plays a role similar to that of the NP-complete problems in classical complexity theory; \(\mathrm{W}[1]\) and \(\mathrm{W}[1]\)-complete problems are covered in Chapters 21--22. The \(\mathrm{W}[t]\) hierarchy and another important class, XP, are also discussed in this part of the book. Part 6 (Chapters 29--30) presents results indicating that certain algorithms are optimal up to constant factors under reasonable hardness assumptions, such as the exponential time hypothesis; kernelization lower bounds are also discussed here. Part 7 (Chapters 31--32) discusses the parameterized paradigm for approximation and counting problems and its use in the analysis of randomized algorithms.

Part 8 (Chapter 33) is a short part in which the authors discuss what, in their opinion, are the most important open problems of the field. Part 9 (Chapters 34--35) is an appendix on network flows and matchings. The writing style is direct and engaging, allowing the reader to quickly grasp the core ideas of each topic. The book's coverage is comprehensive, including even quite recent results, and gives a rich picture of the current state of the field. In short, this is an excellent book that will serve anyone interested in algorithm design and complexity well.
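To make the defining idea concrete, here is a minimal sketch of a bounded-search-tree algorithm deciding Vertex Cover parameterized by the solution size \(k\). This is a standard textbook illustration, not the book's own presentation; the adjacency-set graph representation and the helper `remove_vertex` are illustrative choices made here.

```python
# Minimal sketch (illustrative, not taken from the book): bounded search tree for
# Vertex Cover parameterized by solution size k.  Running time is 2^k * poly(n),
# i.e., the exponential blow-up depends only on the parameter k, not on the input size.

def remove_vertex(G, x):
    """Return a copy of G (dict: vertex -> set of neighbours) with vertex x deleted."""
    return {u: nbrs - {x} for u, nbrs in G.items() if u != x}

def has_vertex_cover(G, k):
    """Decide whether G has a vertex cover of size at most k."""
    # Pick any remaining edge (u, v); if none is left, the empty set already covers G.
    edge = next(((u, v) for u, nbrs in G.items() for v in nbrs if u < v), None)
    if edge is None:
        return True
    if k == 0:
        return False  # an edge is still uncovered but the budget is exhausted
    u, v = edge
    # Every vertex cover must contain u or v, so branch on the two choices:
    # the search tree has depth at most k and branching factor 2, hence at most 2^k leaves.
    return has_vertex_cover(remove_vertex(G, u), k - 1) or \
           has_vertex_cover(remove_vertex(G, v), k - 1)

if __name__ == "__main__":
    # A 5-cycle has minimum vertex cover size 3.
    C5 = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {0, 3}}
    print(has_vertex_cover(C5, 3))  # True
    print(has_vertex_cover(C5, 2))  # False
```

Kernelization, another core technique covered in Part 2, would instead preprocess the instance first (for example, any vertex of degree greater than \(k\) must belong to every cover of size at most \(k\)), shrinking it to a kernel whose size depends only on \(k\) before an exhaustive search is applied.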
computational complexity
algorithms
parameterized complexity