Parameterized Complexity

In computer science, parameterized complexity is a branch of computational complexity theory that focuses on classifying computational problems according to their inherent difficulty with respect to multiple parameters of the input. The complexity of a problem is then measured as a function of those parameters. This makes it possible to classify NP-hard problems on a finer scale than in the classical setting, where the complexity of a problem is measured only as a function of the number of bits in the input. The first systematic work on parameterized complexity was done by Downey & Fellows (1999).

Under the assumption that P ≠ NP, there exist many natural problems that require superpolynomial running time when complexity is measured in terms of the input size only, but that can be solved in time that is polynomial in the input size and exponential or worse in a parameter k. Hence, if k is fixed at a small value and the growth of the function in k is relatively small, then such problems can still be considered "tractable" despite their traditional classification as "intractable".

The existence of efficient, exact, and deterministic solving algorithms for NP-complete, or otherwise NP-hard, problems is considered unlikely if input parameters are not fixed; all known solving algorithms for these problems require time that is exponential in the total size of the input. However, some problems can be solved by algorithms that are exponential only in the size of a fixed parameter while polynomial in the size of the input. Such an algorithm is called a fixed-parameter tractable (fpt-)algorithm, because the problem can be solved efficiently for small values of the fixed parameter.

Problems in which some parameter k is fixed are called parameterized problems. A parameterized problem that allows for such an fpt-algorithm is said to be fixed-parameter tractable and belongs to the class FPT; indeed, the early name of the theory of parameterized complexity was fixed-parameter tractability.

Many problems have the following form: given an object x and a nonnegative integer k, does x have some property that depends on k? For instance, for the vertex cover problem, the parameter can be the number of vertices in the cover. In many applications, for example when modelling error correction, one can assume the parameter to be "small" compared to the total input size. Then it is interesting to see whether we can find an algorithm which is exponential only in k, and not in the input size.

In this way, parameterized complexity can be seen as a two-dimensional complexity theory. This concept is formalized as follows:

A parameterized problem is a language L ⊆ Σ* × ℕ, where Σ is a finite alphabet. The second component is called the parameter of the problem.
A parameterized problem L is fixed-parameter tractable if the question "(x, k) ∈ L?" can be decided in running time f(k) · |x|^O(1), where f is an arbitrary function depending only on k. The corresponding complexity class is called FPT.
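To see what the definition does and does not allow, the following contrast may help; the concrete running times are illustrative examples, not taken from the text, and n stands for the input size |x|:

```latex
% Illustrative running times in input size n and parameter k
\begin{align*}
  2^{k}\cdot n^{3}  &\quad\text{is of the form } f(k)\cdot n^{O(1)} \text{, hence fixed-parameter tractable,}\\
  k!\cdot n\log n   &\quad\text{is of the form } f(k)\cdot n^{O(1)} \text{, hence fixed-parameter tractable,}\\
  n^{k}             &\quad\text{is not of this form: the parameter appears in the exponent of } n.
\end{align*}
```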

For example, a simple branching algorithm solves the vertex cover problem in O(2^k · (n + m)) time, where n is the number of vertices, m the number of edges, and k the size of the vertex cover; more refined algorithms achieve an even smaller exponential base. This means that vertex cover is fixed-parameter tractable with the size of the solution as the parameter.
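To make the bound concrete, here is a minimal sketch of the classic bounded search tree for vertex cover; the function name and the edge-set graph representation are illustrative choices, not taken from the text. Every edge must be covered by one of its two endpoints, so branching on that choice gives a search tree of depth at most k with at most 2^k leaves, and each node does only polynomial work.

```python
# Minimal sketch of the bounded-search-tree algorithm for k-Vertex Cover.
# Names and the edge-set representation are illustrative.

def vertex_cover_at_most_k(edges, k):
    """Decide whether the graph given by the set `edges` has a vertex cover
    of size at most k. The recursion tree has depth at most k and branching
    factor 2, and each node does O(n + m) work, giving O(2^k * (n + m)) time."""
    if not edges:
        return True           # no edge left uncovered
    if k == 0:
        return False          # edges remain, but no vertex may be added
    u, v = next(iter(edges))  # pick an arbitrary uncovered edge {u, v}
    # Any vertex cover must contain u or v, so branch on both choices,
    # deleting all edges incident to the chosen vertex.
    for chosen in (u, v):
        remaining = {(a, b) for (a, b) in edges if chosen not in (a, b)}
        if vertex_cover_at_most_k(remaining, k - 1):
            return True
    return False

# Example: a 4-cycle has a vertex cover of size 2 (two opposite vertices)
# but none of size 1.
cycle = {(0, 1), (1, 2), (2, 3), (3, 0)}
assert vertex_cover_at_most_k(cycle, 2)
assert not vertex_cover_at_most_k(cycle, 1)
```

The exponential blow-up is confined to the parameter k, while the work per node of the search tree is polynomial in the input size; this is exactly the behaviour the class FPT captures.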
