No Free Lunch (NFL)

A "problem" is, more formally, an objective function that associates candidate solutions with goodness values. A search algorithm takes an objective function as input and evaluates candidate solutions one-by-one. The output of the algorithm is the sequence of observed goodness values.

Wolpert and Macready stipulate that an algorithm never reevaluates a candidate solution, and that algorithm performance is measured on outputs. For simplicity, we disallow randomness in algorithms. Under these conditions, when a search algorithm is run on every possible input, it generates each possible output exactly once. Because performance is measured on the outputs, the algorithms are indistinguishable in how often they achieve particular levels of performance.
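This indistinguishability can be checked exhaustively on a toy instance. The sketch below (illustrative code, not the original proof) runs three different deterministic, non-revisiting algorithms on all eight objective functions from a three-element candidate space to {0, 1}; each algorithm produces every possible output sequence exactly once, so their performance histograms are identical.

    from itertools import product

    X = [0, 1, 2]                      # candidate solutions
    Y = [0, 1]                         # goodness values
    functions = [dict(zip(X, vals)) for vals in product(Y, repeat=len(X))]

    def fixed_order(order):
        """A deterministic non-revisiting algorithm: visit X in a fixed order."""
        def algorithm(history):
            seen = {x for x, _ in history}
            return next(x for x in order if x not in seen)
        return algorithm

    def adaptive(history):
        """An adaptive algorithm: after observing a 1, jump to the largest
        unvisited candidate; otherwise scan upward. (Arbitrary, for illustration.)"""
        seen = {x for x, _ in history}
        unvisited = [x for x in X if x not in seen]
        if history and history[-1][1] == 1:
            return max(unvisited)
        return min(unvisited)

    def run(algorithm, f):
        history = []
        for _ in range(len(X)):
            x = algorithm(history)
            history.append((x, f[x]))
        return tuple(y for _, y in history)

    for alg in (fixed_order([0, 1, 2]), fixed_order([2, 1, 0]), adaptive):
        print(sorted(run(alg, f) for f in functions))
    # All three print the same multiset: each of the 8 possible
    # length-3 sequences over {0, 1}, exactly once.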

Some measures of performance indicate how well search algorithms do at optimizing the objective function. Indeed, optimization appears to be the only interesting application of the class of search algorithms under consideration. A common performance measure is the least index of the least value in the output sequence, that is, the number of evaluations required to minimize the objective function. For some algorithms, the time required to find the minimum is proportional to the number of evaluations.
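As a sketch, this measure is a one-liner:

    def evals_to_minimum(output):
        """Performance: the 1-based index of the first occurrence of the
        least value in the output sequence."""
        return output.index(min(output)) + 1

    print(evals_to_minimum([5, 3, 3, 1, 4, 1]))   # 4: the minimum first appears at evaluation 4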

The original no free lunch (NFL) theorems assume that all objective functions are equally likely to be input to search algorithms. It has since been established that there is NFL if and only if, loosely speaking, "shuffling" objective functions has no impact on their probabilities. Although this condition for NFL is physically possible, it has been argued that it certainly does not hold precisely.
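Loosely, the condition says that the probability distribution over objective functions is unchanged when the candidate solutions are permuted. A brute-force check of this for a toy distribution (illustrative code, not a statement of the full theorem):

    from itertools import permutations, product

    def is_shuffle_invariant(prob):
        """A distribution over objective functions (each keyed by its tuple of
        values over a fixed candidate order) is 'shuffle'-invariant if permuting
        the candidates never changes a function's probability."""
        for vals, p in prob.items():
            for sigma in permutations(range(len(vals))):
                shuffled = tuple(vals[i] for i in sigma)
                if abs(prob.get(shuffled, 0.0) - p) > 1e-12:
                    return False
        return True

    # Uniform over all 8 functions {0,1,2} -> {0,1}, as in the original
    # NFL theorems: shuffle-invariant, so NFL holds.
    uniform = {vals: 1 / 8 for vals in product((0, 1), repeat=3)}
    print(is_shuffle_invariant(uniform))   # True

    # Shifting probability toward one particular function breaks invariance
    # (probabilities still sum to 1): no NFL.
    biased = dict(uniform)
    biased[(0, 0, 1)] = 1 / 4
    biased[(1, 0, 0)] = 0.0
    print(is_shuffle_invariant(biased))    # False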

The obvious interpretation of "not NFL" is "free lunch," but this is misleading. NFL is a matter of degree, not an all-or-nothing proposition. If the condition for NFL holds approximately, then all algorithms yield approximately the same results over all objective functions. Note also that "not NFL" implies only that algorithms are inequivalent overall by some measure of performance. For a performance measure of interest, algorithms may remain equivalent, or nearly so.

In theory, all algorithms perform well at optimization almost always. That is, for almost all objective functions, an algorithm obtains good solutions with relatively few evaluations. The reason is that almost all objective functions exhibit a high degree of Kolmogorov randomness, which equates to extreme irregularity and unpredictability: all levels of goodness are equally represented among the candidate solutions, and good solutions are scattered throughout the space of candidates. A search algorithm rarely evaluates more than a small fraction of the candidates before locating a very good solution.
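A rough back-of-the-envelope model: if goodness values are spread uniformly over candidates, as Kolmogorov randomness loosely suggests, the chance that n evaluations all miss the best fraction q of candidates is about (1 - q)^n.

    # Chance that n evaluations all miss the best fraction q of candidates,
    # assuming goodness values are spread uniformly over the candidate space
    # (a rough model of a Kolmogorov-random objective).
    q, n = 0.001, 10_000          # "very good" = top 0.1% of candidates
    print((1 - q) ** n)           # ~4.5e-05: a top-0.1% solution is almost surely found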

In practice, almost all objective functions and algorithms are of such high Kolmogorov complexity that they cannot arise. There is more information in the typical objective function or algorithm than Seth Lloyd estimates the observable universe is capable of registering. For instance, if each candidate solution is encoded as a sequence of 300 0's and 1's, and the goodness values are 0 and 1, then most objective functions have Kolmogorov complexity of at least 2^300 bits, and this is greater than Lloyd's bound of 10^90 ≈ 2^299 bits. It follows that not all of "no free lunch" theory applies to physical reality. In a practical sense, algorithms "small enough" for application in physical reality are superior in performance to those that are not.
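The arithmetic behind the comparison can be checked directly:

    import math

    # There are 2**(2**300) functions from 300-bit strings to {0, 1}, so a
    # typical one needs about 2**300 bits to describe. Lloyd's 10**90-bit
    # bound is only about 2**299 bits:
    print(90 * math.log2(10))   # ≈ 298.97, i.e. 10**90 ≈ 2**299 < 2**300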
