Inductive Bias

The inductive bias of a learning algorithm is the set of assumptions that the learner uses to predict outputs given inputs that it has not encountered (Mitchell, 1980).

In machine learning, one aims to construct algorithms that are able to learn to predict a certain target output. To achieve this, the learning algorithm is presented with training examples that demonstrate the intended relation between input and output values. The learner is then expected to approximate the correct output, even for examples that were not shown during training. Without additional assumptions, this task cannot be solved exactly, since an unseen situation might have an arbitrary output value. The necessary assumptions about the nature of the target function are subsumed under the term inductive bias (Mitchell, 1980; desJardins and Gordon, 1995).
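
To make this concrete, here is a minimal sketch in Python (assuming NumPy; the target function y = 2x + 1 and both learners are hypothetical choices for illustration, not from the source) contrasting a learner with no assumptions beyond memorization against one whose inductive bias is that the target is linear:

    import numpy as np

    # Training examples demonstrating the intended input-output relation.
    # (Illustrative target: y = 2x + 1.)
    X_train = np.array([0.0, 1.0, 2.0, 3.0])
    y_train = np.array([1.0, 3.0, 5.0, 7.0])

    # Learner A: pure memorization -- no assumption about unseen inputs.
    lookup = dict(zip(X_train, y_train))

    # Learner B: inductive bias "the target is linear"
    # (degree-1 least-squares fit).
    slope, intercept = np.polyfit(X_train, y_train, deg=1)

    x_new = 4.0  # an input never shown during training
    print(lookup.get(x_new))          # None: memorization cannot generalize
    print(slope * x_new + intercept)  # ~9.0: the linearity bias commits to an answer

Both learners are consistent with the training data, yet only the biased one produces an output for the unseen input; whether that output is correct depends entirely on whether the assumed bias matches the true target function.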

A classical example of an inductive bias is Occam's razor, which assumes that the simplest consistent hypothesis about the target function is actually the best. Here, consistent means that the learner's hypothesis yields correct outputs for all of the examples that have been given to the algorithm.
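
As an illustration, the following sketch (again assuming NumPy; the polynomial hypothesis class and the helper simplest_consistent_polynomial are hypothetical choices for this example) encodes Occam's razor by searching hypotheses from simplest to most complex and returning the first one consistent with every training example:

    import numpy as np

    def simplest_consistent_polynomial(X, y, max_degree=9, tol=1e-8):
        """Return the lowest-degree polynomial consistent with all examples."""
        for degree in range(max_degree + 1):  # simplest hypotheses first
            coeffs = np.polyfit(X, y, deg=degree)
            # Consistent: correct outputs on every training example.
            if np.allclose(np.polyval(coeffs, X), y, atol=tol):
                return degree, coeffs
        return None  # no consistent hypothesis in the considered class

    X = np.array([-1.0, 0.0, 1.0, 2.0])
    y = X**2  # training data generated by a quadratic
    degree, coeffs = simplest_consistent_polynomial(X, y)
    print(degree)  # 2: the quadratic wins over equally consistent higher degrees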

Approaches to a more formal definition of inductive bias are based on mathematical logic. Here, the inductive bias is a logical formula that, together with the training data, logically entails the hypothesis generated by the learner. Unfortunately, this strict formalism breaks down in many practical cases, in which the inductive bias can only be given as a rough description (e.g., in the case of artificial neural networks), or not at all.
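
As a sketch of this formalism (following Mitchell's textbook formulation; the notation is assumed here rather than given in the source), the inductive bias B is a set of assertions such that, for every instance x_i in the instance space X,

    \forall x_i \in X : \; (B \wedge D_c \wedge x_i) \vdash L(x_i, D_c)

where D_c denotes the training examples of the target concept c, L(x_i, D_c) is the classification the learner assigns to x_i after training on D_c, and \vdash denotes logical entailment.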
