Random Forest - Learning Algorithm

Each tree is constructed using the following algorithm:

  1. Let the number of training cases be N, and the number of variables in the classifier be M.
  2. The number m of input variables used to determine the decision at a node of the tree is specified in advance; m should be much smaller than M.
  3. Choose the training set for this tree by sampling N times with replacement from all N available training cases (i.e. take a bootstrap sample). Use the remaining (out-of-bag) cases to estimate the tree's error by predicting their classes.
  4. For each node of the tree, randomly choose m variables on which to base the decision at that node. Calculate the best split based on these m variables in the training set.
  5. Each tree is fully grown and not pruned (as may be done in constructing a normal tree classifier).
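Steps 3 and 4 can be sketched in Python. This is a minimal illustration, not a full tree-building routine; the function names (`bootstrap_sample`, `candidate_features`) are hypothetical helpers, not part of any particular library.

```python
import random

def bootstrap_sample(cases, rng=None):
    """Step 3: sample N times with replacement from the N training cases.
    The cases never chosen form the out-of-bag set used to estimate error."""
    rng = rng or random.Random(0)
    n = len(cases)
    chosen = [rng.randrange(n) for _ in range(n)]
    in_bag = [cases[i] for i in chosen]
    out_of_bag = [c for i, c in enumerate(cases) if i not in set(chosen)]
    return in_bag, out_of_bag

def candidate_features(M, m, rng=None):
    """Step 4: at each node, randomly pick m of the M variables;
    only these candidates are considered when choosing the best split."""
    rng = rng or random.Random(0)
    return rng.sample(range(M), m)
```

Note that a fresh random subset of m variables is drawn at every node, not once per tree, which is what decorrelates the trees in the ensemble.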

For prediction, a new sample is pushed down each tree and assigned the label of the terminal node it ends up in. This procedure is repeated over all trees in the ensemble, and the majority vote of all trees is reported as the random forest prediction.
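The voting step above can be sketched as follows. For simplicity each tree is represented as a callable that returns a class label; in a real implementation it would be a fitted decision tree.

```python
from collections import Counter

def forest_predict(trees, sample):
    """Push the sample down every tree, collect the terminal-node labels,
    and return the most common label (the majority vote)."""
    votes = [tree(sample) for tree in trees]
    return Counter(votes).most_common(1)[0][0]
```

For example, if two trees vote "spam" and one votes "ham", the forest predicts "spam".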
