Markov Chain Monte Carlo - Random Walk Algorithms

Random Walk Algorithms

Many Markov chain Monte Carlo methods move around the equilibrium distribution in relatively small steps, with no tendency for successive steps to proceed in the same direction. These methods are easy to implement and analyze, but unfortunately the walker can take a long time to explore all of the space, since it often doubles back and covers ground already covered. Common random-walk MCMC methods include the following; a short illustrative Python sketch of each appears after the list.

  • Metropolis–Hastings algorithm: Generates a random walk using a proposal density and a method for rejecting proposed moves.
  • Gibbs sampling: Requires that all the conditional distributions of the target distribution can be sampled exactly. Popular partly because when this is so, the method does not require any 'tuning'.
  • Slice sampling: Depends on the principle that one can sample from a distribution by sampling uniformly from the region under the plot of its density function. This method alternates uniform sampling in the vertical direction with uniform sampling from the horizontal 'slice' defined by the current vertical position.
  • Multiple-try Metropolis: A variation of the Metropolis–Hastings algorithm that allows multiple trial proposals at each step. This generally lets the algorithm take larger steps at each iteration, which helps combat difficulties intrinsic to high-dimensional problems.
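
As a concrete illustration of the proposal-and-rejection mechanism, here is a minimal random-walk Metropolis–Hastings sketch in Python. The Gaussian proposal and the step size "scale" are assumptions made for illustration, not part of the original description; with a symmetric proposal the Hastings correction cancels, leaving the plain Metropolis acceptance ratio.

    import numpy as np

    def metropolis_hastings(log_target, x0, n_samples, scale=1.0, rng=None):
        # Random-walk Metropolis-Hastings with a symmetric Gaussian proposal.
        # Because the proposal is symmetric, the Hastings correction cancels
        # and the acceptance test reduces to the plain Metropolis ratio.
        rng = np.random.default_rng() if rng is None else rng
        x = np.atleast_1d(np.asarray(x0, dtype=float))
        log_p = log_target(x)
        samples = np.empty((n_samples, x.size))
        for i in range(n_samples):
            proposal = x + scale * rng.standard_normal(x.size)  # small random step
            log_p_new = log_target(proposal)
            # Accept with probability min(1, pi(proposal) / pi(x)).
            if np.log(rng.random()) < log_p_new - log_p:
                x, log_p = proposal, log_p_new
            samples[i] = x  # on rejection the walker stays put
        return samples

    # Usage: draw from a standard 2-D Gaussian via its log-density.
    draws = metropolis_hastings(lambda x: -0.5 * np.sum(x**2), np.zeros(2), 5000, scale=0.8)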
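
To make the requirement of exactly sampleable conditionals concrete, the following sketch Gibbs-samples a bivariate standard normal, a target assumed here because both of its full conditionals are themselves Gaussian. The correlation parameter rho is illustrative; note that, as the list item says, nothing needs tuning.

    import numpy as np

    def gibbs_bivariate_normal(n_samples, rho=0.8, rng=None):
        # Gibbs sampler for a bivariate standard normal with correlation rho.
        # Both full conditionals are Gaussian, so each can be sampled exactly,
        # and there is no proposal step size to tune.
        rng = np.random.default_rng() if rng is None else rng
        x1 = x2 = 0.0
        sd = np.sqrt(1.0 - rho**2)  # conditional standard deviation
        samples = np.empty((n_samples, 2))
        for i in range(n_samples):
            x1 = rng.normal(rho * x2, sd)  # x1 ~ p(x1 | x2), drawn exactly
            x2 = rng.normal(rho * x1, sd)  # x2 ~ p(x2 | x1), drawn exactly
            samples[i] = (x1, x2)
        return samples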
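
Here is a minimal univariate slice-sampling sketch, assuming Neal's stepping-out and shrinkage procedure for locating the horizontal slice; the initial interval width w is an assumed tuning parameter.

    import numpy as np

    def slice_sample(log_target, x0, n_samples, w=1.0, rng=None):
        # Univariate slice sampler: alternate a uniform vertical draw under
        # the density with a uniform horizontal draw from the resulting slice.
        rng = np.random.default_rng() if rng is None else rng
        x = float(x0)
        samples = np.empty(n_samples)
        for i in range(n_samples):
            # Vertical step: uniform height under the density (in log space).
            log_y = log_target(x) + np.log(rng.random())
            # Step out: grow an interval of width w until it brackets the slice.
            left = x - w * rng.random()
            right = left + w
            while log_target(left) > log_y:
                left -= w
            while log_target(right) > log_y:
                right += w
            # Horizontal step: sample uniformly, shrinking the interval on rejection.
            while True:
                x_new = rng.uniform(left, right)
                if log_target(x_new) > log_y:
                    x = x_new
                    break
                if x_new < x:
                    left = x_new
                else:
                    right = x_new
            samples[i] = x
        return samples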
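
Finally, a sketch of one common multiple-try Metropolis variant, assuming symmetric Gaussian proposals and a weight function chosen so that each trial's weight reduces to the unnormalized target density. The trial count k and step size scale are assumed tuning parameters.

    import numpy as np

    def multiple_try_metropolis(target, x0, n_samples, k=5, scale=1.0, rng=None):
        # Multiple-try Metropolis with a symmetric Gaussian proposal; target
        # is the unnormalized density (not its log). The weight function is
        # chosen so that w(y, x) = pi(y): each trial is scored by the target.
        rng = np.random.default_rng() if rng is None else rng
        x = np.atleast_1d(np.asarray(x0, dtype=float))
        samples = np.empty((n_samples, x.size))
        for i in range(n_samples):
            # Draw k trial points around the current state; pick one by weight.
            trials = x + scale * rng.standard_normal((k, x.size))
            w = np.array([target(t) for t in trials])
            y = trials[rng.choice(k, p=w / w.sum())]
            # Draw k - 1 reference points around y; the k-th reference is x itself.
            refs = y + scale * rng.standard_normal((k - 1, x.size))
            w_ref = np.array([target(r) for r in refs] + [target(x)])
            # Generalized Metropolis ratio over the two weight sums.
            if rng.random() < min(1.0, w.sum() / w_ref.sum()):
                x = y
            samples[i] = x
        return samples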
