
Random Walk Algorithms

Many Markov chain Monte Carlo methods move around the equilibrium distribution in relatively small steps, with no tendency for the steps to proceed in the same direction. These methods are easy to implement and analyze, but the walker can take a long time to explore the whole space, since it often doubles back and covers ground it has already visited. Here are some random walk MCMC methods; a brief code sketch of each follows the list:

  • Metropolis–Hastings algorithm: Generates a random walk using a proposal density and a method for rejecting some of the proposed moves.
  • Gibbs sampling: Requires that all the conditional distributions of the target distribution can be sampled exactly. Popular partly because, when this is so, the method does not require any 'tuning'.
  • Slice sampling: Depends on the principle that one can sample from a distribution by sampling uniformly from the region under the plot of its density function. This method alternates uniform sampling in the vertical direction with uniform sampling from the horizontal 'slice' defined by the current vertical position.
  • Multiple-try Metropolis: A variation of the Metropolis–Hastings algorithm that allows multiple trials at each point. By making it possible to take larger steps at each iteration, it helps address difficulties intrinsic to high-dimensional problems.
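
As a concrete illustration, here is a minimal random-walk Metropolis–Hastings sampler in Python. It is a sketch, not a reference implementation: the Gaussian proposal, the step size, and the standard-normal example target are illustrative choices, and only the target's log density up to an additive constant is assumed.

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_samples, step=1.0, rng=None):
    """Random-walk Metropolis-Hastings with a symmetric Gaussian proposal."""
    rng = np.random.default_rng() if rng is None else rng
    x = x0
    samples = np.empty(n_samples)
    for i in range(n_samples):
        proposal = x + step * rng.standard_normal()
        # With a symmetric proposal the Hastings correction cancels, so the
        # acceptance test reduces to a ratio of target densities.
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x = proposal          # accept the move
        samples[i] = x            # on rejection the walker stays put
    return samples

# Example: sample from a standard normal (log density up to a constant).
chain = metropolis_hastings(lambda x: -0.5 * x**2, x0=0.0, n_samples=10_000)
```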
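
Gibbs sampling is easiest to illustrate on a target whose full conditionals are known exactly. The sketch below assumes a bivariate normal with unit variances and correlation rho (an arbitrary example value), for which both conditionals are Gaussian, so no tuning or rejection step is needed.

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_samples, rng=None):
    """Gibbs sampler for a bivariate normal with unit variances, correlation rho."""
    rng = np.random.default_rng() if rng is None else rng
    x, y = 0.0, 0.0
    samples = np.empty((n_samples, 2))
    sd = np.sqrt(1.0 - rho**2)       # conditional standard deviation
    for i in range(n_samples):
        # Each conditional is an exact Gaussian, sampled in turn.
        x = rng.normal(rho * y, sd)  # x | y ~ N(rho*y, 1 - rho^2)
        y = rng.normal(rho * x, sd)  # y | x ~ N(rho*x, 1 - rho^2)
        samples[i] = x, y
    return samples

chain = gibbs_bivariate_normal(rho=0.8, n_samples=10_000)
```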
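
The following one-dimensional slice sampler is a sketch of the stepping-out and shrinkage procedures from Neal's formulation of slice sampling; the initial interval width is a tuning choice, and the example target is again a standard normal.

```python
import numpy as np

def slice_sample(log_target, x0, n_samples, width=1.0, rng=None):
    """1-D slice sampler with stepping-out and shrinkage."""
    rng = np.random.default_rng() if rng is None else rng
    x = x0
    samples = np.empty(n_samples)
    for i in range(n_samples):
        # Vertical step: draw an auxiliary height under the density.
        log_y = log_target(x) + np.log(rng.uniform())
        # Horizontal step: grow an interval until it brackets the slice ...
        left = x - width * rng.uniform()
        right = left + width
        while log_target(left) > log_y:
            left -= width
        while log_target(right) > log_y:
            right += width
        # ... then sample uniformly from it, shrinking on each rejection.
        while True:
            x_new = rng.uniform(left, right)
            if log_target(x_new) > log_y:
                x = x_new
                break
            if x_new < x:
                left = x_new
            else:
                right = x_new
        samples[i] = x
    return samples

chain = slice_sample(lambda x: -0.5 * x**2, x0=0.0, n_samples=10_000)
```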
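
Finally, a sketch of multiple-try Metropolis under common simplifying assumptions: a symmetric Gaussian proposal and trial weights w(y) = pi(y). The number of trials k and the step size are illustrative, log_target is assumed to accept NumPy arrays, and exponentiating log densities can underflow for extreme targets.

```python
import numpy as np

def multiple_try_metropolis(log_target, x0, n_samples, k=5, step=1.0, rng=None):
    """Multiple-try Metropolis, symmetric proposal, weights w(y) = pi(y)."""
    rng = np.random.default_rng() if rng is None else rng
    x = x0
    samples = np.empty(n_samples)
    for i in range(n_samples):
        # Draw k trial points and weight them by the target density.
        trials = x + step * rng.standard_normal(k)
        w = np.exp(log_target(trials))
        y = rng.choice(trials, p=w / w.sum())
        # Draw k-1 reference points from the proposal centered at the chosen
        # trial; the k-th reference point is the current state x itself.
        refs = y + step * rng.standard_normal(k - 1)
        w_ref = np.exp(log_target(refs)).sum() + np.exp(log_target(x))
        # Accept with probability min(1, sum of trial weights / sum of
        # reference weights), which preserves detailed balance.
        if rng.uniform() < w.sum() / w_ref:
            x = y
        samples[i] = x
    return samples

chain = multiple_try_metropolis(lambda x: -0.5 * x**2, x0=0.0, n_samples=5_000)
```

With k = 1 this reduces to ordinary Metropolis–Hastings; larger k spends more density evaluations per iteration in exchange for larger accepted moves.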
