Random Walk Algorithms
Many Markov chain Monte Carlo methods move around the equilibrium distribution in relatively small steps, with no tendency for the steps to proceed in the same direction. These methods are easy to implement and analyze, but the walker can take a long time to explore the whole space, often doubling back over ground it has already covered. Here are some random walk MCMC methods (illustrative sketches of each follow the list):
- Metropolis–Hastings algorithm: Generates a random walk using a proposal density and a method for rejecting proposed moves.
- Gibbs sampling: Requires that all the conditional distributions of the target distribution can be sampled exactly. Popular partly because, when this is the case, the method does not require any 'tuning'.
- Slice sampling: Depends on the principle that one can sample from a distribution by sampling uniformly from the region under the plot of its density function. This method alternates uniform sampling in the vertical direction with uniform sampling from the horizontal 'slice' defined by the current vertical position.
- Multiple-try Metropolis: A variation of the Metropolis–Hastings algorithm that allows multiple trials at each point. By letting the algorithm take larger steps at each iteration, it helps combat problems intrinsic to high-dimensional settings.
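As a concrete illustration of the Metropolis–Hastings item above, here is a minimal sketch of random-walk Metropolis in Python. The function name, the Gaussian proposal, and the standard-normal example target are illustrative assumptions, not part of the original text.

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_samples, step=1.0, rng=None):
    """Random-walk Metropolis: a symmetric Gaussian proposal centred on the
    current state, so the proposal density cancels in the Hastings ratio."""
    rng = np.random.default_rng() if rng is None else rng
    x = x0
    samples = np.empty(n_samples)
    for i in range(n_samples):
        proposal = x + step * rng.standard_normal()  # small random-walk step
        # Accept with probability min(1, p(proposal) / p(x)), done in log space.
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x = proposal          # move accepted
        samples[i] = x            # on rejection the walker repeats its state
    return samples

# Example: sample a standard normal, whose log-density is -x**2 / 2 + const.
draws = metropolis_hastings(lambda x: -0.5 * x**2, x0=0.0, n_samples=10_000)
```

Rejected proposals leave the walker in place, which is exactly the doubling-back behaviour described in the opening paragraph.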
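For the Gibbs-sampling item, a minimal sketch assuming a zero-mean bivariate normal target with correlation rho, chosen because both conditional distributions are known in closed form; the function name and example target are assumptions for illustration.

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_samples, rng=None):
    """Gibbs sampler for a zero-mean, unit-variance bivariate normal with
    correlation rho; each conditional is x | y ~ N(rho * y, 1 - rho**2)."""
    rng = np.random.default_rng() if rng is None else rng
    x = y = 0.0
    sd = np.sqrt(1.0 - rho**2)          # conditional standard deviation
    samples = np.empty((n_samples, 2))
    for i in range(n_samples):
        x = rho * y + sd * rng.standard_normal()  # exact draw from x | y
        y = rho * x + sd * rng.standard_normal()  # exact draw from y | x
        samples[i] = (x, y)
    return samples

draws = gibbs_bivariate_normal(rho=0.8, n_samples=5_000)
```

Note that no step size or proposal scale appears anywhere: each update is an exact draw from a conditional, which is the 'no tuning' property the list mentions.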
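For the slice-sampling item, a one-dimensional sketch using the stepping-out and shrinkage procedures; the function name and the initial interval width `w` are illustrative assumptions.

```python
import numpy as np

def slice_sample(log_density, x0, n_samples, w=1.0, rng=None):
    """1-D slice sampler: alternate a uniform vertical draw under the density
    with a uniform horizontal draw from the resulting slice."""
    rng = np.random.default_rng() if rng is None else rng
    x = x0
    samples = np.empty(n_samples)
    for i in range(n_samples):
        # Vertical step: draw the slice level uniformly under f(x),
        # i.e. log_y = log f(x) + log(Uniform(0, 1)).
        log_y = log_density(x) + np.log(rng.uniform())
        # Horizontal step: bracket the slice by stepping out in units of w...
        lo = x - w * rng.uniform()
        hi = lo + w
        while log_density(lo) > log_y:
            lo -= w
        while log_density(hi) > log_y:
            hi += w
        # ...then sample uniformly, shrinking the interval on each rejection.
        while True:
            x_new = rng.uniform(lo, hi)
            if log_density(x_new) > log_y:
                x = x_new
                break
            if x_new < x:
                lo = x_new
            else:
                hi = x_new
        samples[i] = x
    return samples

draws = slice_sample(lambda x: -0.5 * x**2, x0=0.0, n_samples=10_000)
```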
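For the multiple-try Metropolis item, a sketch of the common symmetric-proposal variant in which the trial weights reduce to the target density itself; `k`, `step`, and the function names are illustrative choices.

```python
import numpy as np

def multiple_try_metropolis(log_target, x0, n_samples, k=5, step=1.0, rng=None):
    """Multiple-try Metropolis with a symmetric Gaussian proposal and the
    weight choice w(y, x) = pi(y): accept with probability
    min(1, sum_j pi(y_j) / sum_j pi(x*_j))."""
    rng = np.random.default_rng() if rng is None else rng
    x = x0
    samples = np.empty(n_samples)
    for i in range(n_samples):
        # Draw k trial points around the current state.
        trials = x + step * rng.standard_normal(k)
        log_w = np.array([log_target(t) for t in trials])
        # Select one trial with probability proportional to pi(y_j).
        probs = np.exp(log_w - log_w.max())
        y = trials[rng.choice(k, p=probs / probs.sum())]
        # Draw k - 1 reference points around y; the k-th is the current state.
        refs = np.append(y + step * rng.standard_normal(k - 1), x)
        log_w_ref = np.array([log_target(r) for r in refs])
        # Acceptance ratio of summed weights, computed stably in log space.
        log_ratio = np.logaddexp.reduce(log_w) - np.logaddexp.reduce(log_w_ref)
        if np.log(rng.uniform()) < log_ratio:
            x = y
        samples[i] = x
    return samples

draws = multiple_try_metropolis(lambda x: -0.5 * x**2, x0=0.0, n_samples=5_000)
```

Because the best of k trials is selected at each step, the chain can afford a larger `step` than plain Metropolis–Hastings, which is the advantage the list item describes.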