Approximation Error Bounds of Quasi-Monte Carlo
The approximation error of the quasi-Monte Carlo method is bounded by a term proportional to the discrepancy of the set x1, ..., xN. Specifically, the Koksma-Hlawka inequality states that the error

\[ \varepsilon = \left| \int_{[0,1]^s} f(u)\,du - \frac{1}{N} \sum_{i=1}^{N} f(x_i) \right| \]

is bounded by

\[ |\varepsilon| \le V(f)\, D_N , \]
where V(f) is the Hardy-Krause variation of the function f (see Morokoff and Caflisch (1995) for the detailed definitions). DN is the discrepancy of the set (x1,...,xN) and is defined as
\[ D_N = \sup_{Q \subset [0,1]^s} \left| \frac{\#\{\, i : x_i \in Q \,\}}{N} - \operatorname{vol}(Q) \right| , \]
where Q is a rectangular solid in \([0,1]^s\) with sides parallel to the coordinate axes, \(\#\{i : x_i \in Q\}\) is the number of points falling in Q, and \(\operatorname{vol}(Q)\) is its volume. The inequality can be used to show that the error of the approximation by the quasi-Monte Carlo method is \(O\!\left((\log N)^s / N\right)\), whereas the Monte Carlo method has a probabilistic error of \(O\!\left(N^{-1/2}\right)\). Although the Koksma-Hlawka inequality gives only an upper bound on the approximation error, in practice the quasi-Monte Carlo method usually converges much faster than this theoretical bound suggests. Hence, in general, the accuracy of the quasi-Monte Carlo method increases faster than that of the Monte Carlo method.
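To make the contrast between the two convergence rates concrete, the following is a minimal sketch (not part of the original text) that estimates a test integral over \([0,1]^s\) with both i.i.d. random points and a Sobol low-discrepancy sequence. It assumes Python with NumPy and SciPy (scipy.stats.qmc); the integrand, dimension, and sample sizes are arbitrary illustrative choices.

```python
# A minimal sketch comparing plain Monte Carlo with quasi-Monte Carlo
# integration.  Assumes NumPy and SciPy are available; the test integrand
# and sample sizes are illustrative choices, not from the original text.
import numpy as np
from scipy.stats import qmc

s = 5  # dimension of the unit cube [0, 1]^s

def f(x):
    # Separable test integrand on [0, 1]^s with known exact integral 1:
    # each factor (pi/2) * sin(pi * x_j) integrates to 1 over [0, 1].
    return np.prod((np.pi / 2.0) * np.sin(np.pi * x), axis=1)

exact = 1.0
rng = np.random.default_rng(0)

for N in (2**8, 2**12, 2**16):
    # Plain Monte Carlo: i.i.d. uniform points, probabilistic error ~ O(N^(-1/2)).
    x_mc = rng.random((N, s))
    err_mc = abs(f(x_mc).mean() - exact)

    # Quasi-Monte Carlo: scrambled Sobol points (a low-discrepancy set),
    # deterministic error bound ~ O((log N)^s / N) via Koksma-Hlawka.
    sobol = qmc.Sobol(d=s, scramble=True, seed=0)
    x_qmc = sobol.random(N)
    err_qmc = abs(f(x_qmc).mean() - exact)

    print(f"N={N:6d}  MC error={err_mc:.2e}  QMC error={err_qmc:.2e}")
```

With a smooth integrand such as this one, the printed quasi-Monte Carlo errors typically shrink roughly like 1/N while the plain Monte Carlo errors shrink like N^{-1/2}, in line with the rates quoted above. SciPy's scipy.stats.qmc.discrepancy can also be used to estimate L2-type discrepancies of a point set, although it does not compute the star discrepancy D_N appearing in the Koksma-Hlawka inequality.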