Kahan Summation Algorithm

In numerical analysis, the Kahan summation algorithm (also known as compensated summation) significantly reduces the numerical error in the total obtained by adding a sequence of finite-precision floating-point numbers, compared to the obvious approach. This is done by keeping a separate running compensation (a variable that accumulates the small errors).
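As a minimal sketch of how the running compensation works (the names kahan_sum, total, and c are illustrative, not taken from the source):

def kahan_sum(values):
    """Sum an iterable of floats using compensated (Kahan) summation."""
    total = 0.0
    c = 0.0                   # running compensation for lost low-order bits
    for x in values:
        y = x - c             # apply the compensation from the previous step
        t = total + y         # total is large, y is small: low-order bits of y are lost here
        c = (t - total) - y   # (t - total) recovers the high-order part of y; subtracting y leaves the lost low-order part
        total = t
    return total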

In particular, simply summing n numbers in sequence has a worst-case error that grows proportionally to n, and a root mean square error that grows as √n for random inputs (the roundoff errors form a random walk). With compensated summation, the worst-case error bound is independent of n, so a large number of values can be summed with an error that only depends on the floating-point precision.
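A quick way to see the difference is to compare the sketch above against Python's built-in sum (naive left-to-right summation) and math.fsum (a correctly rounded reference); the exact figures below depend on the platform's double precision and are only indicative:

import math

data = [0.1] * 10_000_000
print(sum(data))         # naive sum: roundoff accumulates, e.g. 999999.9998...
print(kahan_sum(data))   # compensated sum: error stays at the level of one rounding
print(math.fsum(data))   # correctly rounded reference, 1000000.0000000002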

The algorithm is attributed to William Kahan. Similar, earlier techniques include, for example, Bresenham's line algorithm, which keeps track of the accumulated error in integer operations (although it was first documented around the same time), and delta-sigma modulation (which integrates the error rather than just summing it).

