**Arithmetic coding** is a form of entropy encoding used in lossless data compression. Normally, a string of characters such as the words "hello there" is represented using a fixed number of bits per character, as in the ASCII code. When a string is converted to arithmetic encoding, frequently used characters are stored with fewer bits and infrequently occurring characters are stored with more bits, resulting in fewer bits used in total. Arithmetic coding differs from other forms of entropy encoding, such as Huffman coding, in that rather than separating the input into component symbols and replacing each with a code, arithmetic coding encodes the entire message into a single number: an arbitrary-precision fraction *n*, where 0.0 ≤ *n* < 1.0.
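The idea of encoding a whole message as one fraction can be sketched in a few lines: starting from the interval [0, 1), each symbol narrows the current interval to the sub-interval assigned to that symbol, and any number inside the final interval identifies the message. The sketch below uses floating-point arithmetic and an assumed symbol-probability model for illustration; a practical coder would use integer arithmetic with renormalization to avoid precision loss.

```python
def build_ranges(probs):
    """Map each symbol to its cumulative sub-interval [lo, hi) of [0, 1)."""
    ranges, cum = {}, 0.0
    for sym, p in probs.items():
        ranges[sym] = (cum, cum + p)
        cum += p
    return ranges

def encode(message, probs):
    """Narrow [low, high) once per symbol; return a number inside the result."""
    ranges = build_ranges(probs)
    low, high = 0.0, 1.0
    for sym in message:
        lo, hi = ranges[sym]
        width = high - low
        high = low + width * hi
        low = low + width * lo
    return (low + high) / 2  # any value in [low, high) encodes the message

def decode(code, length, probs):
    """Recover `length` symbols by locating `code` in successive sub-intervals."""
    ranges = build_ranges(probs)
    out, low, high = [], 0.0, 1.0
    for _ in range(length):
        width = high - low
        target = (code - low) / width
        for sym, (lo, hi) in ranges.items():
            if lo <= target < hi:
                out.append(sym)
                high = low + width * hi
                low = low + width * lo
                break
    return "".join(out)

# Assumed model: probabilities for the symbols of "hello".
probs = {"h": 0.2, "e": 0.2, "l": 0.4, "o": 0.2}
code = encode("hello", probs)
print(code)                        # a single fraction in [0, 1)
print(decode(code, 5, probs))     # recovers "hello"
```

Note how "l", the most frequent symbol here, is given the widest sub-interval (0.4), so each "l" shrinks the interval least; this is where the shorter-codes-for-frequent-symbols behavior comes from.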


### Famous quotes containing the word arithmetic:

“I hope I may claim in the present work to have made it probable that the laws of *arithmetic* are analytic judgments and consequently a priori. *Arithmetic* thus becomes simply a development of logic, and every proposition of *arithmetic* a law of logic, albeit a derivative one. To apply *arithmetic* in the physical sciences is to bring logic to bear on observed facts; calculation becomes deduction.”

—Gottlob Frege (1848–1925)