Context Mixing - Application To Data Compression

Application To Data Compression

Suppose that we are given two conditional probabilities, P(X|A) and P(X|B), and we wish to estimate P(X|A,B), the probability of event X given both conditions A and B. There is insufficient information for probability theory to give a result. In fact, it is possible to construct scenarios in which the result could be anything at all. But intuitively, we would expect the result to be some kind of average of the two.

The problem is important for data compression. In this application, A and B are contexts, X is the event that the next bit or symbol of the data to be compressed has a particular value, and P(X|A) and P(X|B) are the probability estimates given by two independent models. The compression ratio depends on how closely the estimated probability approaches the true but unknown probability of event X. Contexts A and B often occur frequently enough to estimate P(X|A) and P(X|B) accurately by counting occurrences of X in each context, but the two contexts may not have occurred together often enough, or there may be insufficient computing resources (time and memory) to collect statistics for the combined case.

For example, suppose that we are compressing a text file. We wish to predict whether the next character will be a linefeed, given that the previous character was a period (context A) and that the last linefeed occurred 72 characters ago (context B). Suppose that a linefeed previously occurred after 1 of the last 5 periods (P(X|A) = 1/5 = 0.2) and in 5 out of the last 10 lines at column 72 (P(X|B) = 5/10 = 0.5). How should these predictions be combined?

Two general approaches have been used: linear and logistic mixing. Linear mixing takes a weighted average of the predictions, with each prediction weighted by the evidence behind it. In this example, P(X|B) gets more weight than P(X|A) because it is based on a larger number of observations. Older versions of PAQ use this approach. Newer versions use logistic (or neural network) mixing, first transforming the predictions into the logistic domain, log(p/(1-p)), before averaging. This effectively gives greater weight to predictions near 0 or 1, in this case P(X|A). In both cases, additional weights may be assigned to each input model and adapted to favor the models that have given the most accurate predictions in the past. All but the oldest versions of PAQ use adaptive weighting.
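To make the example above concrete, here is a small Python sketch (not part of the original article) that combines the two predictions both ways: a linear mix in which each prediction is weighted by the number of observations behind it, and an equal-weight logistic mix using log(p/(1-p)). Weighting by raw observation counts is an illustrative simplification; real compressors use more refined evidence weights.

```python
import math

def stretch(p):
    """Map a probability to the logistic domain: log(p/(1-p))."""
    return math.log(p / (1.0 - p))

def squash(x):
    """Inverse of stretch: map a logistic-domain value back into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Predictions from the example: 1 linefeed after the last 5 periods,
# and a linefeed at column 72 in 5 of the last 10 lines.
p_a, n_a = 1 / 5, 5     # P(X|A) = 0.2, based on 5 observations
p_b, n_b = 5 / 10, 10   # P(X|B) = 0.5, based on 10 observations

# Linear mixing: average the probabilities, weighted by evidence
# (here simply the observation counts).
linear = (n_a * p_a + n_b * p_b) / (n_a + n_b)

# Logistic mixing: average in the logistic domain, then squash back.
# Equal weights here; adaptive mixers learn the weights instead.
logistic = squash((stretch(p_a) + stretch(p_b)) / 2)

print(f"linear mix:   {linear:.3f}")    # 0.400
print(f"logistic mix: {logistic:.3f}")  # 0.333, pulled toward the more extreme P(X|A)
```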

Most context mixing compressors predict one bit of input at a time. The output probability is simply the probability that the next bit will be a 1.
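The following is a minimal, illustrative sketch of an adaptively weighted bit-level logistic mixer in the general spirit of newer PAQ versions; the class name, learning rate, and zero-initialized weights are assumptions for illustration rather than PAQ's actual implementation. Each model supplies its estimate of the probability that the next bit is a 1; the mixer combines the estimates in the logistic domain and, once the actual bit is seen, adjusts each model's weight in proportion to its stretched prediction and the prediction error.

```python
import math

def stretch(p):
    """Map a probability to the logistic domain: log(p/(1-p))."""
    return math.log(p / (1.0 - p))

def squash(x):
    """Inverse of stretch: map a logistic-domain value back into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

class LogisticMixer:
    """Adaptive logistic mixing of several per-model bit predictions (illustrative)."""

    def __init__(self, n_models, learning_rate=0.01):
        # Weights start at zero, so the first mixed prediction is 0.5
        # until the mixer has learned which models to trust.
        self.w = [0.0] * n_models
        self.lr = learning_rate
        self.inputs = [0.0] * n_models  # stretched inputs from the last mix

    def mix(self, probs):
        """Combine per-model estimates of P(next bit = 1) into one probability.

        Inputs must lie strictly between 0 and 1 so stretch() is defined.
        """
        self.inputs = [stretch(p) for p in probs]
        return squash(sum(w * x for w, x in zip(self.w, self.inputs)))

    def update(self, p, bit):
        """After the actual bit is seen, nudge weights to reduce coding cost."""
        error = bit - p
        self.w = [w + self.lr * x * error for w, x in zip(self.w, self.inputs)]

# Usage: mix two model predictions for one bit, then train on the outcome.
mixer = LogisticMixer(n_models=2)
p = mixer.mix([0.2, 0.5])   # combined P(next bit = 1)
mixer.update(p, bit=1)      # suppose the bit turned out to be 1
```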
