Max Ent

In Bayesian probability theory, the principle of maximum entropy states that, subject to precisely stated prior data (which must be expressible as testable information), the probability distribution that best represents the current state of knowledge is the one with the largest information-theoretic entropy.

Let some precisely stated prior data or testable information about a probability distribution function be given. Consider the set of all trial probability distributions that are consistent with that prior data. Of those, the one that maximizes the information entropy is the distribution to use under the given prior data; a worked sketch follows below.
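To make the selection rule concrete, here is a minimal sketch in Python under assumed example data: the testable information is taken to be that a six-sided die has mean 4.5 (a hypothetical prior datum, not stated in the text above). Among all distributions on {1, ..., 6} with that mean, the entropy-maximizing one takes the Gibbs form p_i proportional to exp(lambda * x_i), and the multiplier lambda is found numerically by bisection.

    import math

    # Maximum-entropy sketch on a finite support (assumed example data).
    # Testable information: a six-sided die has mean 4.5.
    # Among all distributions p over {1,...,6} with that mean, the one that
    # maximizes H(p) = -sum p_i log p_i has the form p_i ~ exp(lam * x_i);
    # the multiplier lam is fixed by the mean constraint.

    support = [1, 2, 3, 4, 5, 6]
    target_mean = 4.5  # hypothetical prior datum

    def gibbs(lam):
        """Distribution with p_i proportional to exp(lam * x_i)."""
        weights = [math.exp(lam * x) for x in support]
        z = sum(weights)
        return [w / z for w in weights]

    def mean(p):
        return sum(x * pi for x, pi in zip(support, p))

    # Solve mean(gibbs(lam)) = target_mean by bisection;
    # the mean is monotonically increasing in lam.
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if mean(gibbs(mid)) < target_mean:
            lo = mid
        else:
            hi = mid

    p = gibbs((lo + hi) / 2)
    entropy = -sum(pi * math.log(pi) for pi in p)
    print("max-entropy distribution:", [round(pi, 4) for pi in p])
    print("mean:", round(mean(p), 4), "entropy (nats):", round(entropy, 4))

With no constraint beyond normalization (lambda = 0), the same construction returns the uniform distribution, the familiar maximum-entropy baseline when nothing else is known.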
