Maximum Entropy Thermodynamics - Maximum Shannon Entropy

Central to the MaxEnt thesis is the principle of maximum entropy. It states that, given certain "testable information" about a probability distribution (for example, particular expectation values) which is not in itself sufficient to determine the distribution uniquely, one should prefer the distribution that maximizes the Shannon information entropy.
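
As an illustrative sketch, the following Python snippet applies the principle to a small finite state space with a single expectation-value constraint; the function name maxent_distribution, the example states, and the target mean are hypothetical. It relies on the standard result that the constrained maximizer is an exponential-family (Boltzmann-like) distribution whose Lagrange multiplier is fixed by the constraint.

    import numpy as np
    from scipy.optimize import brentq

    def maxent_distribution(x, target_mean):
        """Maximum-entropy distribution on states x subject to a prescribed mean.

        The maximizer of -sum(p * log(p)) under sum(p) = 1 and sum(p * x) = target_mean
        has the form p_i proportional to exp(-lam * x_i); lam is found numerically.
        """
        x = np.asarray(x, dtype=float)

        def mean_at(lam):
            w = np.exp(-lam * (x - x.min()))   # shift by x.min() for numerical stability
            p = w / w.sum()
            return p @ x

        # Solve mean_at(lam) = target_mean for the Lagrange multiplier lam.
        # The bracket assumes the target mean lies strictly between min(x) and max(x).
        lam = brentq(lambda l: mean_at(l) - target_mean, -50.0, 50.0)
        w = np.exp(-lam * (x - x.min()))
        p = w / w.sum()
        return p, lam

    x = np.array([0.0, 1.0, 2.0, 3.0])
    p, lam = maxent_distribution(x, target_mean=1.2)
    print(p, lam, -np.sum(p * np.log(p)))   # distribution, multiplier, Shannon entropy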

This is known as the Gibbs algorithm, introduced by J. Willard Gibbs in 1878 to set up statistical ensembles for predicting the properties of thermodynamic systems at equilibrium. It is the cornerstone of the statistical mechanical analysis of the thermodynamic properties of equilibrium systems (see partition function).

A direct connection is thus made between the equilibrium thermodynamic entropy S_Th, a state function of pressure, volume, temperature, etc., and the information entropy for the predicted distribution with maximum uncertainty conditioned only on the expectation values of those variables.
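
In the discrete case, with S_I denoting this information entropy of the predicted distribution {p_i} over microstates i, the connection is conventionally written

    S_{\mathrm{Th}}(P, V, T, \ldots) = k_B \, S_I, \qquad S_I = -\sum_i p_i \ln p_i .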

k_B, Boltzmann's constant, has no fundamental physical significance here, but is necessary to retain consistency with the previous historical definition of entropy by Clausius (1865) (see Boltzmann's constant).

However, the MaxEnt school argue that the MaxEnt approach is a general technique of statistical inference, with applications far beyond this. It can therefore also be used to predict a distribution for "trajectories" Γ "over a period of time" by maximising the corresponding information entropy.
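
For a discrete set of trajectories Γ with probabilities p_Γ (the notation assumed here), this quantity is conventionally written

    S_I = -\sum_\Gamma p_\Gamma \ln p_\Gamma .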

This "information entropy" does not necessarily have a simple correspondence with thermodynamic entropy. But it can be used to predict features of nonequilibrium thermodynamic systems as they evolve over time.

In the field of near-equilibrium thermodynamics, the Onsager reciprocal relations and the Green-Kubo relations fall out very directly. The approach also creates a solid theoretical framework for the study of far-from-equilibrium thermodynamics, making the derivation of the entropy production fluctuation theorem particularly straightforward. Practical calculations for most far-from-equilibrium systems remain very challenging, however.

Technical note: For the reasons discussed in the article on differential entropy, the simple definition of Shannon entropy ceases to be directly applicable to random variables with continuous probability distribution functions. Instead, the appropriate quantity to maximise is the "relative information entropy".
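
In standard notation, with the symbols p(x) and m(x) defined in the next paragraph, this quantity reads

    H_c = -\int p(x) \, \ln\frac{p(x)}{m(x)} \, dx .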

H_c is the negative of the Kullback-Leibler divergence, or discrimination information, of m(x) from p(x), where m(x) is a prior invariant measure for the variable(s). The relative entropy H_c is always less than or equal to zero, and can be thought of as (the negative of) the number of bits of uncertainty lost by fixing on p(x) rather than m(x). Unlike the Shannon entropy, the relative entropy H_c has the advantage of remaining finite and well-defined for continuous x, and of being invariant under 1-to-1 coordinate transformations. The two expressions coincide for discrete probability distributions if one assumes that m(x_i) is uniform, i.e. the principle of equal a priori probability, which underlies statistical thermodynamics.
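
As a quick check of the last statement, taking m(x_i) = m to be a constant (an assumption made here for illustration):

    H_c = -\sum_i p_i \ln\frac{p_i}{m} = -\sum_i p_i \ln p_i + \ln m ,

which is the Shannon entropy up to an additive constant, and equals it exactly when m = 1; in either case the same distribution maximizes both.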
