Fisher Information

In mathematical statistics and information theory, the Fisher information (sometimes simply called information) is the variance of the score, or equivalently the expected value of the observed information. In Bayesian statistics, the asymptotic distribution of the posterior mode depends on the Fisher information and not on the prior, according to the Bernstein–von Mises theorem, which Laplace anticipated for exponential families. The role of the Fisher information in the asymptotic theory of maximum-likelihood estimation was emphasized by the statistician R. A. Fisher (following initial results by F. Y. Edgeworth). The Fisher information also appears in the calculation of the Jeffreys prior, a standard non-informative prior in Bayesian statistics.
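As a concrete reference for the two equivalent definitions above, here is a minimal LaTeX statement of the scalar case, writing f(X; θ) for the density of an observation X under parameter θ (this notation is an assumption, not fixed by the text):

```latex
\mathcal{I}(\theta)
  = \operatorname{E}\!\left[\left(\frac{\partial}{\partial\theta}
      \log f(X;\theta)\right)^{2} \,\middle|\, \theta\right]
  = -\operatorname{E}\!\left[\frac{\partial^{2}}{\partial\theta^{2}}
      \log f(X;\theta) \,\middle|\, \theta\right]
```

The first expression equals the variance of the score because, under the usual regularity conditions, the score has mean zero; the second is the expected observed information.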

The Fisher information matrix is used to calculate the covariance matrices associated with maximum-likelihood estimates. It is also used in the formulation of test statistics, such as the Wald test.
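A minimal sketch of that covariance connection, under assumptions not taken from the text (i.i.d. Bernoulli(p) data, so the MLE is the sample mean and the sample's Fisher information is I(p) = n / (p(1 − p))): the simulation below checks that the inverse Fisher information approximates the variance of the MLE.

```python
# Sketch: inverse Fisher information as the asymptotic variance of an MLE.
# Model and constants here (Bernoulli(p), p_true = 0.3) are illustrative
# assumptions, not from the source text.
import random

random.seed(0)
p_true, n, trials = 0.3, 1_000, 2_000

# Simulate many datasets and record the MLE (the sample mean) from each.
mles = []
for _ in range(trials):
    sample = [1 if random.random() < p_true else 0 for _ in range(n)]
    mles.append(sum(sample) / n)

mean = sum(mles) / trials
empirical_var = sum((m - mean) ** 2 for m in mles) / (trials - 1)

fisher_info = n / (p_true * (1 - p_true))   # I(p) for the whole sample
asymptotic_var = 1 / fisher_info            # inverse Fisher information

print(f"empirical variance of MLE:  {empirical_var:.3e}")
print(f"inverse Fisher information: {asymptotic_var:.3e}")  # ~ the same
```

In the multi-parameter case the same idea holds with matrices: the inverse of the Fisher information matrix approximates the covariance matrix of the maximum-likelihood estimator.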

