Minimax Estimator - Least Favorable Distribution

Least Favorable Distribution

Logically, an estimator is minimax when it is the best in the worst case. Continuing this logic, a minimax estimator should be a Bayes estimator with respect to a least favorable prior distribution of $\theta$. To demonstrate this notion, denote the average risk of the Bayes estimator $\delta_\pi$ with respect to a prior distribution $\pi$ as

$$r_\pi = \int R(\theta, \delta_\pi) \, d\pi(\theta).$$

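As a concrete illustration (not part of the original text), the following Python sketch approximates $r_\pi$ by Monte Carlo in a toy problem: a single observation $x \sim N(\theta, 1)$ with prior $\theta \sim N(0, \tau^2)$, for which the Bayes estimator is the posterior mean $\delta_\pi(x) = \frac{\tau^2}{\tau^2 + 1} x$. All names and parameter values are illustrative.

```python
import numpy as np

# Toy setting: x ~ N(theta, 1), prior theta ~ N(0, tau2).
# The Bayes estimator (posterior mean) is delta(x) = c * x with
# c = tau2 / (tau2 + 1), whose frequentist risk under squared error is
#   R(theta, delta) = c^2 + (1 - c)^2 * theta^2   (variance + bias^2).
tau2 = 1.0
c = tau2 / (tau2 + 1.0)

def risk(theta):
    return c**2 + (1.0 - c)**2 * theta**2

# Average risk r_pi = E_pi[R(theta, delta_pi)], approximated by averaging
# the risk over draws from the prior.
rng = np.random.default_rng(0)
thetas = rng.normal(0.0, np.sqrt(tau2), size=1_000_000)
print(risk(thetas).mean())  # ~= tau2 / (tau2 + 1) = 0.5 in closed form
```
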
Definition: A prior distribution $\pi$ is called least favorable if for any other distribution $\pi'$ the average risk satisfies $r_\pi \geq r_{\pi'}$.

Theorem 1: If $r_\pi = \sup_\theta R(\theta, \delta_\pi)$, then:

  1. $\delta_\pi$ is minimax.
  2. If $\delta_\pi$ is a unique Bayes estimator, it is also the unique minimax estimator.
  3. $\pi$ is least favorable.

Corollary: If a Bayes estimator has constant risk, it is minimax. Note that this is not a necessary condition.

Example 1, Unfair coin: Consider the problem of estimating the "success" rate of a binomial variable, $x \sim B(n, \theta)$. This may be viewed as estimating the rate at which an unfair coin falls on "heads" or "tails". In this case the Bayes estimator with respect to a Beta-distributed prior, $\theta \sim \mathrm{Beta}(\sqrt{n}/2, \sqrt{n}/2)$, is

$$\delta^M = \frac{x + 0.5\sqrt{n}}{n + \sqrt{n}},$$

with constant Bayes risk

$$r = \frac{1}{4(1 + \sqrt{n})^2}$$

and, according to the Corollary, is minimax.
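As a numerical check (not from the original article), the sketch below computes the exact frequentist risk of $\delta^M$ over a grid of $\theta$ values and compares it with the ML estimator $x/n$; the choice $n = 25$ is arbitrary.

```python
import numpy as np
from scipy.stats import binom

n = 25                      # number of tosses (illustrative choice)
a = 0.5 * np.sqrt(n)        # Beta prior parameter sqrt(n)/2
x = np.arange(n + 1)        # all possible outcomes

def exact_risk(estimates, theta):
    # Exact risk E[(delta(x) - theta)^2], summing over the binomial pmf.
    return np.sum(binom.pmf(x, n, theta) * (estimates - theta) ** 2)

delta_m = (x + a) / (n + np.sqrt(n))   # minimax (Bayes) estimator
delta_ml = x / n                       # ML estimator

for theta in np.linspace(0.1, 0.9, 5):
    print(f"theta={theta:.1f}  "
          f"R_minimax={exact_risk(delta_m, theta):.6f}  "
          f"R_ML={exact_risk(delta_ml, theta):.6f}")

# R_minimax is constant in theta and equals 1 / (4 * (1 + sqrt(n))**2),
# whereas R_ML = theta * (1 - theta) / n varies and is larger near 1/2.
print(1 / (4 * (1 + np.sqrt(n)) ** 2))
```
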

Definition: A sequence of prior distributions $\pi_n$ is called least favorable if for any other distribution $\pi'$,

$$\lim_{n \to \infty} r_{\pi_n} \geq r_{\pi'}.$$

Theorem 2: If there are a sequence of priors $\pi_n$ and an estimator $\delta$ such that $\sup_\theta R(\theta, \delta) = \lim_{n \to \infty} r_{\pi_n}$, then:

  1. $\delta$ is minimax.
  2. The sequence $\pi_n$ is least favorable.

Notice that no uniqueness is guaranteed here. For example, the ML estimator of a Gaussian mean (the setting of Example 2 below) may be attained as the limit of Bayes estimators with respect to a uniform prior $\pi_n \sim U[-n, n]$ with increasing support, and also with respect to a zero-mean normal prior $\pi_n \sim N(0, n\sigma^2)$ with increasing variance. So neither the resulting ML estimator is the unique minimax estimator, nor is the least favorable prior unique.
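To make the limiting argument concrete, here is a small sketch (illustrative, not from the original text) showing that for a single observation $x \sim N(\theta, \sigma^2)$, the posterior mean under the prior $\pi_n \sim N(0, n\sigma^2)$ shrinks $x$ by the factor $n/(n+1)$ and therefore converges to the ML estimator $x$ as $n \to \infty$.

```python
import numpy as np

sigma2 = 1.0
x = 2.3  # a single observation; the value is arbitrary

# Posterior mean for x ~ N(theta, sigma2) with prior theta ~ N(0, n*sigma2):
# delta_n(x) = (n * sigma2) / (n * sigma2 + sigma2) * x = n / (n + 1) * x.
for n in [1, 10, 100, 10_000]:
    delta_n = n / (n + 1) * x
    print(n, delta_n)

# delta_n -> x (the ML estimator) as n grows, so the ML estimator is a
# limit of Bayes estimators and, by Theorem 2, minimax.
```
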

Example 2: Consider the problem of estimating the mean of a $p$-dimensional Gaussian random vector, $x \sim N(\theta, I_p \sigma^2)$. The maximum likelihood (ML) estimator for $\theta$ in this case is simply $\delta^{ML} = x$, and its risk is

$$R(\theta, \delta^{ML}) = E\|\delta^{ML} - \theta\|^2 = p\sigma^2.$$

The risk is constant, but the ML estimator is actually not a Bayes estimator, so the Corollary of Theorem 1 does not apply. However, the ML estimator is the limit of the Bayes estimators with respect to the prior sequence $\pi_n \sim N(0, n\sigma^2 I_p)$ and hence is indeed minimax according to Theorem 2. Nonetheless, minimaxity does not always imply admissibility. In fact, in this example the ML estimator is known to be inadmissible (not admissible) whenever $p \geq 3$: the famous James–Stein estimator dominates the ML estimator in that case. Though both estimators have the same risk $p\sigma^2$ when $\|\theta\| \to \infty$, and they are both minimax, the James–Stein estimator has smaller risk for any finite $\|\theta\|$.
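The domination can be checked by simulation. The following sketch (illustrative; the dimension, noise level, and trial count are arbitrary choices) estimates the risk of the ML estimator and of the James–Stein estimator $\delta^{JS} = \left(1 - \frac{(p-2)\sigma^2}{\|x\|^2}\right) x$ by Monte Carlo for several values of $\|\theta\|$.

```python
import numpy as np

rng = np.random.default_rng(0)
p, sigma2, trials = 10, 1.0, 200_000

def risks(theta):
    # Draw x ~ N(theta, sigma2 * I_p) and compare E||delta - theta||^2
    # for the ML estimator (x itself) and the James-Stein estimator.
    x = theta + np.sqrt(sigma2) * rng.standard_normal((trials, p))
    norm2 = np.sum(x ** 2, axis=1, keepdims=True)
    js = (1.0 - (p - 2) * sigma2 / norm2) * x
    r_ml = np.mean(np.sum((x - theta) ** 2, axis=1))
    r_js = np.mean(np.sum((js - theta) ** 2, axis=1))
    return r_ml, r_js

for scale in [0.0, 1.0, 3.0, 10.0]:
    theta = scale * np.ones(p) / np.sqrt(p)   # vector with ||theta|| = scale
    r_ml, r_js = risks(theta)
    print(f"||theta||={scale:5.1f}  R_ML={r_ml:.3f}  R_JS={r_js:.3f}")

# R_ML stays near p * sigma2 = 10 for every theta, while R_JS is strictly
# smaller (about 2 * sigma2 at theta = 0) and approaches p * sigma2 only
# as ||theta|| grows, illustrating James-Stein domination for p >= 3.
```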
