Bayes Factor - Interpretation

A value of K > 1 means that M1 is more strongly supported by the data under consideration than M2. Note that classical hypothesis testing gives one hypothesis (or model) preferred status (the 'null hypothesis'), and only considers evidence against it. Harold Jeffreys gave a scale for interpretation of K:
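For reference, the quantity K discussed here is the Bayes factor as defined in the parent article: the ratio of the marginal likelihoods of the data D under the two models,

```latex
K = \frac{\Pr(D \mid M_1)}{\Pr(D \mid M_2)}
```

so K > 1 indicates the data are more probable under M1 than under M2.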

K              dB         bits         Strength of evidence
< 1:1          < 0        < 0          Negative (supports M2)
1:1 to 3:1     0 to 5     0 to 1.6     Barely worth mentioning
3:1 to 10:1    5 to 10    1.6 to 3.3   Substantial
10:1 to 30:1   10 to 15   3.3 to 5.0   Strong
30:1 to 100:1  15 to 20   5.0 to 6.6   Very strong
> 100:1        > 20       > 6.6        Decisive

The second column gives the corresponding weights of evidence in decibans (a deciban is one-tenth of a ban, i.e. a factor of 10^(1/10) in the odds); the third column gives the same weights in bits for clarity. According to I. J. Good, a change in weight of evidence of 1 deciban, or about 1/3 of a bit (i.e. a change in the odds from evens to about 5:4), is about as fine a gradation as humans can reasonably perceive in their degree of belief in a hypothesis in everyday use.
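The conversions underlying the table can be sketched as follows; the function names are illustrative, not from the source. Decibans are 10·log10(K) and bits are log2(K), and the labels follow Jeffreys' scale as tabulated above.

```python
import math

def evidence_weights(K):
    """Convert a Bayes factor K to weight of evidence in decibans and bits."""
    decibans = 10 * math.log10(K)  # one ban = a factor of 10 in the odds
    bits = math.log2(K)            # one bit = a factor of 2 in the odds
    return decibans, bits

def jeffreys_label(K):
    """Classify a Bayes factor K on Jeffreys' scale (as in the table above)."""
    if K < 1:
        return "Negative (supports M2)"
    if K < 3:
        return "Barely worth mentioning"
    if K < 10:
        return "Substantial"
    if K < 30:
        return "Strong"
    if K < 100:
        return "Very strong"
    return "Decisive"
```

For example, K = 100 corresponds to 20 decibans, or about 6.6 bits, on the boundary between "very strong" and "decisive" evidence.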

The use of Bayes factors or classical hypothesis testing takes place in the context of inference rather than decision-making under uncertainty. That is, we merely wish to find out which hypothesis is true, rather than actually making a decision on the basis of this information. Frequentist statistics draws a strong distinction between these two because classical hypothesis tests are not coherent in the Bayesian sense. Bayesian procedures, including Bayes factors, are coherent, so there is no need to draw such a distinction. Inference is then simply regarded as a special case of decision-making under uncertainty in which the resulting action is to report a value. For decision-making, Bayesian statisticians might use a Bayes factor combined with a prior distribution and a loss function associated with making the wrong choice. In an inference context the loss function would take the form of a scoring rule. Use of a logarithmic score function, for example, leads to the expected utility taking the form of the Kullback–Leibler divergence.
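The final claim can be made concrete with a standard derivation (not spelled out in the source): under the logarithmic scoring rule, reporting a distribution q when the true distribution is p incurs an expected loss, relative to reporting the truth, of

```latex
\mathbb{E}_{x \sim p}\!\left[\log p(x)\right] - \mathbb{E}_{x \sim p}\!\left[\log q(x)\right]
  = \sum_{x} p(x) \log \frac{p(x)}{q(x)}
  = D_{\mathrm{KL}}(p \,\|\, q),
```

the Kullback–Leibler divergence, which is minimized (at zero) exactly when q = p.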
