When the number of moment conditions is greater than the dimension of the parameter vector θ, the model is said to be over-identified. Over-identification allows us to check whether the model's moment conditions are consistent with the data.
Conceptually we can check whether $\hat{m}(\hat\theta)$ is sufficiently close to zero to suggest that the model fits the data well. The GMM method has then replaced the problem of solving the equation $m(\theta) = 0$, which chooses $\theta$ to match the restrictions exactly, by a minimization calculation. The minimization can always be conducted even when no $\theta_0$ exists such that $m(\theta_0) = 0$. This is what the J-test does. The J-test is also called a test for over-identifying restrictions.
Formally we consider two hypotheses:
- $H_0:\ \mathrm{E}[\,g(Y_t,\theta_0)\,] = 0$ (the null hypothesis that the model is “valid”), and
- $H_1:\ \mathrm{E}[\,g(Y_t,\theta)\,] \neq 0$ for all $\theta$ (the alternative hypothesis that the model is “invalid”; the data do not come close to meeting the restrictions)
Under the null hypothesis $H_0$, the following so-called J-statistic is asymptotically chi-squared with $k - l$ degrees of freedom. Define $J$ to be:

$$ J \equiv n \cdot \hat{m}(\hat\theta)^{\mathsf T}\, \hat{W}\, \hat{m}(\hat\theta)\ \xrightarrow{d}\ \chi^2_{k-l} \quad \text{under } H_0, $$
where $\hat\theta$ is the GMM estimator of the parameter $\theta_0$, $k$ is the number of moment conditions (dimension of vector $g$), and $l$ is the number of estimated parameters (dimension of vector $\theta$). Matrix $\hat{W}$ must converge in probability to $\Omega^{-1}$, the efficient weighting matrix (note that previously we only required that $W$ be proportional to $\Omega^{-1}$ for the estimator to be efficient; however, in order to conduct the J-test, $W$ must be exactly equal to $\Omega^{-1}$, not simply proportional).
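The two-step computation of $\hat\theta$, $\hat{W}$, and $J$ can be sketched for a simple over-identified linear model. Everything below (the data-generating process, variable names, sample size) is an illustrative assumption, not part of the text: two instruments ($k = 2$) identify one parameter ($l = 1$), so $k - l = 1$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# Two instruments (k = 2) for one parameter (l = 1): over-identified.
z = rng.normal(size=(n, 2))
x = z @ np.array([1.0, 0.5]) + rng.normal(size=n)
theta_true = 2.0
y = theta_true * x + rng.normal(size=n)

def moments(theta):
    # g_t(theta) = z_t * (y_t - theta * x_t); rows are g_t, t = 1..n.
    return z * (y - theta * x)[:, None]

def gmm_estimate(W):
    # In this linear model the minimizer of m(theta)' W m(theta) has the
    # closed form theta_hat = (Zx' W Zy) / (Zx' W Zx).
    Zx = z.T @ x / n
    Zy = z.T @ y / n
    return (Zx @ W @ Zy) / (Zx @ W @ Zx)

# Step 1: preliminary estimate with identity weighting.
theta1 = gmm_estimate(np.eye(2))
# Step 2: efficient weight matrix W_hat = (sample covariance of g)^(-1),
# the consistent estimate of Omega^(-1) required by the J-test.
g1 = moments(theta1)
W_hat = np.linalg.inv(g1.T @ g1 / n)
theta2 = gmm_estimate(W_hat)

# J-statistic: n * m(theta_hat)' W_hat m(theta_hat), a nonnegative scalar.
m_hat = moments(theta2).mean(axis=0)
J = n * m_hat @ W_hat @ m_hat
print(f"theta_hat = {theta2:.3f}, J = {J:.3f}")
```

Because the model here is correctly specified, $J$ should look like a draw from $\chi^2_1$, i.e. a small nonnegative number.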
Under the alternative hypothesis $H_1$, the J-statistic is asymptotically unbounded:

$$ J\ \xrightarrow{p}\ \infty \quad \text{under } H_1. $$
To conduct the test we compute the value of $J$ from the data. It is a nonnegative number. We compare it with (say) the 0.95 quantile $q_{0.95}$ of the $\chi^2_{k-l}$ distribution:
- $H_0$ is rejected at the 95% confidence level if $J > q_{0.95}$,
- $H_0$ cannot be rejected at the 95% confidence level if $J \le q_{0.95}$.