
The Philosophy of Inter-rater Agreement

There are several operational definitions of "inter-rater reliability" in use by Examination Boards, reflecting different viewpoints about what constitutes reliable agreement between raters.

There are three operational definitions of agreement (illustrated in the sketch after this list):

1. Reliable raters agree with the "official" rating of a performance.

2. Reliable raters agree with each other about the exact ratings to be awarded.

3. Reliable raters agree about which performance is better and which is worse.
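
To make the contrast between these definitions concrete, here is a minimal Python sketch; it is my own illustration, not part of the source, and the rater names, essay labels, and scores are all invented for the example. It applies each definition to the same small set of ratings, showing how two raters can score poorly on exact agreement yet agree perfectly about which performance is better.

```python
# Hypothetical example: three raters score three essays on a 1-5 scale.
from itertools import combinations

official = {"essay1": 4, "essay2": 2, "essay3": 5}   # the "official" ratings
ratings = {
    "rater_a": {"essay1": 4, "essay2": 2, "essay3": 5},
    "rater_b": {"essay1": 4, "essay2": 3, "essay3": 5},
    "rater_c": {"essay1": 3, "essay2": 2, "essay3": 4},
}

# Definition 1: proportion of a rater's scores that match the official rating.
def agreement_with_official(rater):
    scores = ratings[rater]
    return sum(scores[e] == official[e] for e in scores) / len(scores)

# Definition 2: proportion of exact score matches between two raters.
def exact_pairwise_agreement(r1, r2):
    s1, s2 = ratings[r1], ratings[r2]
    return sum(s1[e] == s2[e] for e in s1) / len(s1)

# Definition 3: agreement only about which performance is better,
# measured here as the proportion of concordant essay pairs.
def rank_order_agreement(r1, r2):
    s1, s2 = ratings[r1], ratings[r2]
    pairs = list(combinations(s1, 2))
    concordant = sum((s1[x] - s1[y]) * (s2[x] - s2[y]) > 0 for x, y in pairs)
    return concordant / len(pairs)

for r in ratings:
    print(r, "vs official:", round(agreement_with_official(r), 2))
for r1, r2 in combinations(ratings, 2):
    print(r1, r2,
          "exact:", round(exact_pairwise_agreement(r1, r2), 2),
          "rank:", round(rank_order_agreement(r1, r2), 2))
```

In this invented data, rater_a and rater_c match exactly on only one essay in three, yet rank all essays in the same order, so they count as unreliable under the second definition but perfectly reliable under the third.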

These combine with two operational definitions of behavior (contrasted in the sketch after this list):

A. Reliable raters are automatons, behaving like "rating machines". This category includes the rating of essays by computer. This behavior can be evaluated by Generalizability theory.

B. Reliable raters behave like independent witnesses. They demonstrate their independence by disagreeing slightly. This behavior can be evaluated by the Rasch model.
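
As a rough illustration of this behavioral distinction (again my own sketch, with invented ratings rather than anything from the source), the following compares a pair of raters whose ratings are identical, as a rating machine's would be, with a pair who agree closely but not exactly, as independent witnesses would.

```python
# Hypothetical ratings for six performances on a 1-5 scale.
rater_x = [4, 2, 5, 3, 4, 1]
rater_y = [4, 2, 5, 3, 4, 1]   # identical to rater_x: automaton-like
rater_z = [4, 3, 5, 3, 4, 2]   # close to rater_x but not identical: witness-like

def describe(a, b):
    exact = sum(x == y for x, y in zip(a, b)) / len(a)
    if exact == 1.0:
        return "identical ratings: behaves like a rating machine"
    return f"exact agreement {exact:.2f}: consistent with independent witnesses"

print(describe(rater_x, rater_y))
print(describe(rater_x, rater_z))
```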

