Inter-rater Reliability - The Philosophy of Inter-rater Agreement

There are several operational definitions of "inter-rater reliability" in use by Examination Boards, reflecting different viewpoints about what constitutes reliable agreement between raters.

There are three operational definitions of agreement:

1. Reliable raters agree with the "official" rating of a performance.

2. Reliable raters agree with each other about the exact ratings to be awarded.

3. Reliable raters agree about which performance is better and which is worse.
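The three definitions can be made concrete with a small sketch. The Python example below is purely illustrative: the ratings, the "official" scores, and the helper functions are invented for the example. Exact agreement is computed as the proportion of identical ratings, and rank-order agreement as the proportion of performance pairs that both raters order the same way.

```python
# Toy illustration of the three operational definitions of agreement,
# using hypothetical ratings of six performances on a 1-5 scale.
official = [3, 4, 2, 5, 3, 1]   # hypothetical "official" ratings
rater_a  = [3, 4, 2, 5, 2, 1]
rater_b  = [4, 5, 3, 5, 3, 2]

def exact_agreement(x, y):
    """Proportion of performances given identical ratings."""
    return sum(a == b for a, b in zip(x, y)) / len(x)

def rank_agreement(x, y):
    """Proportion of performance pairs ordered the same way by both
    raters (pairs tied by either rater are skipped)."""
    pairs = [(i, j) for i in range(len(x)) for j in range(i + 1, len(x))]
    ordered = [(i, j) for i, j in pairs if x[i] != x[j] and y[i] != y[j]]
    same = sum((x[i] - x[j]) * (y[i] - y[j]) > 0 for i, j in ordered)
    return same / len(ordered)

print(exact_agreement(rater_a, official))  # definition 1: agreement with the official rating
print(exact_agreement(rater_a, rater_b))   # definition 2: exact agreement between raters
print(rank_agreement(rater_a, rater_b))    # definition 3: agreement on which performance is better
```

In this invented example the two raters rarely award identical scores (definition 2 gives about 0.17), yet they agree completely about which performances are better and which are worse (definition 3 gives 1.0), showing how the choice of definition changes the verdict on the same ratings.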

These definitions of agreement combine with two operational definitions of behavior:

A. Reliable raters are automatons, behaving like "rating machines". This category includes the rating of essays by computer. This behavior can be evaluated by Generalizability theory.

B. Reliable raters behave like independent witnesses. They demonstrate their independence by disagreeing slightly. This behavior can be evaluated by the Rasch model.
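The difference between the two behavioral definitions can be sketched with a small simulation: "rating machine" raters reproduce identical ratings, while "independent witness" raters each add a small independent error and so disagree slightly. The scores, noise model, and helper below are invented for illustration; this is not a Generalizability-theory or Rasch analysis, only a picture of the two behaviors those methods are suited to.

```python
import random

random.seed(0)

# Hypothetical "true" quality of ten performances on a 1-5 scale.
true_scores = [random.randint(1, 5) for _ in range(10)]

# Definition A: raters as "rating machines" -- both reproduce the same ratings.
machine_1 = list(true_scores)
machine_2 = list(true_scores)

# Definition B: raters as independent witnesses -- each adds its own
# small, independent error, so they disagree slightly.
def witness(scores):
    return [min(5, max(1, s + random.choice([-1, 0, 0, 1]))) for s in scores]

witness_1 = witness(true_scores)
witness_2 = witness(true_scores)

def exact_agreement(x, y):
    """Proportion of performances given identical ratings."""
    return sum(a == b for a, b in zip(x, y)) / len(x)

print(exact_agreement(machine_1, machine_2))  # 1.0: identical ratings, no independent information
print(exact_agreement(witness_1, witness_2))  # below 1.0: slight, expected disagreement
```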
