Inter-rater Reliability - The Philosophy of Inter-rater Agreement

Several operational definitions of "inter-rater reliability" are in use by examination boards, reflecting differing viewpoints about what constitutes reliable agreement between raters.

There are three operational definitions of agreement:

1. Reliable raters agree with the "official" rating of a performance.

2. Reliable raters agree with each other about the exact ratings to be awarded.

3. Reliable raters agree about which performance is better and which is worse.
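The three definitions above can be made concrete as simple proportions. The following is a minimal sketch with hypothetical ratings data (all names and scores are illustrative): definition 1 as agreement with the official rating, definition 2 as exact pairwise agreement, and definition 3 as a simple concordance count over pairs of performances.

```python
official = [3, 4, 2, 5, 3]          # hypothetical "official" ratings of 5 performances
rater_a  = [3, 4, 3, 5, 2]          # hypothetical independent raters
rater_b  = [3, 3, 2, 5, 3]

def agreement_with_official(rater, official):
    """Definition 1: proportion of ratings matching the official rating."""
    return sum(r == o for r, o in zip(rater, official)) / len(official)

def exact_agreement(r1, r2):
    """Definition 2: proportion of performances given identical ratings."""
    return sum(x == y for x, y in zip(r1, r2)) / len(r1)

def rank_order_agreement(r1, r2):
    """Definition 3: proportion of performance pairs that both raters
    order the same way (a simple concordance count, ignoring ties)."""
    n = len(r1)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    # only pairs where both raters see a difference are comparable
    comparable = [(i, j) for i, j in pairs
                  if r1[i] != r1[j] and r2[i] != r2[j]]
    concordant = sum((r1[i] - r1[j]) * (r2[i] - r2[j]) > 0
                     for i, j in comparable)
    return concordant / len(comparable)
```

Note that two raters can fail definition 2 on many performances yet still score highly under definition 3: disagreeing on exact scores is compatible with agreeing about which performance is better.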

These combine with two operational definitions of behavior:

A. Reliable raters are automatons, behaving like "rating machines". This category includes the rating of essays by computer. This behavior can be evaluated by Generalizability theory.

B. Reliable raters behave like independent witnesses. They demonstrate their independence by disagreeing slightly. This behavior can be evaluated by the Rasch model.
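The contrast between the two behavior models can be simulated. This is a minimal illustrative sketch (all data hypothetical, not an implementation of Generalizability theory or the Rasch model): "rating machines" reproduce a shared rating exactly, while "independent witnesses" each perturb it slightly and independently.

```python
import random

random.seed(0)
true_scores = [3, 4, 2, 5, 3]       # hypothetical underlying performance levels

def machine_rater(scores):
    # Behavior A: a deterministic rating machine; any two machines agree exactly.
    return list(scores)

def witness_rater(scores, max_shift=1):
    # Behavior B: an independent witness; small, independent disagreements.
    return [s + random.randint(-max_shift, max_shift) for s in scores]

machines = [machine_rater(true_scores) for _ in range(2)]
witnesses = [witness_rater(true_scores) for _ in range(2)]
```

Two machine raters always produce identical ratings, whereas two witnesses typically differ here and there; under the second viewpoint, that slight disagreement is itself evidence that the raters are genuinely independent.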
