Inter-rater Reliability - The Philosophy of Inter-rater Agreement

Several operational definitions of "inter-rater reliability" are in use by Examination Boards, reflecting different views of what counts as reliable agreement between raters.

There are three operational definitions of agreement:

1. Reliable raters agree with the "official" rating of a performance.

2. Reliable raters agree with each other about the exact ratings to be awarded.

3. Reliable raters agree about which performance is better and which is worse.
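The three definitions above lead to different numerical measures. As a minimal sketch (with invented ratings on a 1–5 scale; all names and data here are hypothetical, not from any Examination Board), the same two raters can score quite differently depending on which definition is applied:

```python
from itertools import combinations

official = [3, 4, 2, 5, 1]   # hypothetical "official" ratings of five performances
rater_a  = [3, 4, 2, 4, 1]
rater_b  = [3, 5, 2, 5, 2]

def agreement_with_official(rater, official):
    """Definition 1: fraction of ratings matching the official rating."""
    return sum(r == o for r, o in zip(rater, official)) / len(official)

def exact_agreement(r1, r2):
    """Definition 2: fraction of performances the two raters rate identically."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def rank_agreement(r1, r2):
    """Definition 3: fraction of performance pairs ordered the same way
    (a tie counts as agreement only when both raters tie)."""
    sign = lambda x: (x > 0) - (x < 0)
    pairs = list(combinations(range(len(r1)), 2))
    same = sum(sign(r1[i] - r1[j]) == sign(r2[i] - r2[j]) for i, j in pairs)
    return same / len(pairs)

print(agreement_with_official(rater_a, official))  # 0.8
print(exact_agreement(rater_a, rater_b))           # 0.4
print(rank_agreement(rater_a, rater_b))            # 0.9
```

Here raters A and B rarely award the exact same score (definition 2), yet they almost always agree about which performance is better (definition 3), illustrating why the choice of definition matters.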

These combine with two operational definitions of behavior:

A. Reliable raters are automatons, behaving like "rating machines". This category includes the rating of essays by computer. This behavior can be evaluated by Generalizability theory.

B. Reliable raters behave like independent witnesses. They demonstrate their independence by disagreeing slightly. This behavior can be evaluated by the Rasch model.
