Inter-rater reliability
Measure the level of agreement between independent raters on a performance or outcome
Inter-rater reliability assesses the level of agreement between independent raters on some performance or outcome. For the assessment to be meaningful, raters must apply a standardized and objective operational definition of the performance being rated. If raters hold misconceptions about the construct or apply differing interpretations of the rating criteria, a meaningful estimate of inter-rater reliability cannot be obtained.
With inter-rater reliability, ratings can be made at a categorical (yes/no), ordinal (Likert-type scale), or continuous level, depending upon the process of evaluation. The number of ratings taken and the number of independent raters also play a significant role in choosing the correct test for inter-rater reliability, as illustrated in the sketch below.
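As an illustration of the categorical case, the following sketch computes Cohen's kappa for two raters who each classified the same set of cases as yes/no. The ratings are hypothetical and the use of scikit-learn is an assumption for demonstration; the page itself does not prescribe any particular software.

```python
# Minimal sketch: Cohen's kappa for two raters and categorical (yes/no) ratings.
# The ratings below are hypothetical; assumes scikit-learn is installed.
from sklearn.metrics import cohen_kappa_score

rater_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no", "yes", "yes"]
rater_b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]

# Raw percent agreement (does not correct for chance agreement)
percent_agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

# Cohen's kappa adjusts observed agreement for the agreement expected by chance
kappa = cohen_kappa_score(rater_a, rater_b)

print(f"Percent agreement: {percent_agreement:.2f}")
print(f"Cohen's kappa:     {kappa:.2f}")
```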
Types of inter-rater reliability
The two most common statistics for assessing inter-rater reliability are the kappa statistic, used with categorical (and, in weighted form, ordinal) ratings, and the intraclass correlation coefficient (ICC), used with ordinal and continuous ratings.
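For ordinal or continuous ratings, particularly with more than two raters, the intraclass correlation coefficient is the usual choice. The sketch below uses the pingouin library's intraclass_corr function on hypothetical long-format data (one row per subject-rater pair); the library choice and the data are assumptions for illustration, not something prescribed by the page.

```python
# Minimal sketch: intraclass correlation coefficient (ICC) for three raters
# scoring the same subjects on a continuous scale. Data are hypothetical;
# assumes pandas and pingouin are installed.
import pandas as pd
import pingouin as pg

# Long format: one row per (subject, rater) rating
df = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5],
    "rater":   ["A", "B", "C"] * 5,
    "score":   [8, 7, 8, 5, 6, 5, 9, 9, 8, 4, 4, 5, 7, 6, 7],
})

# Returns the standard ICC forms (e.g., ICC(2,1) for a single random rater,
# ICC(2,k) for the mean of k raters); choose the form that matches the design.
icc = pg.intraclass_corr(data=df, targets="subject", raters="rater", ratings="score")
print(icc[["Type", "Description", "ICC", "CI95%"]])
```

Which ICC form to report depends on whether raters are treated as fixed or random and whether single ratings or averaged ratings will be used in practice.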