
Inter-rater reliability

Measure the level of agreement between independent raters of an outcome

Inter-rater reliability assesses the level of agreement between independent raters on some performance or outcome. For inter-rater reliability to be meaningful, raters must apply a standardized, objective operational definition of the performance being assessed. If raters hold misconceptions about, or differ in their interpretation of, the rating criteria, then an inter-rater reliability analysis is not feasible.
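
As a simple illustration (a minimal Python sketch with hypothetical pass/fail ratings, not data from this page), raw percent agreement between two raters can be computed directly; note that percent agreement does not correct for agreement expected by chance, which is what the statistics discussed below address.

    # Hypothetical pass/fail ratings of ten performances by two independent raters.
    rater_a = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail", "pass", "pass"]
    rater_b = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "pass", "pass", "pass"]

    # Raw percent agreement: the proportion of performances rated identically.
    agreements = sum(a == b for a, b in zip(rater_a, rater_b))
    print(f"Percent agreement: {agreements / len(rater_a):.2f}")  # 0.80 for these ratings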

With inter-rater reliability, ratings can be made at a categorical (yes/no), ordinal (Likert-type scale), or continuous level, depending upon the process of evaluation. The number of ratings taken and the number of independent raters also play a significant role in choosing the correct test for inter-rater reliability.
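
For example, with two independent raters and a categorical (yes/no) rating, a chance-corrected agreement statistic such as Cohen's kappa is appropriate. A minimal sketch using scikit-learn's cohen_kappa_score (the ratings shown are hypothetical):

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical yes/no ratings from two independent raters on eight performances.
    rater_1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes"]
    rater_2 = ["yes", "no", "no", "yes", "no", "yes", "yes", "yes"]

    # Cohen's kappa adjusts observed agreement for agreement expected by chance.
    print(f"Cohen's kappa: {cohen_kappa_score(rater_1, rater_2):.2f}")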

Types of inter-rater reliability

There are two types of inter-rater reliability analysis: Kappa and the Intraclass Correlation Coefficient (ICC). In general, Kappa is used when ratings are made at the categorical level, while the ICC is used when ratings are made at the continuous level.
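
As a sketch of the continuous case (assuming three raters scoring the same set of performances; the data and column names are hypothetical), the ICC can be computed with the pingouin package's intraclass_corr function, which expects long-format data and reports several ICC forms (single versus average raters, consistency versus absolute agreement):

    import pandas as pd
    import pingouin as pg

    # Hypothetical continuous scores: three raters (A, B, C) each score four performances.
    data = pd.DataFrame({
        "performance": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
        "rater": ["A", "B", "C"] * 4,
        "score": [7.5, 8.0, 7.0, 5.5, 6.0, 5.0, 9.0, 9.5, 9.0, 4.0, 4.5, 3.5],
    })

    # Long-format data: one row per rating. The result lists each ICC form
    # with its estimate and 95% confidence interval.
    icc = pg.intraclass_corr(data=data, targets="performance", raters="rater", ratings="score")
    print(icc[["Type", "ICC", "CI95%"]])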

