
How do we maintain inter-rater reliability on a large-scale study?

We recommend that coders on the same study periodically hold a joint coding session to maintain inter-rater reliability. CASTL-run studies typically double-code 15-20% of observations to check inter-rater reliability. We also recommend taking a drift test every couple of months to ensure continued reliability against the master codes.

