Leveraging Data Sampling and Practical Knowledge: Field Instructors’ Perceptions About Inter-Rater Reliability Data

Published: Feb. 24, 2014

Source: Action in Teacher Education, Volume 36, Issue 1, 2014, pages 20-44

Education school administrators use inter-rater reliability analyses to judge the credibility of student–teacher assessments for accreditation and programmatic decision making.

This study examined field instructors' attitudes toward inter-rater reliability analyses.
Quantitative analysis of 230 data points collected over four semesters, combined with sampling of 14 matched rater pairs, countered weaknesses found in typical analysis methods.
The authors analyzed the university-based field instructors' discussions of what accounted for the varying correlations.
Qualitative data analysis found that seven field instructors assumed that divergent scores indicate weaknesses in the evaluation process, and they posited conflicting root causes.

The authors argue that inter-rater reliability analyses should include pair-wise sampling, so that weak and strong rates of agreement are unmasked and meaningful data conversations become possible.
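A minimal sketch of the underlying idea (not the authors' actual procedure or data): computing an agreement statistic separately for each matched rater pair can expose strong and weak pairs that a single pooled statistic hides. The ratings, pair names, and use of Cohen's kappa below are illustrative assumptions.

```python
# Illustrative sketch only: hypothetical ratings, not the study's data.
# Shows how a pooled agreement statistic can mask weak and strong rater pairs.
from collections import Counter

def percent_agreement(a, b):
    """Proportion of items on which two raters gave the same score."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(a)
    p_obs = percent_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    p_exp = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical matched pairs: (field instructor scores, second rater scores)
pairs = {
    "pair_A": ([3, 4, 4, 2, 3, 4], [3, 4, 4, 2, 3, 4]),   # strong agreement
    "pair_B": ([3, 4, 4, 2, 3, 4], [4, 2, 3, 3, 4, 2]),   # weak agreement
}

# Pair-wise view: each matched pair gets its own statistic
for name, (r1, r2) in pairs.items():
    print(name, round(cohens_kappa(r1, r2), 2))   # pair_A 1.0, pair_B -0.5

# Pooled view: concatenating all pairs into one analysis blurs the difference
all_r1 = sum((r1 for r1, _ in pairs.values()), [])
all_r2 = sum((r2 for _, r2 in pairs.values()), [])
print("pooled", round(cohens_kappa(all_r1, all_r2), 2))  # a middling 0.25
```

In this toy example the pooled kappa of roughly 0.25 conceals that one pair agrees perfectly while the other disagrees systematically, which is the kind of variation pair-wise sampling is meant to surface.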

Updated: Dec. 23, 2014