Evaluating Special Education Teachers’ Classroom Performance: Rater Reliability at the Rubric Item Level

Published: Nov. 01, 2018

Source: Teacher Education and Special Education, 41(4), 263-276
(Reviewed by the Portal Team)


The purpose of this study was to examine both the rubric items and raters in the context of special education teachers’ classroom observations.
In this study, the authors asked both school administrators and peer raters to score special education teachers’ classroom instruction using rubric items designed to reflect evidence-based practices for teaching students with disabilities. This design allowed them to investigate the limitations school administrators may have in understanding and recognizing the type of instruction they should be seeing in a special educator’s classroom.
They also compared administrators’ scores with those of peer raters, who arguably have more experience with, and knowledge of, teaching students with disabilities.

Method
 

Teacher Participants

A total of 19 special education teachers from urban and suburban schools in California and Idaho participated in the study.
Participants from both California and Idaho possessed valid special education teacher credentials in their respective states, were delivering instruction to students with disabilities in a classroom setting at the time of data collection, and were working predominantly with students with mild to moderate disabilities.
 

Rater Participants
Special educator peer raters - Five special education teachers participated as raters in two sessions to score their peers’ video-recorded instructional lessons.
The peer raters represented a range of experience across content areas, settings, and grade levels.
Administrator raters - School administrators without prior experience in special education were recruited as raters of the special education teachers’ video-recorded classroom lessons. They were required to have experience as school principals or assistant principals, because those roles represent the professionals who typically perform teacher evaluations.
The participant raters were required to have at least 5 years of experience within the field of school administration and at least 1 year of experience evaluating teacher performance using observation rubrics.
A total of three school administrators from California consented to and participated in the instructional video rating session.
Rubric items - This study included the seven rubric items that best aligned with the definition of effective special education teaching proposed by Jones and Brownell (2013).

Findings
In a study comparing school administrators’ and peers’ ratings of general education teachers’ instruction, Ho and Kane (2013) found that administrators were more reliable than peer raters. In the present study, the authors found that the peer raters were far more reliable than administrators, suggesting that knowledge of special education instructional practices may affect rater scores.
Although the peers in this study were experienced special education teachers who received training and education specific to teaching students with disabilities, the school administrators’ knowledge of special education came solely via exposure on their respective campuses.
According to the authors, a lack of formal training and education in special education may have hindered the administrators’ ability to produce reliable scores on four items: “the teacher makes effective use of time,” “the teacher appears to have a solid understanding of the content,” “the teacher implements effective instructional practices,” and “the teacher effectively responds to student needs.”
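
The review does not report which agreement statistic the study used to compare raters. Purely as an illustrative sketch (with invented 1-4 scores and an unweighted Cohen’s kappa, which may well differ from the study’s actual analysis), the snippet below shows one common way to quantify how closely two raters agree across a set of rubric items:

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    # Unweighted Cohen's kappa for two raters scoring the same set of items.
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(freq_a) | set(freq_b))
    return 1.0 if expected == 1.0 else (observed - expected) / (1 - expected)

# Hypothetical 1-4 scores on seven rubric items for one lesson (invented for illustration)
peer_rater  = [3, 4, 3, 2, 4, 3, 3]
admin_rater = [3, 3, 2, 2, 4, 4, 3]

exact_agreement = sum(a == b for a, b in zip(peer_rater, admin_rater)) / len(peer_rater)
print(f"Exact agreement: {exact_agreement:.2f}")   # share of items scored identically
print(f"Cohen's kappa:   {cohens_kappa(peer_rater, admin_rater):.2f}")  # chance-corrected agreement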
 

Future Directions
The authors note that the administrators in this study showed some promise in their ability to score reliably, though their performance varied depending on the rubric item.
For school administrators without experience or expertise in special education, using rubric items designed for general educators may be problematic when applied to special educators.
They suggest that it would be helpful to define dimensions common to all teachers (e.g., articulating a lesson objective), and to distinguish those that may be unique to special education (e.g., individualized instruction).
In doing so, some rubric items from commonly used instruments could be maintained, while supplemental items could be introduced for special educators’ observations and evaluations.

References

Ho, A. D., & Kane, T. J. (2013). The reliability of classroom observations by school personnel (MET Project Research Paper). Seattle, WA: Bill & Melinda Gates Foundation.

Jones, N. D., & Brownell, M. T. (2013). Examining the use of classroom observations in the evaluation of special education teachers. Assessment for Effective Intervention, 39, 112-124. doi:10.1177/1534508413514103 

Updated: Jul. 25, 2019