Lost in Translation: Using Video Annotation Software to Examine How A Clinical Supervisor Interprets and Applies A State-Mandated Teacher Assessment Instrument

Oct. 10, 2009

Source: The Teacher Educator, 44(4): 217–231, 2009.
(Reviewed by the Portal Team)

This case study examines the reasoning of a clinical supervisor as she assesses preservice teacher candidates with a state-mandated performance assessment instrument.
The supervisor's evaluations were recorded with video annotation software, which allowed her to capture her observations in real time. The resulting annotations included the original teaching videotape plus a "commentary track" that followed the action as it unfolded. The study reveals some of the inherent challenges in clinical supervision and in using a state's mandated performance rubrics to evaluate teacher competencies. In their data analysis, the authors explored the following research questions:
How does a clinical supervisor use a mandated performance assessment instrument to evaluate teacher candidates?
On what basis does she make her evaluative judgments, and how does she support her assertions?


This case study took place during a 10-week student teaching internship in an undergraduate teacher education program at a small university in the Pacific Northwest. A clinical supervisor, "Felicia," was asked to annotate the teaching videotapes of three preservice teachers. Felicia had spent her career as a special education teacher and administrator at the upper elementary and middle school levels, and in retirement was supervising teacher candidates.

Methods and Data Sources

The authors used qualitative research methodologies to investigate how the clinical supervisor used video annotation software as she evaluated the performance of three student teaching interns. Data sources for this study included digital videotapes of the teacher candidates’ lessons, the accompanying annotations created by the clinical supervisor as she viewed the candidates’ teaching videotapes, and a semi-structured interview after the annotations were completed.


Findings indicate that the clinical supervisor found it difficult to interpret the rubric criteria, often made tenuous claims about candidates' performance, and tended to require students to design lessons that were artificial demonstrations of mandated competencies. Findings also suggest that the difficulties the clinical supervisor faced were likely connected to inadequate professional development in the use of the state-mandated performance assessment instrument.

The article concludes with a discussion of the need for better professional development for clinical supervisors, given their important role in preparing tomorrow's teachers, and suggests areas for future research.

Updated: Oct. 21, 2009