Source: Teaching and Teacher Education, Volume 40 (July 2014), pp. 10-21.
(Reviewed by the Portal Team)
This article examines how primary school mentor teachers made their decisions regarding teacher candidates' practicum performance.
Method
The participants were 18 primary school mentor teachers from four primary schools in New Zealand.
Data were collected through four vignettes of fictional teacher candidates’ (TC) practicum performance, which were provided to the mentor teachers.
Each was written as the type of summary statement that might be written on the formal evaluation of a TC’s practicum.
Furthermore, hour-long interviews took place with the participants at their schools.
The mentor teachers’ explanations for their decisions gave the authors access to their ‘cue utilisation validities’: how they used the cues they identified.
Within the participant group, some appeared to emphasise personal attribute dimensions, others professional practice dimensions; for still others it was difficult to determine a preference.
There was, however, evidence that the mentors did not emphasise one dimension to the exclusion of other cues, with weaker cues being used to moderate their decisions.
Mentors differed in what they believed could be learnt when the TCs became beginning teachers.
A significant number of the mentor teachers believed that content knowledge, management strategies and aspects of the use of evidence for planning could be learnt at a later date.
Others, however, were sufficiently concerned to fail a candidate if these aspects of practice were not well demonstrated during the practicum.
Many mentors drew on their own prior experience, especially about their own practice as beginning teachers, and this became a source of variability between judges.
There were hints throughout the interviews that, in general, mentor teachers were reluctant to fail even imaginary TCs, describing themselves as ‘cruel’ or ‘horrible’ for suggesting fail grades. They suggested ways that deficits could be made up and described how they might help someone learn the things they did not yet know or could not yet do.
The mentors saw opportunities for TC learning and were confident in their own ability to teach the TCs to be effective.
Overall, the judgment-making in this study was considered, careful and reasoned, and yet widely variable.
These results suggest that individual judges vary in their views about what is ‘essential’ and what can be ‘excused’ or ‘fixed later’.
There was also some evidence of internal dissensus for individual mentors, leading to confusion around assessment of TC practice.
These findings show that the apparently simple consensus reflected in generalised graduating teacher standards statements does not capture the complexity of the judgment context. Agreement is only one possible outcome of an interaction when people engage in discussion with the aim of understanding others’ perspectives on, in this case, a TC’s readiness to teach.
However, the discussants should be able to confidently defend their position even if others do not agree with them.
Discussants could explore differences, consider others’ opinions, and see these as positive rather than negative.
Hence, the authors argue that such dissensus is inevitable in complex social decision-making and therefore needs to be used productively to help make more reliable judgments.
There are implications from this variability for TCs and the profession at large.
The authors argue that dissensus between those charged with assessing TCs’ readiness to teach is not necessarily negative and can be framed as potentially opening opportunities for professional growth if collaborative approaches to evaluation are taken.
Historical triadic assessment processes were also predicated on the assumption that consensus would be reached by the triad.
This study indicates that this collaborative decision-making approach could be strengthened by including additional professionals such as the principal and other teaching team members and by building in opportunities for rich productive discussion that acknowledge and explore dissensus among the judges.
Such discussions would explore the reasons for the different decisions and address sources of variability within judges, for example mentors’ prior experiences.
Additionally, the assessment needs to take place over an extended period of time, allowing for early formative feedback and final summative decisions.