Source: Journal of Education for Teaching, 45:5, 567-584
(Reviewed by the Portal Team)
This study was designed to explore the use of video in developing pre-service teachers’ professional vision by examining Czech pre-service teachers’ shifts of focus in response to different kinds of video, in the context of their disciplines. In particular, it was intended to focus on one main research question and one subsidiary one.
First, does the source of a video (coming from their own teaching, or from other teachers) have any impact on the development of professional vision amongst pre-service teachers on a video intervention programme? Second, does the development of professional vision vary by subject?
The participants were pre-service teachers from one faculty in the Czech Republic.
Two groups were future elementary teachers (educating pupils aged 6 to 12) in their 4th year of a 5-year combined bachelor’s and master’s degree: one focused on social sciences and one on art. Two groups consisted of future lower and upper secondary teachers (for pupils aged 12 to 19) in the first semester of a two-year master’s degree: one specialising in mathematics and the other in biology.
Participants who chose the new video module were randomly assigned to intervention groups (‘own’ or ‘public’ video), and a comparison group was formed by randomly selecting students from other modules.
Each group consisted of four students (except for the two intervention art groups and the public video biology group, which each had five).
There were 17 participants in the ‘own video’ group, 18 in the ‘public video’ group and 16 in the comparison group.
The structure of the video interventions was designed to mirror closely the approach in Simpson, Vondrová, and Žalská (2018).
It was based on situated cognition learning theory: whole lessons and shorter clips were used, with an emphasis on cooperative learning through in-person and online opportunities for discussion.
A virtual learning environment (VLE) module workshop was used: participants wrote answers to a task first and then were given their peers’ answers to comment on.
The video interventions were spread across the semester, scheduled around each group’s time constraints.
Each session consisted of three 45-min lessons led by the usual course tutor (their subject education teacher).
Before each session, participants in the intervention groups were asked to view and comment on whole lesson videos; in each session, these comments were discussed and shorter clips were examined.
The key difference was the choice of video: in the ‘public video’ group, the instructor selected from widely available videos.
In the ‘own video’ group, the instructor chose from the videos of lessons conducted by group participants.
Measuring professional vision development
Before and after the intervention, participants were asked to watch a public video of a lesson from their subject and write a commentary.
The tasks were completed at home with answers submitted through the VLE. Participants were aware that no module marks were assigned for these tasks.
Analysis of data
Similarly to Simpson, Vondrová, and Žalská (2018), the authors used probably the most widely adopted and elaborated framework: that developed by van Es and Sherin (2010).
Each response was assigned an identifier (by an assistant external to the coding team), so coders were blind to group and stage.
Each response was divided into units of analysis, each representing an observation that made sense on its own: this might be a whole sentence, or separate phrases (if they contained a shift in attention).
Each unit was assigned a code for each dimension using a coding manual detailed in Simpson, Vondrová, and Žalská (2018).
Discussion and conclusions
The subsidiary research question asked whether professional vision varied by subject.
The picture is mixed: there were apparently some key differences for the elementary pre-service teachers specialising in art, who tended to be more specific, saying more about the subject and classroom management and less about pedagogy. The differences were less stark for the secondary biology participants, who were slightly more evaluative and wrote less about students.
The authors note that these results should be treated with caution.
One cannot disaggregate participants’ subject from the content of the videos they watched: understandably, art participants watched art lessons, mathematics participants watched mathematics lessons and so on.
It cannot be ruled out that the art videos gave less opportunity to talk about pedagogy than the mathematics videos, rather than that the art participants’ professional vision is different from the mathematics participants’ vision.
However, the authors note that RQ2 was a subsidiary question and rendered somewhat moot by the result of their main question concerning whether different sources of the videos (‘own’ or ‘public’) were associated with different changes in professional vision.
There were no effects of intervention type nor, more importantly, of pre- versus post-intervention (except for a small change in ‘theorising’, which applied equally to the comparison group).
Thus, before discussing whether the participants’ development of professional vision varied by video type, it must be asked whether there was any noticeable development at all.
That does not appear to be the case.
This starkly contrasts with much of the literature motivating this study, which showed consistent movement away from writing about oneself and from evaluating, towards writing specifically about subject matter and pupils.
One might ask whether a similar tendency existed here but simply failed to reach statistical significance; however, this is not supported by the data.
The authors suggest that they could thus think of the study as a divergent replication, at odds with existing research.
It is important to note that this study was not designed as a replication; the design assumed that, as previous research showed, the intervention would lead to clear change.
The research question was focussed on how that change might be moderated by the type of video.
Given the study closely followed the methods of Simpson, Vondrová, and Žalská (2018) (itself grounded on the methods in the literature following the work of Sherin and van Es), explanations for the divergence of these results are unlikely to lie in differences in process or task design.
The authors suggest treating the consequences of the study as if it were a traditional replication, and examining other possible explanations for the failure to replicate previous results.
Put together with Star, Lynch, and Perova (2011), the authors’ study leads to a valuable hypothesis for further study: there may be a ‘sweet spot’ in which video interventions support the development of professional vision.
Video interventions of this type may fail when participants’ professional vision is either underdeveloped or already well developed.
Star et al. argue that unless people have developed a baseline vision, they cannot build on it.
This study adds the suggestion that there may also be an upper limit to how far video (at least as used here) can develop professional vision: once teachers have reached that level, this type of intervention may no longer be associated with improvement.
Simpson, A., N. Vondrová, and J. Žalská. 2018. “Sources of Shifts in Pre-service Teachers’ Patterns of Attention: The Roles of Teaching Experience and of Observational Experience.” Journal of Mathematics Teacher Education 21 (6): 607–630. doi:10.1007/s10857-017-9370-6
Star, J. R., K. Lynch, and N. Perova. 2011. “Using Video to Improve Preservice Mathematics Teachers’ Abilities to Attend to Classroom Features: A Replication Study.” In Mathematics Teacher Noticing: Seeing Through Teachers’ Eyes, edited by M. G. Sherin, V. R. Jacobs, and R. A. Philipp, 117–133. New York: Taylor & Francis
van Es, E. A., and M. G. Sherin. 2010. “The Influence of Video Clubs on Teachers’ Thinking and Practice.” Journal of Mathematics Teacher Education 13 (2): 155–176. doi:10.1007/s10857-009-9130-3