Source: Journal of Second Language Teaching & Research, 7:1, 58-82
(Reviewed by the Portal Team)
The research presented in this article investigated the impact of analytic rubrics on peer feedback tasks in a Teaching English to Speakers of Other Languages (TESOL) practicum. Practicum students participated in microteaching demonstrations as both teachers and students and reflected on their experiences in guided discussions with peers and teachers.
The researchers hypothesized that rubric use would improve preservice teachers’ ability to identify the indicators of best practices for second language (L2) lesson planning and lesson delivery that were the focus of instruction in the course.
They also expected that thematic analysis of focus group interviews with participants and of written comments would add depth and richness to the explanation of the quantitative results.
Based on the potential merit of using analytic rubrics during peer feedback tasks, as well as the relative shortage of studies that address this practice in second language teacher education (SLTE), the following research questions guided the current study:
1. To what extent does analytic rubric use during peer feedback tasks impact preservice teachers’ ability to recognize indicators of best practice associated with L2 lesson planning?
2. To what extent does analytic rubric use during peer feedback tasks impact preservice teachers’ ability to recognize indicators of best practice associated with L2 lesson delivery?
3. What are preservice teachers’ beliefs, attitudes, and perceptions about the usefulness of rubrics during peer feedback tasks for their own development as teachers?
Method
To answer Research Questions 1 and 2, the authors employed an experimental research design to determine the extent to which analytic rubric use during peer feedback tasks impacted preservice teachers’ abilities to recognize indicators of best practice associated with L2 lesson planning and delivery.
The indicators were selected for the rubric because they were the focus of the direct instruction during the first four weeks of the course.
To answer Research Question 3, they transcribed and analyzed data from the focus groups.
Participants - The participants were 53 preservice teachers in a semester-long L2 teaching practicum course at a large research university in the United States.
Course - The TESOL practicum course provided students with opportunities to:
(1) observe teachers in classrooms with English learners (ELs),
(2) plan lessons for ELs or for courses that include ELs,
(3) deliver lessons in short teaching demonstrations to the practicum instructor and their peers (hereafter referred to as microteaching demonstrations),
(4) receive constructive feedback on microteaching demonstrations and lesson plans,
(5) receive instruction on rubric use, and
(6) participate in guided discussions about teaching. The course met 180 minutes weekly for 16 consecutive weeks.
Procedures - All participants received four weeks (180 minutes a week) of direct instruction on lesson planning and delivery, which specifically focused on the constructs and indicators presented in the rubrics.
Modeling was an important feature during the direct instruction phase.
In addition, preservice teachers critiqued videotaped teaching demonstrations using the rubrics, reviewed example lesson plans, and participated in guided discussions with their peers. After four weeks of direct instruction, participants were randomly assigned to either an experimental group (n = 26) or a control group (n = 27).
Each group met for 90 minutes weekly for 11 weeks.
Each student developed two lesson plans and presented a portion of each lesson to peers in two microteaching demonstrations.
In the experimental group, participants used rubrics with the indicators during peer feedback tasks, and they were asked to assign a numeric value to the performance on each indicator.
They were also encouraged to write comments on the rubric in addition to assigning a numeric value.
In the control group, participants used a modified rubric without the indicators and wrote comments to their peers based on the constructs.
Participants in both groups also provided feedback on drafts of the lesson plans, with the experimental group using the rubrics with the indicators and the control group using the modified rubrics with only the constructs.
Participants in both groups had access to the rubrics during the creation of lesson plans and the preparation of their microteaching demonstrations.
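To make the contrast between the two feedback conditions concrete, the sketch below shows one way the rubric variants might be represented as simple data structures. This is purely illustrative: the construct, indicators, and 1-4 scale are invented, since the review does not reproduce the actual rubric items.

```python
# Hypothetical illustration of the two rubric variants; the construct,
# indicators, and rating scale are invented, not taken from the study.
experimental_rubric = {
    "Lesson delivery": {                     # a construct
        "indicators": [                      # each rated numerically by peers
            "Gives clear, staged instructions",
            "Checks learner comprehension before moving on",
        ],
        "scale": (1, 4),                     # assumed rating range
        "comments": "",                      # optional written feedback
    },
}

control_rubric = {
    "Lesson delivery": {                     # construct only, no indicators
        "comments": "",                      # feedback on the construct as a whole
    },
}
```

The key difference is that the experimental variant exposes the indicators for numeric rating, while the control variant elicits written comments at the construct level only.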
At the end of each class period, participants engaged in guided discussions.
The discussions were integral to the development of a reflective practice and were intended to build a climate of trust and support among peers in which they could talk openly about teaching and share their successes, challenges, and concerns.
After the guided discussions, the rubrics were given to the peers who had presented so that they could benefit from both oral and written feedback.
The posttest for lesson planning - This posttest was administered at the end of the 16 weeks of instruction.
The posttest for lesson delivery - This posttest was also administered at the end of the 16 weeks of instruction.
Focus groups - To obtain qualitative data about preservice teachers’ beliefs and perceptions about the usefulness of peer feedback for their own development as teachers, a subset of participants was asked to meet with at least one of the researchers in small focus groups at the end of the 16 weeks.
Results and discussion
The authors’ first and second research questions explored the extent to which analytic rubric use during peer feedback tasks impacted preservice teachers’ ability to recognize indicators of best practice associated with L2 lesson planning and delivery.
The quantitative data analyses supported the use of analytic rubrics during peer feedback in a TESOL practicum.
For both lesson planning and lesson delivery, participants in the experimental group recognized significantly more indicators of best practice than participants in the control group.
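As a minimal sketch of how such a group comparison might be carried out, the snippet below runs an independent-samples t-test on simulated posttest scores. The choice of test is an assumption for illustration, as the review reports a significant difference without naming the statistic, and the scores are randomly generated placeholders rather than the study’s data.

```python
# Minimal sketch: comparing posttest scores (indicators recognized) between
# the experimental (n = 26) and control (n = 27) groups. The t-test is an
# assumed choice, and the scores are simulated, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
experimental = rng.normal(loc=12, scale=2, size=26)  # placeholder scores
control = rng.normal(loc=9, scale=2, size=27)        # placeholder scores

t_stat, p_value = stats.ttest_ind(experimental, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < .05 -> significant group difference
```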
While these results are encouraging, it is also important to consider the results in context.
It seems that the use of analytic rubrics made the indicators more transparent for the experimental group; nevertheless, participants were only able to identify about half of the indicators they had been working with during direct instruction and through the peer observations.
As teacher educators, the authors were initially disappointed with this result until they asked themselves whether the expectations they had for the development of preservice teachers’ skills were overly ambitious.
The quantitative data confirm that preservice teachers need more time and experience observing and participating in teaching to develop their skills in recognizing indicators of best practice, and, certainly, it is reasonable to conclude that teachers will not learn everything they need from their initial education.
Further studies should analyze experienced teachers’ skills in identifying indicators of best practice to determine a reasonable trajectory for the development of preservice teachers’ skills.
As teacher educators, the authors are also cognizant of the fact that preservice teachers’ abilities to identify indicators of best practice as a result of the peer feedback process may also depend on the practicum instructor, because it is this individual who is responsible for the initial modeling of the targeted skills and for demonstrating explicitly how to apply them in particular contexts (Ploegh, Tillema & Segers, 2009).
The results of the current study support findings in the existing literature that rubric use promotes learning by making performance criteria explicit (Hack, 2015; Jonsson & Svingby, 2007).
While rubrics may make learning goals transparent, that benefit alone might not be sufficient to consider them universally effective across contexts (Wollenschläger et al., 2016).
The authors also wanted to examine the usefulness of rubrics beyond the benefit of transparency.
The authors’ third research question addressed preservice teachers’ beliefs, attitudes, and perceptions about the usefulness of rubrics during peer feedback tasks for their own development as teachers.
Qualitative data analysis provided additional insights into how the use of analytic rubrics in peer feedback tasks influenced preservice teachers’ perceptions.
The authors report that the themes derived from the qualitative data indicate that preservice teachers demonstrated a positive orientation towards the collaborative teacher practices that were embedded in the TESOL practicum—peer observations, guided discussions about teaching, and the use of analytic rubrics in the peer feedback process.
Data analyses also revealed tensions inherent in the process of providing peer feedback: preservice teachers were reluctant to give constructive or critical feedback to their peers, while at the same time wanting to receive such feedback themselves.
The authors suggest that to resolve this tension, teacher educators must work with preservice teachers to involve them in discussions about teacher development and the rationale behind the use of peer feedback, as well as provide modeling for the practices in which they want preservice teachers to participate (Brew, 2009).
References
Brew, A. (2009). Academic research and researchers. London, England: McGraw-Hill Education (UK).
Hack, C. (2015). Analytical rubrics in higher education: A repository of empirical data. British Journal of Educational Technology, 46(5), 924-927. doi:10.1111/bjet.12304
Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: Reliability, validity and educational consequences. Educational Research Review, 2(2), 130-144. doi:10.1016/j.edurev.2007.05.002
Ploegh, K., Tillema, H. H., & Segers, M. S. (2009). In search of quality criteria in peer assessment practices. Studies in Educational Evaluation, 35(2), 102-109.
Wollenschläger, M., Hattie, J., Machts, N., Möller, J., & Harms, U. (2016). What makes rubrics effective in teacher-feedback? Transparency of learning goals is not enough. Contemporary Educational Psychology, 44-45, 1-11. doi:10.1016/j.cedpsych.2015.11.003