How do pre-service teacher education students respond to assessment feedback?

November 2020

Source: Assessment & Evaluation in Higher Education, 45:7, 913-925

(Reviewed by the Portal Team)

This project sought to investigate the feedback given to students on a specific assessment task in a postgraduate initial teacher preparation course at a small regional university in Australia.
In Australia, the Course Experience Questionnaire, designed by Krause et al. (2009), reported that graduates are less satisfied with assessment and feedback than with any other feature of their courses.
Hence, the author investigated ways in which assessors can promote students' knowledge of assessment practices, such as shaping feedback using evidence gathered from the students themselves via survey.
As a pre-service teacher educator coordinating assessment courses, the author was interested in the impact of his instruction on pre-service teachers' assessment literacies.
As this cohort was a group of students training to be assessors, the author speculated that there might be a heightened engagement with the feedback process, given the importance of this aspect of assessment in their future careers upon graduation.
The research questions in this study were:
• How do students perceive the assessment feedback they receive?
• Are pre-service teacher education students motivated to engage with feedback?
• Are pre-service teacher education students open to new feedback mechanisms?

The data-gathering method adopted in this study was that used by Ferguson (2011), specifically a survey to measure student reactions to a feedback process.
The context for this study was a small regional university.
The pre-service teacher education programme targeted was a one-year Graduate Diploma in Education, and the course examined focused on learning theories.
The task itself required students to evaluate various learning theories.
Assessment feedback on a single task was provided to students in the form of electronic comments via the university Learning Management System (LMS).
Although a criteria sheet was written for the task, disseminated to students and discussed well before the task was due, the criteria sheet itself, with marker annotations, was not returned to students.
Students received a numerical score for the task (a university requirement) via the LMS and an on-balance judgement of quality in terms of the standard achieved (e.g. high distinction, distinction, credit, pass, fail).
Rather than correcting errors via written annotations on student scripts, the assessor flagged errors in each submission with a question back to the student, asking them to identify what was wrong with the 'offending' text.
This question was typically: ‘What is wrong with this statement?’
The questions referred to both academic literacy-related errors (e.g. in text citations, grammar, spelling) and errors in content.
Students were asked to make an appointment to discuss the feedback.
It was hoped that these methods would reveal students' dispositions towards engagement with feedback and would encourage them to pay attention to the criteria sheet used to make assessment judgements.
In order to ascertain how students reacted to this process, a survey was developed and administered to two tutorial groups, comprising 40 Diploma of Education postgraduate pre-service teacher students, representing approximately half of the total cohort.
Immediately after the results had been released, students were invited to complete the survey anonymously.
The survey consisted of set questions about feedback answered on Likert scales, plus a free-text section.
The responses were analysed using simple descriptive statistics, and recurring themes were identified from the free text responses.
These themes were confirmed by an independent colleague.

Results and discussion
The results of this study reveal a great deal of confusion and contradiction within a single cohort of students.
Results indicated that a large number of responses to the feedback intervention referred to not receiving a criteria sheet as part of feedback, and to dissatisfaction with this omission.
Criteria sheets were considered useful for identifying areas for improvement, showing a link between what was expected and what the student achieved, and identifying precisely the areas where marks were deducted.
A major theme was that students approve of a process that combines both written feedback and a criteria sheet.
On the other hand, a significant number of students considered current feedback processes inconsequential or of limited value, and instead valued having to think about their errors without necessarily being told what they were.
Students identified that they like to receive written feedback as they consider this very helpful and consistent.
This result is in keeping with Ferguson’s study (2011), which identified students’ most preferred feedback option as written, timely, personal, constructive and positive.
Although some students identified written feedback as a key part of the overall feedback system, most students stated that feedback was most effective when written comments and criteria sheets were used in combination.
This is in keeping with the recommendation made by Dowden et al. (2013, 349), ‘in order to accommodate students’ emotional responses, effective written feedback should be aligned with pedagogies which specifically include the development of rich dialogue within the teaching and learning context’.
Meeting the assessor helped to clarify the written feedback and to identify where improvements to the process could be made.
This is in keeping with recent research emphasising the importance of dialogue between student and assessor (Crisp 2007; Nicol 2010; Ferguson 2011; Blair and McGinty 2013), and with Nicol and Macfarlane Dick’s (2006, 203) effective feedback framework, specifically the recommendation that feedback should be a collaborative process that ‘encourages teacher and peer dialogue around learning’.
Several students commented specifically on the 'offending' text, all agreeing that it was difficult to identify.
Students believed that if they could identify the 'offending' text, they would not have written it in their assessment piece in the first place.
One student suggested that it might be more helpful to provide the written feedback before asking students to identify the offending text, so that the written feedback could serve as a prompt.
One student wrote ‘I like to be told what I did wrong’ and another wrote ‘I don’t like having to guess what you want or what I did wrong’.
The author was interested in testing a novel and unconventional feedback-related intervention that placed the student at the centre of the feedback process.
Despite the range of responses to this particular feedback process, two things were clear:
(i) the student is in the best position to judge the effectiveness of feedback; and
(ii) a student does not always recognise or act upon the benefits feedback provides.
Similarly, Nicol (2009, 337) contends that ‘students should be given a much more active and participative role in assessment processes’.
One of the constraints to this participation is the real or perceived power relationship between student and assessor.
Despite the literature continuing 'to remind us that students are unhappy with feedback and that what they would really like is more verbal feedback' (Blair and McGinty 2013, 468), very few students (only five of the 40 in this study) took up the face-to-face feedback option.
Despite its limitations, this study contributes to the growing knowledge of the various elements of the feedback loop, and clearly indicates that additional ways for assessors to provide feedback to their students must be developed.
The author’s results confirm that:
(i) feedback is personal and unique to each individual;
(ii) there is no single ‘best method’ for providing feedback; and
(iii) feedback processes must be customised to individual students.
For students to be able to learn from and apply the lessons of feedback, they must possess the background knowledge identified by Sadler as 'task compliance, quality and criteria, and also develop a cache of relevant tacit knowledge' (2010, 535).
Induction into the ways of teaching, including the fundamental core business of giving feedback, is a prerequisite in teacher education for developing tacit assessment knowledge in relation to giving and appraising feedback.

Blair, A., and S. McGinty. 2013. “Feedback-dialogues: Exploring the Student Perspective.” Assessment & Evaluation in Higher Education 38 (4): 466–476.
Crisp, B. 2007. “Is it Worth the Effort? How Feedback Influences Students’ Subsequent Submission of Assessable Work.” Assessment & Evaluation in Higher Education 32: 571–581.
Dowden, T., S. Pittaway, H. Yost, and R. McCarthy. 2013. “Students’ Perceptions of Written Feedback in Teacher Education: Ideally Feedback is a Continuing Two-way Communication that Encourages Progress.” Assessment & Evaluation in Higher Education 38 (3): 349–362.
Ferguson, P. 2011. “Student Perceptions of Quality Feedback in Teacher Education.” Assessment & Evaluation in Higher Education 36 (1): 51–62.
Krause, K., R. Hartley, R. James, and C. McInnis. 2009. The First Year Experience in Australian Universities: Findings from a Decade of National Studies.
Nicol, D. 2009. “Assessment for Learner Self-regulation: Enhancing Achievement in the First Year Using Learning Technologies.” Assessment & Evaluation in Higher Education 34 (3): 335–352.
Nicol, D. 2010. “From Monologue to Dialogue: Improving Written Feedback Processes in Mass Higher Education.” Assessment & Evaluation in Higher Education 35 (5): 501–517.
Nicol, D., and D. Macfarlane Dick. 2006. “Formative Assessment and Self-regulated Learning: A Model and Seven Principles of Good Feedback Practice.” Studies in Higher Education 31 (2): 199–218.
Sadler, D. R. 2010. “Beyond Feedback: Developing Student Capability in Complex Appraisal.” Assessment & Evaluation in Higher Education 35: 535–550. 

Updated: Dec. 10, 2020