Section archive - Assessment & Evaluation
In many teacher preparation programs at institutions of higher education in the United States, pre-service teachers receive mentoring and constructive feedback during their internship placement from an experienced supervising teacher and a university coordinator. Often the feedback loop is closed by asking interns, “Was this useful?” To better answer this question, researchers employed a phenomenological approach, collecting interview and focus group data on the lived experiences, perceptions, and understandings of six pre-service special education interns during their internship. Emergent themes included satisfaction with the level of support provided by supervisory stakeholders, a feeling of isolation from peer support, special consideration of the components within the evaluation tool, and a request for additional background information about their assigned university coordinators. These lessons were aggregated, presented, and then integrated into the experiences provided to future pre-service educators.
Updated: Sep. 21, 2020
“I Didn’t Want to Make Them Feel Wrong in Any Way”: Preservice Teachers Craft Digital Feedback on Sociopolitical Perspectives in Student Texts
This qualitative multicase analysis investigated the role of “educational niceness” and “neutrality” in preservice English teachers’ feedback on sociopolitical issues in student writing. As part of the field experiences for several English Language Arts (ELA) methods courses at two universities, one urban and one rural, the teacher-researchers used Google Docs and other technologies to connect preservice teachers (PSTs) with high school writers at a geographical distance, so that urban-situated PSTs could mentor rural-situated writers and vice versa. Five methods courses over two semesters served as cases, and 12 PSTs from those courses participated in focus groups. Data included audio recordings of nine focus groups and PSTs’ digital responses to student writing. Using thematic analysis, the authors explored how PSTs responded to sociopolitical perspectives in students’ writing, both by engaging with those perspectives and by remaining neutral. Although authentic opportunities for responding to student writers supported PSTs’ critical reflection on teaching writing, analysis of PSTs’ responses indicated that such authentic practice may not be sufficient for preparing PSTs to navigate sociopolitical issues and may, in fact, exacerbate PSTs’ impulse to enact educational niceness.
Updated: Apr. 18, 2020
This sequential explanatory mixed-methods study examines the impact of analytic rubric use in peer feedback on preservice teachers’ ability to recognize indicators of best practice for second language lesson planning and lesson delivery. Fifty-three preservice teachers in a university-level, semester-long Teaching English to Speakers of Other Languages (TESOL) practicum course received direct instruction on the indicators presented in the analytic rubrics. They were then randomly assigned to control and experimental groups. The experimental group used rubrics with the indicators during peer feedback tasks, while the control group used a modified rubric without the indicators. Results from an independent-samples t-test on posttest mean scores indicated a significant difference between groups for both lesson planning and lesson delivery, favoring the experimental group. Qualitative data were also collected via written comments on the posttests and from focus-group interviews. Thematic analyses of the qualitative data yielded three key themes, including specific tensions between the type of feedback preservice teachers desired and the type of feedback they were willing to give to their peers. These findings provide further insight into the use of analytic rubrics in peer feedback practices in second language teacher education (SLTE).
Updated: Jan. 29, 2020
Evaluating Special Education Teachers’ Classroom Performance: Rater Reliability at the Rubric Item Level
In this study, 19 special education teachers in California and Idaho each contributed three video-recorded classroom lessons. Using rubric items designed to reflect efficacious instructional practices for teaching students with disabilities, school administrators and peers scored the teachers’ lessons. Rater reliability and sources of error variance were examined using generalizability theory. The authors found that peers were more reliable raters than school administrators, who did not have expertise in special education, and that the school administrators’ ratings varied at the rubric item level. The authors discuss implications for classroom observation systems.
Updated: Jul. 25, 2019
This study examines ten preservice teachers’ use of Freiberg’s Person-Centered Learning Assessment (PCLA), a self-assessment measure. The PCLA serves as an individualized resource for educators to assess their classroom teaching and learning, particularly in the affective domain. Study findings indicate that all ten student teachers identified future pedagogical changes as a result of using the PCLA, with eight specifically identifying changes in their classrooms prior to completion of the study. As explored in this study, self-assessments seem to provide novice educators with a unique form of feedback and have the potential to lead to deeper levels of pedagogical self-reflection and resulting changes.
Updated: Jun. 05, 2019
Measuring Teaching Quality of Secondary Mathematics and Science Residents: A Classroom Observation Framework
The authors report on the development of two observation rubrics—secondary math and science—that embody the aims and values of their teacher education program, specifically, equity and humanizing pedagogy, and the results of their examination of the reliability of ratings of teaching practice generated using these rubrics. They discuss the various sources of measurement error and the implications for further developing and using the observation rubric in their program.
Updated: Jun. 02, 2019
Linking Student Achievement to Teacher Preparation: Emergent Challenges in Implementing Value Added Assessment
The authors describe challenges confronted during the deployment of Louisiana’s value-added assessment of teacher preparation programs. Their discussion is organized around challenges of calculation, communication, and change. It provides information that policy makers and teacher education leaders, rather than analysts, might find useful, and focuses on the types of challenges a state or university system can expect to encounter in developing a value-added assessment. The authors describe decisions made in response to specific challenges, some that appear to have been successful and some that in retrospect appear to have been mistakes.
Updated: May. 29, 2019
Supporting University Content Specialists in Providing Effective Professional Development: The Educative Role of Evaluation
This study examines formative evaluation recommendations that the authors made to four different professional development (PD) projects over three years. The results show that formative feedback can shape PD design and implementation, and suggest that evaluation efforts can take on a new purpose: the professional development of professional developers. The authors argue that, as evaluators, they interpreted what they knew about PD from the research and acted as conduits of empirical findings to the PD project teams. Hence, their recommendations reflected their own knowledge and beliefs about PD, which, as active teacher education researchers, were well rooted in the PD research literature.
Updated: Oct. 11, 2018
The present study describes an assessment technique, named Assessment360, which can be implemented during coursework to prepare future teachers to be reflective practitioners. The study explores students’ perceptions of Assessment360. Findings suggest that students perceived Assessment360 as potentially encouraging reflection, collaboration, and feedback.
Updated: Jul. 05, 2018
Measuring Preservice Teacher Self-Efficacy in Music and Visual Arts: Validation of an Amended Science Teacher Efficacy Belief Instrument
This study aimed to adapt the well-established Science Teaching Efficacy Belief Instrument-B (STEBI-B) for preservice teachers and to pilot the new instrument to determine its validity and reliability in the Arts. The authors argue that this study offers new contributions to the field of educational measurement in the Arts, specifically in measuring primary preservice teacher self-efficacy for learning areas such as music and visual arts. The findings reveal that the Arts Teaching Efficacy Belief Instrument (ATEBI) had good internal consistency and retest reliability on the personal teaching efficacy scale. Furthermore, the instrument’s validity was supported by ANOVA results on all scales.
Updated: May. 01, 2018