Using Evidence for Teacher Education Program Improvement and Accountability: An Illustrative Case of the Role of Value-Added Measures

Published: Nov. 01, 2012

Source: Journal of Teacher Education, 63(5), pp. 318-334, 2012
(Reviewed by the Portal Team)

In this article, the authors consider what can be learned from limited forms of evidence for purposes of accountability and program improvement.
They focus on examining whether teacher value-added scores differ by type of teacher preparation institution attended and by years of teaching experience.

They begin with a review of recent research on how evidence has been used to examine the effectiveness of teacher preparation and development.

Methods
Using empirical evidence from a state with limited data capacity, they illustrate what can be learned from value-added measures as one form of evidence.
Data were gathered from administrative records maintained by the Washington State Office of the Superintendent of Public Instruction (OSPI).
The authors linked several sets of data to form a database of student, teacher, and school records, as sketched below.
Student information included gender, race/ethnicity, grade level, participation in the Free or Reduced-Price Lunch program (FRPL), primary language spoken, participation in learning assistance programs, disability status, school and district location, and test scores derived from student assessment records.
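
The article does not describe the linkage procedure itself. As a minimal sketch only, the records might be joined on shared identifiers roughly as follows; the file names and the teacher_id and school_id columns are assumptions standing in for whatever identifiers the OSPI records actually use (Python with pandas):

    import pandas as pd

    # Hypothetical OSPI extracts; the file names, identifiers, and column
    # layouts are assumptions, not details given in the article. Assume the
    # student file carries teacher_id and school_id keys, and the teacher
    # and school files share no other column names with it.
    students = pd.read_csv("student_records.csv")  # demographics, FRPL, scores
    teachers = pd.read_csv("teacher_records.csv")  # certification, experience
    schools = pd.read_csv("school_records.csv")    # school/district location

    # Join students to their teachers, then attach school information,
    # producing one row per student-teacher pairing.
    linked = (
        students
        .merge(teachers, on="teacher_id", how="inner")
        .merge(schools, on="school_id", how="left")
    )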

The teacher data came from administrative data sets of personnel and certification records. For each teacher, the available data included gender, race/ethnicity, age, years of teaching experience, highest degree, recommending agency of the initial teaching certificate, and school and district location.
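
The review does not reproduce the authors' estimation. Purely as an illustration of the kind of value-added specification common in this literature, the sketch below regresses a student's current test score on the prior-year score, student characteristics, teacher experience, and indicators for the institution that recommended the teacher's certificate, using the hypothetical linked table from above. The column names and the model itself are assumptions, not the authors' actual analysis:

    import statsmodels.formula.api as smf

    # Keep rows with the variables needed for the illustrative model.
    df = linked.dropna(subset=["score", "prior_score", "teacher_id"])

    # A simple covariate-adjustment value-added specification (an assumed
    # illustration, not the authors' exact model): current score on prior
    # score, student characteristics, teacher experience, and indicators
    # for the recommending institution. Standard errors are clustered by
    # teacher, since students are nested within teachers.
    model = smf.ols(
        "score ~ prior_score + frpl + C(grade_level)"
        " + years_experience + C(recommending_agency)",
        data=df,
    ).fit(cov_type="cluster", cov_kwds={"groups": df["teacher_id"]})

    # Institution contrasts: differences in adjusted student outcomes by
    # the institution that recommended the teacher's initial certificate.
    print(model.params.filter(like="recommending_agency"))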

Discussion

This study shows the potential of value-added models as an additional form of evidence that can inform our understanding of how effectively teacher preparation programs produce teachers who positively affect student learning.
Value-added measures can assist in identifying teachers who may be more effective, as well as those who may need extra assistance and support.

Analyses involving teacher preparation institutions should include a variety of program variables, including the nature and length of field placements, program characteristics and types (e.g., traditional or alternative), and methods to address issues of selectivity.

The review of the empirical research identified common elements that are important to consider in examining measures of effectiveness in teacher training and development.
Some of these include teaching experience, candidate selectivity, subject matter knowledge, and specific program features of professional preparation.
From the exercise of using a state's existing data capacity to explore some of these elements, the authors learned that teacher preparation institutions differ on some student learning outcomes, though the extent to which program features or other mitigating factors contribute to those differences is often unknown.
They argue that a richer data set could help address possible biases in existing measures.
For institutions of higher education, one application of value-added measures is to analyze specific programs for malleable factors and education outcomes, which could help identify and develop beneficial interventions for teacher training.
Accountability and improvement efforts are not well served by simple measures used out of convenience.
Rather, they require a deeper examination of the variety of elements that must be taken into account, along with an acknowledgment that teacher preparation programs vary in their individual features and purposes.
Finally, consideration must be given to the appropriate use of data for particular purposes, whether accreditation, licensure, or improvement.

Conclusion

The authors conclude by arguing for collective responsibility among teacher education institutions, professional organizations, and state and local agencies as they respond to the demand for increased accountability.
The challenge for preparation programs is to do the hard work of collectively identifying and developing measures that better reflect unique program features, and of building capacity for reliable data collection.
This will require cooperation and agreement among programs and institutions about which elements matter, and which can and should be consistently and reliably obtained across settings.
State agencies and accreditation bodies also share in the responsibility to improve the measures often used for student and teacher learning and find more robust and comprehensive ways of assessing the effectiveness of teaching.
This suggests the need for collective responsibility on the part of teacher preparation institutions, professional organizations, state agencies, and K-12 districts and schools for the quality, availability, and appropriate uses of data for purposes of internal and external accountability.

Developing and implementing a comprehensive and coherent approach to the collection and use of evidence in improving teacher preparation programs provides two additional benefits: increased transparency and joint accountability.
First, the sharing and use of evidence from multiple sources can help increase public understanding of the complexities of recruiting, preparing, and supporting the next generation of teachers, and can help policy makers engage in debates that are informed by evidence.
Second, the development of a consistent base of evidence can help specify the ways in which the variety of state, regional, and local institutions involved in teacher preparation and development share accountability for ensuring that all students have access to high-quality teaching.

Updated: Dec. 16, 2015