Do Student Achievement Outcomes Differ Across Teacher Preparation Programs? An Analysis of Teacher Education in Louisiana

Dec. 01, 2012

Source: Journal of Teacher Education, 63(5), December 2012, pp. 304-317.
(Reviewed by the Portal Team)

This study describes the results of one year's analyses from a systematic approach to examining student achievement outcomes for recent program completers across teacher preparation programs (TPPs) in Louisiana.

Achievement outcomes for students taught by recent completers of Louisiana's teacher preparation programs (TPPs) are examined using hierarchical linear modeling of state student achievement data in English language arts, reading, mathematics, science, and social studies.
The current year’s achievement in each content area is predicted using previous achievement data, student characteristics, classroom characteristics (e.g., percentage of students with disabilities), school characteristics, and attendance of teachers and students.
The contribution of a teacher having recently completed a specific TPP is modeled at the classroom level as an indicator variable for each TPP.
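As a minimal sketch of how program membership might be encoded at the classroom level, the following assumes invented program names and classroom records (none of these values come from the study); each classroom receives a 0/1 indicator for its teacher's TPP, which could then enter a value-added regression:

```python
# Hypothetical sketch: encoding each teacher's preparation program (TPP)
# as classroom-level indicator variables for a value-added model.
# Program names and classroom records are invented for illustration.

def tpp_indicators(records, programs):
    """Return one row of 0/1 program indicators per classroom record."""
    rows = []
    for rec in records:
        rows.append([1 if rec["tpp"] == p else 0 for p in programs])
    return rows

programs = ["Program A", "Program B", "Program C"]
records = [
    {"classroom": 1, "tpp": "Program B"},
    {"classroom": 2, "tpp": "Program A"},
    {"classroom": 3, "tpp": "Program C"},
]

# Each classroom gets a 1 only in the column of its teacher's program.
indicators = tpp_indicators(records, programs)
```

In a fitted model, the coefficient on each indicator column would then estimate that program's average contribution relative to the reference category.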


Results demonstrated considerable overlap in confidence intervals (CIs) between programs, although some programs had coefficients whose CIs did not overlap with substantive anchors, such as the average new teacher or the average experienced certified teacher in that content area, at either a 68% or a 95% confidence level.
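The CI-based comparison described above can be sketched as a simple decision rule. The coefficient estimates, standard errors, and the normal-approximation critical values below are illustrative assumptions, not values reported in the study; the anchor (for example, the average new teacher) is coded as zero:

```python
# Hypothetical sketch of the CI-overlap decision rule: a program's
# coefficient is compared against a substantive anchor (e.g., the
# average new teacher, coded as 0) at 68% and 95% confidence levels.
# Estimates and standard errors are invented for illustration.

Z = {0.68: 0.994, 0.95: 1.960}  # two-sided normal critical values

def interval(estimate, se, level):
    """Normal-approximation confidence interval for a coefficient."""
    z = Z[level]
    return (estimate - z * se, estimate + z * se)

def excludes_anchor(estimate, se, level, anchor=0.0):
    """True if the CI at the given level does not contain the anchor."""
    lo, hi = interval(estimate, se, level)
    return anchor < lo or anchor > hi

# A program whose 95% CI excludes the new-teacher average (anchor = 0):
print(excludes_anchor(0.15, 0.05, 0.95))  # True
# A program whose wide CI overlaps the anchor even at the 68% level:
print(excludes_anchor(0.05, 0.10, 0.68))  # False
```

The 68% level roughly corresponds to plus or minus one standard error, which is why both levels appear as screening thresholds in this kind of reporting.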
One of the notable gaps in the literature is the extent to which teacher preparation matters.
The absence of a compelling literature base regarding variability in teacher preparation may have more to do with the complexity of studying it than with a lack of interest in the topic on the part of researchers.

One of the critical challenges that this sort of data creates is arriving at reasoned and reasonable decision-making rules for a context in which measures are repeated annually, potentially provide information for program improvement, and include the population of new teachers from a program in tested grades and subjects. This decision-making context diverges from the typical hypothesis-testing research context in a number of important ways.
The availability of data that permits study based on large representative administrative databases is a relatively recent phenomenon.
Value-added analyses have emerged with dramatically increasing popularity in education over the last two decades as the methods have become more widely understood, computational resources have become more readily available, and the necessary data have become more available.
These analyses have used previous achievement, student demographic variables, and contextual variables to estimate the contribution of a range of educational inputs, including, in their most controversial application, individual teachers.
This study illustrates a method for extending this type of analysis to the study of teacher preparation.
Although the data presented here do not contain enough information on enough different programs within each pathway to answer questions about differences between pathways, they do provide clear cautionary data: examination of the differences between Private Provider Practitioner Programs 1 and 2 suggests the possibility of considerable variability within pathways.
A final possibility is interest in the effects of specific features within preparation programs.
The challenges underlying this sort of contrast are particularly daunting given the measurement challenges surrounding program features and obtaining sufficient controls to isolate the impact of individual program features.

These data also illustrate a potential process for providing TPPs feedback on the achievement of students taught by new graduates.
These types of data provide one key element of continuous improvement models: the ability to obtain repeated measurements of a relevant, meaningful outcome of interest.
The authors argue that if sufficient controls are in place so that policy makers and teacher educators trust the evaluation as fair, and if results are stable or show clear trends over time, these analyses offer a potentially substantive new tool for program improvement, providing feedback in a domain in which it has not been available in the past.

The Louisiana Board of Regents (BoR) is using value-added analysis to provide TPPs with information on how students taught by recent program completers are faring on standardized achievement tests.
This has prompted some programs to examine domains in which they were dissatisfied with their results, either relative to their performance in other content areas or relative to the state, in order to identify potential points of program improvement.
As with all value-added data, the results do not answer why a particular result occurred or what might be done to improve on it; rather, they provide feedback on performance that can focus program improvement efforts.

Updated: Dec. 21, 2015