Source: Journal of Teacher Education, Volume 72, Issue 1, pp. 42-55
(Reviewed by the Portal Team)
This study examines how various constituencies (i.e., administrators, faculty, staff, and candidates) within and across teacher preparation programs (TPPs) make sense of edTPA.
It also examines how associated state policy designs and organizational factors shape this process.
The authors use a multiple-embedded case study (Yin, 2013).
Two different states were selected—Illinois and Iowa—along with four TPPs in each state.
They draw upon interviews (n = 69), focus groups (n = 6), and documents to examine how these various constituencies make sense of edTPA.
In particular, they focus on the intersection between policy design and local program context.
They then analyze how this intersection promotes or constrains edTPA as a tool for inquiry or compliance.
Their research question is as follows:
Question 1: How are different stakeholders (i.e., administrators, faculty, staff, and teacher candidates) perceiving and responding to edTPA within and across TPPs—as a tool for inquiry or compliance?
Question 1a: How do associated state policy designs influence this process?
Question 1b: What are the major organizational factors similarly playing a role?
Method
Design
To allow for rigorous examination across policy levels, the authors used a multiple-embedded case design (Yin, 2013).
They first selected two case states—Illinois and Iowa.
These states represented the two policy designs associated with edTPA (i.e., coercive vs. voluntary).
Illinois mandates edTPA for all teacher candidates, whereas Iowa gives TPPs the option to adopt edTPA (among other assessments) for licensure.
Embedded Cases
To examine edTPA within and across TPPs, four embedded cases were selected within each state.
These TPPs were purposefully chosen to reflect public, private, urban, rural, demographically homogeneous/heterogeneous, and alternative cases in each state.
Data Sources and Procedures
Within these embedded cases, the authors identified TPP administrators, faculty, and staff in one of two ways: institutional website biographies or snowball sampling.
Over three years, interviews were conducted by the first author using a semi-structured protocol (n = 63).
They purposely interviewed at least one administrator, one faculty member, and one staff member to trace edTPA sensemaking across organizational strata.
A minimum of five and a maximum of 15 participants were interviewed at each embedded TPP.
They also interviewed faculty across Illinois and Iowa TPPs outside selected cases (n = 6; three in each state), providing an additional layer of comparison.
Overall, participation rates were about 85%.
To highlight the voices of teacher candidates, focus groups (n = 6) were conducted by the first author halfway through candidates’ student teaching seminar course—the same semester they complete edTPA.
This provided an opportunity to gauge candidates’ perceptions of edTPA while they were completing it, and to assess how their respective TPPs may have played a role.
A semi-structured protocol served as a guide.
Finally, the authors collected emails, letters, agendas, minutes, progress reports, and internal records at each embedded TPP.
These documents were either provided by interviewees or found via institutional websites, helping to assess policy tools, fiscal resources, faculty/staff support, and candidate expertise.
They also served as a member check for prior interviews and focus groups conducted by the first author.
Data Analysis and Procedures
All data were transcribed and uploaded to ATLAS.ti, a qualitative data analysis software package.
Data analysis was conducted by the first author.
He used the constant comparative method (Glaser & Strauss, 1967) and coding suggestions of Saldaña (2016).
Findings and Discussion
In tracing stakeholders’ (i.e., TPP administrators, faculty, staff, and candidates) diverse sensemaking of edTPA, the authors’ data suggest that policy design influenced them to view edTPA as either an inquiry-based or a compliance-based tool.
Still, organizational factors further bifurcated this view, particularly due to variations in institutional leadership (i.e., top/middle), mission (i.e., behaviors/values), identity (e.g., student demographics/programmatic structure), fiscal resources, coupling, routines, and tools.
Furthermore, individual views of edTPA as a tool for measuring teacher effectiveness became important.
Toward these ends, they find that complexities in the policy sensemaking process have produced many perceived promises and pitfalls for edTPA as a national teaching assessment and support system: what the authors term the “good,” the “bad,” and the “ugly.”
This section unpacks each of these deductive terms, providing a holistic picture of edTPA policy sensemaking and subsequent implementation.
The Good
If edTPA is a policy lever designed to build capacity and improve teacher education (i.e., inquiry), then a measurement of its “goodness” (or effectiveness) would be how it has had a perceived positive impact on TPPs.
The authors’ data suggest that despite different policy designs, many stakeholders across Illinois and Iowa perceive edTPA as having a positive influence on their programs and teacher education generally.
This is particularly true when TPPs have provided the necessary capacities for implementation (i.e., active use).
Four key promises were found.
First, they find edTPA is fostering cross-communication/collaboration across individual programs and between universities more broadly.
In other words, TPPs are making their practice public in an effort to improve implementation across contexts.
Second, they find edTPA is institutionalizing continuous improvement/reflection across TPPs.
In particular, TPPs are using score data to make informed decisions, redesigning their programs around inputs, and seeking accreditation from nongovernmental organizations (NGOs) like CAEP.
Third, edTPA is providing a systematic foundation around what the field deems to be important.
This includes academic language, teaching pedagogy, differentiation, and assessment.
Fourth, they find edTPA is cultivating external legitimacy—something the profession has often lacked (Cohen et al., 2020).
On one hand, it is helping to standardize what good teaching is within programs and across states.
On the other hand, multiple interviewees commented that school leaders are impressed by the demonstrable evidence that candidates who have completed edTPA display.
The Bad
Similar to “goodness,” edTPA’s “badness” is a measure of how it has had a perceived negative impact on TPPs.
Toward these ends, the authors found that incompatibility between policy design and/or organizational factors resulted in two sets of issues:
(a) implementation challenges and
(b) philosophical challenges.
Each is discussed in turn.
Concerning implementation challenges, two kinds were found:
(a) taking time away from important aspects of the program (i.e., narrowing the curriculum) and
(b) complications during student teaching.
These challenges were less connected to policy design and more to a TPP’s capacities and will to support edTPA.
Besides teacher candidate stress (commonly discussed across the literature), TPPs dealt with contentious university-district partnerships in both states.
These included principals and cooperating teachers unwilling to support edTPA candidates, as well as TPPs no longer partnering with some districts.
Concerning philosophical challenges, three kinds were found:
(a) loss of internal control,
(b) privatization of teacher education, and
(c) limitations on social justice.
Unlike implementation challenges, these challenges were a byproduct of both policy design and how TPPs perceived candidates should be prepared and measured.
For those who actively resisted edTPA, the assessment became a de-professionalization tool.
These individuals described a loss of internal control.
The authors find there may be a mismatch between social justice, urban TPPs, and the normative values of edTPA.
Although the assessment does support such pedagogy (Sato, 2014), it is not emphasized to the extent some programs feel is required.
In this way, a philosophical conundrum may be a core reason behind active resistance to edTPA, regardless of what design TPPs operate under.
Overall, these implementation and philosophical challenges illustrate that some TPP behaviors and values may not align with edTPA.
Jefferson, one of the embedded TPPs, serves as a cautionary tale.
The authors found that limited capacities constrained its ability to adequately meet the assessment and the associated mandate.
Equally, compatibility issues resulted from opposing value structures.
These challenges are exacerbated under a coercive design because TPPs cannot withdraw.
The Ugly
Collectively, these diverse perspectives illustrate an “ugly” situation for edTPA and teacher education.
The authors find stakeholders who firmly believe edTPA is professionalizing the field, affording a framework for program improvement/redesign.
However, they also find stakeholders who firmly believe edTPA is de-professionalizing the field, providing an additional accountability mechanism for programs and candidates alike.
So while edTPA has become “the first nationally available, educator-designed support and assessment system for teachers entering the profession” (SCALE, 2015, p. 4), it has also illustrated a schism.
The authors argue this schism is related to TPP capacities and will to meet these varying edTPA policies.
It is also related to enduring differences regarding what “good” teaching is and how best to measure it.
Seen from this view, when TPPs believe edTPA does align with their organizational behaviors and values, their capacities and will to meet its associated policies are high.
edTPA thus becomes a tool for inquiry, just as its developers intended.
Alternatively, when TPPs do not believe their organizational behaviors and values align, their capacities and will to meet edTPA associated policies are low.
edTPA thus becomes a tool for compliance, undermining the positive outcomes the assessment system affords.
Making the “goodness” of edTPA work across multiple contexts with varying sensemaking subsequently becomes difficult.
Conclusion
Within a policy environment that seeks greater rigor and accountability, edTPA offers legitimacy, accountability, and professionalism.
Notwithstanding, if edTPA is to succeed across states and TPPs, the complexities of both policy design and local implementation processes need to be further considered.
To this end, this study provides an alternative answer to Richmond et al.’s (2019) question of “how do we develop assessments that are informative, scalable, and accepted by the majority of experts in the field?” (p. 86).
Future research should continue to examine how policy design and local complexities affect edTPA implementation, and which policy design is most effective in positively impacting the field.
References
Cohen, J., Hutt, E., Berlin, R. L., Mathews, H. M., McGraw, J. P., & Gottlieb, J. (2020). Sense making and professional identity in the implementation of edTPA. Journal of Teacher Education, 71(1), 9–23.
Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. Aldine.
Richmond, G., Salazar, M. D. C., & Jones, N. (2019). Assessment and the future of teacher education. Journal of Teacher Education, 70, 86–89.
Saldaña, J. (2016). The coding manual for qualitative researchers (3rd ed.). SAGE.
Sato, M. (2014). What is the underlying conception of teaching of the edTPA? Journal of Teacher Education, 65(5), 421–434.
Stanford Center for Assessment, Learning and Equity. (2015). Educative assessment & meaningful support: 2014 edTPA administrative report. https://secure.aacte.org/apps/rl/res_get.php?fid=2183
Yin, R. K. (2013). Case study research: Design and methods. SAGE.