Assessing Beyond Minimal Compliance

Oct. 01, 2014

Source: Action in Teacher Education, Volume 36, Issue 5-6, p. 351–362, 2014
(Reviewed by the Portal Team)

The purpose of this case study was to investigate the impact of using an electronic assessment system (EAS) beyond meeting minimal teacher education program compliance obligations.
Specifically, this study sought to determine a mechanism by which Educator Preparation Providers (EPPs) could ensure the continuous review of activities critical to educational quality (e.g., curriculum coordination, student assessment, program review, and teaching improvement).

This case study was conducted at a large, public university utilizing an EAS in its educator preparation unit.
Course evaluation forms, program evaluation forms, unit assessment summaries, committee minutes, and the experiences of the staff of the ADTE were examined to describe and analyze the process of developing a framework for the systematic review of data.
Course and program data were collected through the EAS for faculty review.


Many of the challenges the authors have encountered while implementing this yearly evaluation cycle fall into three categories: data, analysis, and ownership.

The EAS made data reports more accessible to program stakeholders and available for use in program improvement.
However, the use of the EAS produced a different set of problems that must be addressed.
Data reporting is more complicated with the EAS.
The authors now have a large volume of data that can be aggregated and disaggregated in an almost infinite number of ways.
It is difficult to create data reports that meet each faculty member’s analytical needs—what one considers vital disaggregation is overwhelming for another.

There is variation in the depth and rigor of the course and program evaluations the authors have collected.
Some faculty merely observe that results are satisfactory, while others constantly probe deeper into the available data.
A related issue is that the majority of data used to complete this program evaluation are quantitative.
Performance measures and data representations of program performance may not necessarily be as indicative of the quality of the program as holistic, overarching understandings.

In this college, faculty are often responsible for multiple programs or courses.
Further, these programs contain some courses that do not have associated lead instructors. Without a designated lead, assessment data may not be carefully scrutinized, and the affected programs may not be meaningfully reviewed on a yearly basis.


Based on their experiences, the authors offer the following framework for the systematic, continuous review of assessment data, a Yearly Program Evaluation Cycle.
It is important for program faculty to be involved in every step of the assessment and accreditation process.
Having a systematic assessment model helps build agreement among faculty and administration on planning assessments, analyzing results, and implementing changes based on that analysis.
Accreditation efforts, as well as other reporting processes, will be facilitated because of the evaluations conducted via the following procedures:

Prepare: Program faculty must define learning goals specific to their individual program and create formal or informal assessment tools that assess these goals.
Conduct: During each academic year, data must be collected using these assessment tools.
Evaluate: Throughout the process, all of the data from program assessments should be gathered and reviewed by the program faculty to identify needed changes.
Close the Loop: These changes must be implemented, and the assessment cycle repeated. Subsequent evaluations will help determine if the implemented changes were successful.


The adoption of the EAS in anticipation of a National Council for Accreditation of Teacher Education (NCATE) visit marked a change in the culture of the College of Education at The University.
The College of Education has been perfecting a yearly program evaluation framework that simultaneously facilitates review of the data gathered through the EAS, is flexible enough to work for all programs, incorporates evidence from multiple sources, and takes full advantage of faculty expertise.

Updated: May. 11, 2015