Source: Journal of Teacher Education, 61(4): 302-312. (September/October 2010).
(Reviewed by the Portal Team)
The purpose of the two studies reported in this article was to measure the effects of a computerized professional development program for a concept teaching routine.
This investigation measured these effects in comparison to face-to-face instruction across all four levels of Kirkpatrick's (2006) evaluation model, plus student satisfaction results.
For each study, teachers were randomly assigned to either a virtual workshop group that used a multimedia software program for PD or an actual workshop group that participated in a live PD session.
Study 1 addressed the research question: How do the effects of a virtual workshop (a multimedia software program) and an actual workshop (a live, face-to-face program) compare with regard to teacher knowledge of the intervention (learning), teacher skill in preparing to use the intervention (learning), and teacher satisfaction (teacher reaction) with the professional development received?
Study 2 addressed the research question: How do the effects of a virtual workshop and an actual workshop compare with regard to teacher implementation of an instructional practice in the classroom (behavior), student learning (results), and student satisfaction with the instruction received (student reaction)?
Participants. A total of 59 certified teachers who were enrolled in a graduate-level course on increasing access to the general education curriculum for students with disabilities volunteered and were assigned to either the experimental or the control group.
Teachers. Eight teachers volunteered to participate in this study. Four teachers were randomly selected to serve in the experimental group; the four remaining teachers served in the control group.
Students. A total of 125 students participated in the study. All of these students were enrolled in one of the eight teachers’ classes at three middle schools in a large Midwestern city.
Study 1, which focused on Levels 1 and 2 (reaction and learning), showed that the teachers' scores on the knowledge and Concept Diagram tests improved significantly following participation in either workshop.
With regard to Study 2, the implementation results suggest that teachers in both groups performed a substantially greater number of the targeted instructional behaviors in their classrooms after participation than before, and their post-training scores represented a high level of fidelity. Furthermore, the two groups of students were similarly satisfied with their teachers' use of the routine. Thus, the computerized professional development program used in this study was as effective as face-to-face professional development relative to Levels 3 and 4 of Kirkpatrick's model (behavior and results), plus a new factor: student satisfaction.
Regarding both teacher reaction and learning, the results of Study 1 indicate that teachers express positive reactions to computerized professional development programs.
Regarding teacher learning, results from Study 1 show that teacher knowledge of instructional methods and ability to prepare for instruction improved following computerized professional development. Results of Study 1 also show that learning outcomes were similar for teachers who participated in face-to-face professional development and those who participated in computerized professional development.
Regarding teacher behavior, student learning, and student reaction, the results of Study 2 demonstrate that computerized programs can not only change teacher behavior but change it in ways that improve student outcomes and student satisfaction.
In summary, in light of Kirkpatrick's evaluation model, the current studies demonstrate that the field has the capacity to engineer software programs that can provide effective professional development to teachers. These two studies demonstrate that computerized professional development programs can be designed so that teachers gain substantial knowledge (i.e., learning) about an instructional practice and express high levels of satisfaction (i.e., reaction) with what they have learned and how they have learned it. More important, such programs have the power to change teacher classroom practice (i.e., behavior) in ways that significantly improve student learning (i.e., results) and that are acceptable to students.
Reference
Kirkpatrick, D. (2006). Evaluating training programs: The four levels (3rd ed.). San Francisco, CA: Berrett-Koehler.