Assessing Teachers’ Science Content Knowledge: A Strategy for Assessing Depth of Understanding
Source: Journal of Science Teacher Education, Vol. 24, No. 4 (June 2013), pp. 717–743.
(Reviewed by the Portal Team)
This article reports on the evaluation of a model for assessing content knowledge used by researchers in the Problem-Based Learning (PBL) Project for Teachers of Science.
The PBL Project for Teachers of Science was a comprehensive teacher professional development (PD) program that aimed to help teachers develop pedagogical content knowledge (PCK) for teaching science concepts in specific subject areas.
The authors describe the types of information that this assessment strategy can reveal, drawing on its use with the fourth cohort of teachers in the PBL Project for Teachers of Science.
Research Questions
Two research questions guided this study:
1. What information about teachers’ content knowledge can this method yield?
2. What are the strengths and limitations of this assessment approach?
Methodology
The participants in the fourth cohort of the PBL Project for Teachers of Science were 78 individuals who taught science in schools across central Michigan.
The cohort included 32 elementary teachers, 28 middle school teachers, and 18 high school teachers.
The instruments include two types of open-ended questions that assess both general knowledge and the ability to apply Big Ideas related to specific science topics.
The coding scheme is useful in revealing patterns in prior knowledge and learning, and identifying ideas that are challenging or not addressed by learning activities.
This assessment strategy and scoring methodology yield scores for each teacher that characterize the quality of his or her understanding of each Big Idea before and after the PD.
The compiled scores indicated that there were several common ways in which teachers’ content knowledge changed during the PD:
(a) they added ideas that they had not previously used;
(b) they clarified ideas that they had previously used inaccurately or unclearly;
(c) they both added and clarified ideas, but gave confused or inaccurate accounts of some of the new ideas.
The Information about Teachers’ Content Knowledge that this Method Can Yield
The compilation of scores by teacher facilitates assessment of the strength of teachers’ incoming knowledge and of changes in their knowledge, both in terms of the number of Big Ideas used and the clarity, accuracy, and completeness of that use.
In the PBL Project, this information was used to tailor the instructional problems so that participating teachers would have enough initial knowledge to get started on the problems, but the problems would also deepen their understanding of the content.
In addition, this assessment method yields information on the quantity and quality of changes in teachers’ content knowledge pre- and post-PD that can provide data for PD improvement and accountability.
Identifying Big Ideas for which few teachers show improvement can point to weaknesses in the PD and/or the assessment, or to areas where additional PD may be needed.
The Strengths and Limitations of this Assessment Approach
One strength of this assessment approach is that the two types of open-ended assessment questions were authentic tasks that revealed teachers’ depth of understanding prior to the PBL learning activities.
This PD and assessment model could also be applied at the school-district level.
While the assessment strategy offers several advantages over selected-response tests, there are limitations to the method.
One limitation of the content knowledge assessment strategy presented here is that the Big Ideas, although carefully chosen based on extensive experience and knowledge of state and national objectives, are somewhat idiosyncratic.
Another limitation is the time needed to code responses.
Open-ended assessments are, by their nature, more time-consuming to score than selected-response tests.
This strategy would not be appropriate for comparing content knowledge across a very large sample.
PD providers sometimes must choose between efficiency and depth of information provided by various data sources.
There may also be limits to the degree to which the assessments allow diagnosis of the source of misconceptions and gaps in understanding.
These limitations are important in making decisions about the use of the assessment strategy as described.
The authors conclude that these findings support the use of the strategy described here as an effective method for assessing science content knowledge.
The strategy described here serves as an example methodology for assessing content knowledge, especially when used with small numbers of teachers.
This strategy is flexible enough to be applied to many different science topics, and reveals multiple dimensions of content knowledge that are important to effective science teaching.
Another application of this assessment strategy is to study content learning among teachers. Professional development providers need to collect evidence of changes in teachers’ ideas as part of the program evaluation process.