Assessment, Technology, and Change

Jul. 01, 2010

Source: Journal of Research on Technology in Education, 42(3), 309–328. (2010)
(Reviewed by the Portal Team)

Despite three decades of advances in information and communications technology (ICT) and a generation of research on cognition and new pedagogical strategies, the field of assessment has not progressed much beyond paper-and-pencil item-based tests. Research has shown these instruments are not valid measures of sophisticated intellectual performances.

In this article, the authors present a model for how technology can provide more observations about student learning than current assessments. To illustrate this approach, the authors describe their early research on using immersive technologies to develop virtual performance assessments.

The authors use the Evidence-Centered Design (ECD) framework (Mislevy, Steinberg, & Almond, 2003) to develop interactive performance assessments of scientific inquiry that better reflect the situative, complex performances that scientists, and scholars expert in inquiry learning, call for students to master.

Design Framework

ECD is a comprehensive framework comprising four phases of design:
1. Domain analysis
2. Domain modeling
3. Conceptual assessment framework and compilation
4. Delivery architecture

Phases 1 and 2 focus on the purposes of the assessment, the nature of knowing, and the structures for observing and organizing knowledge.

Phase 3 is related to the Assessment Triangle. In this phase, assessment designers focus on the student model (what skills are being assessed), the evidence model (what behaviors/performances elicit the knowledge and skills being assessed), and the task model (what situations elicit the behaviors/evidence). Tasks are created in the compilation stage of Phase 3; its purpose is to develop models for schema-based task authoring and protocols for fitting and estimating psychometric models.

Phase 4, the delivery architecture, focuses on the presentation and scoring of the task (Mislevy et al., 2003; Mislevy & Haertel, 2006).


The assessments that the authors are creating will complement rather than replace existing standardized measures by assessing skills not possible via item-based paper-and-pencil tests or hands-on real-world performance assessments.

One advantage of virtual assessments is that they alleviate the need for extensive training in administering tasks. Paper-based performance assessments are difficult to administer in a standardized way and require extensive training. With virtual assessments, standardization can be ensured by delivering instructions automatically via the technology.

A second advantage is that virtual assessments alleviate the need for providing materials and kits for hands-on tasks. Everything will be inside the virtual environment.

Third, these performance assessments will be easier to administer and will require very little, if any, training of teachers. Scoring will all be done behind the scenes; there will be no need for raters or training of raters.

Fourth, virtual assessments will alleviate safety issues and the inequity that arises from unequal access to resources.

In their work developing virtual inquiry curricula, the authors built the capability to simulate the passing of time, allowing students to collect data on change over time and to conduct experiments in which time can be fast-forwarded. These capabilities allow for rich learning experiences and make it possible to conduct experiments that would otherwise take too long for learning or assessment purposes.

Mislevy, R., & Haertel, G. (2006). Implications of evidence-centered design for educational testing (Draft PADI Technical Report 17). Menlo Park, CA: SRI International.

Mislevy, R. J., Steinberg, L. S., & Almond, R. G. (2003). On the structure of educational assessments. Measurement: Interdisciplinary Research and Perspectives, 1, 3–62.

Updated: Jul. 04, 2010