Source: Journal of Research on Technology in Education, 54:2, 249-266
(Reviewed by the Portal Team)
In this study, the authors rigorously examine the instrument most frequently used to measure attributes of digital natives,
the Digital Natives Assessment Scale (DNAS; Teo, 2013), to empirically assess the underlying framework and to discuss implications for working with current learners, whether they are digital natives or not.
To do so, the study tests the potential of the DNAS instrument and its underlying model to identify attributes applicable to contemporary generations, using a sample of American pre-service teachers at multiple institutes of higher learning.
The overarching goal is to extend the empirically based discussion of the digital native construct.
Purpose statement
This study sought to investigate the validity of Teo’s DNAS model with an undergraduate population in American universities.
Specifically, this study is intended to answer these research questions:
1. How well does the DNAS measure the perception of digital nativeness in American undergraduate pre-service teachers?
2. How do American undergraduate pre-service teachers perceive themselves along DNAS constructs, and how might their perceptions shift?
These questions were examined through confirmatory factor analyses on two models of the DNAS data from a sample of American pre-service teachers along with analyses of survey data on Teo’s four constructs.
Methods
The DNAS was administered to all sections of the introductory educational/instructional technology courses at three institutes of higher education, one each in the Southeast, Northeast, and Midwest United States.
The original DNAS instrument (Teo, 2013), which included 30 items, was administered via Qualtrics.
Data from the full 30 items were collected to allow for comparison of multiple models, as well as for exploratory factor analyses in the event that the models determined by Teo did not hold for the data collected in this study.
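As a point of reference for the analyses that follow, the sketch below shows how a four-factor confirmatory model of this kind might be fit in Python with the semopy package. It is a minimal illustration under assumed item names and an assumed input file, not the authors' actual model specification or analysis code.

```python
# Minimal sketch (not the authors' code) of fitting a four-factor CFA model of
# the DNAS with the semopy package. Item names (G1..I8) and the input file are
# hypothetical placeholders.
import pandas as pd
import semopy

# Lavaan-style measurement model: four latent factors, 30 observed items.
MODEL_30_ITEM = """
GrewUpWithTech =~ G1 + G2 + G3 + G4 + G5 + G6 + G7 + G8
Multitasking   =~ M1 + M2 + M3 + M4 + M5 + M6 + M7
Graphics       =~ R1 + R2 + R3 + R4 + R5 + R6 + R7
InstantRewards =~ I1 + I2 + I3 + I4 + I5 + I6 + I7 + I8
"""

def fit_cfa(description: str, data: pd.DataFrame) -> pd.DataFrame:
    """Fit a CFA model and return its fit statistics (CFI, TLI, RMSEA, etc.)."""
    model = semopy.Model(description)
    model.fit(data)                    # maximum-likelihood estimation by default
    return semopy.calc_stats(model)    # one-row table of fit indices

responses = pd.read_csv("dnas_responses.csv")  # one row per participant, one column per item
print(fit_cfa(MODEL_30_ITEM, responses)[["CFI", "TLI", "RMSEA", "chi2"]])
```

The same helper could then be pointed at a 21-item specification so that the two competing models can be compared on the same fit indices.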
In addition to the DNAS items, the researchers collected both pre- and post-DNAS self-assessments in which respondents rated themselves on Teo’s four factors (i.e., grew up with technology; comfortable with multitasking; reliant on graphics for communication; thrive on instant gratification and rewards) as well as on the general idea of being a “digital native.”
The DNAS factors for self-description were phrased as tech-savvy, multitasker, preferring graphics, and thriving on rewards.
The participants responded as to how likely they would be to describe themselves with the aforementioned descriptors using a 7-point Likert scale ranging from “extremely likely” to “extremely unlikely.”
Responses were collected both before and after the DNAS was administered to allow analyses of whether a participant’s perception of the terms changed based on the DNAS vocabulary.
Speculating that participants in this study might not have a strong understanding of what it means to be a digital native or of the four specific designators, the researchers determined that collecting and analyzing these two data points for each descriptor would allow for a better understanding of the participants than a single administration would.
Furthermore, the dual collection allowed for statistical analyses that would help clarify (a) how participants assessed themselves along the five constructs and (b) the strength, or lack thereof, of the DNAS for measuring digital nativeness (i.e., content validity).
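To make the pre/post design concrete, the following sketch illustrates one way such paired self-assessment ratings could be compared, using paired-samples t-tests from SciPy. The column names and input file are hypothetical, and this is not presented as the authors' actual analysis.

```python
# Illustrative sketch of a pre/post comparison of the five self-description
# ratings (7-point Likert, coded 1-7). Column names and the file are hypothetical.
import pandas as pd
from scipy import stats

DESCRIPTORS = ["tech_savvy", "multitasker", "prefers_graphics",
               "thrives_on_rewards", "digital_native"]

df = pd.read_csv("self_assessments.csv")   # pre_* and post_* columns per descriptor

for d in DESCRIPTORS:
    pre, post = df[f"pre_{d}"], df[f"post_{d}"]
    result = stats.ttest_rel(pre, post)    # paired-samples t-test
    print(f"{d}: pre mean = {pre.mean():.2f}, post mean = {post.mean():.2f}, "
          f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```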
Participants
Participants for this study were N = 178 pre-service teachers enrolled in a technology integration course for teachers at three higher education institutes across three distinct regions of the United States.
Institutional response data for the 178 participants showed that 46.1% attended the institute in the southeastern U.S., 29.2% attended the institute in the northeastern U.S., and 20.8% attended the institute in the midwestern U.S.; two respondents (1.1%) did not indicate their institute.
Findings and discussion
Question 1
This research question explores the DNAS’s ability to measure American pre-service teachers’ perceptions of their own digital nativeness.
The operationalization of the digital native construct within the population of this study proved to be problematic.
The authors were unable to definitively confirm the dimensions of the DNAS established by Teo (2013) as a four-factor model, with either the 21-item or the 30-item version.
Half of the fit indices showed poor fit to the model, whereas the remaining indices showed marginal fit at best.
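For readers unfamiliar with how “poor” and “marginal” fit are typically judged, the short sketch below checks fit indices against cutoffs conventionally cited in the CFA literature (e.g., CFI and TLI at or above .95, RMSEA at or below .06). The numeric values shown are placeholders, not the indices reported by the authors.

```python
# Illustrative check of CFA fit indices against conventional cutoffs.
# The example values are placeholders, not results from this study.
CUTOFFS = {"CFI": (">=", 0.95), "TLI": (">=", 0.95), "RMSEA": ("<=", 0.06)}

def acceptable(index: str, value: float) -> bool:
    """Return True if the index meets its conventional cutoff."""
    direction, threshold = CUTOFFS[index]
    return value >= threshold if direction == ">=" else value <= threshold

example = {"CFI": 0.88, "TLI": 0.86, "RMSEA": 0.09}   # placeholder values
for index, value in example.items():
    verdict = "acceptable" if acceptable(index, value) else "poor/marginal"
    print(f"{index} = {value:.2f}: {verdict}")
```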
While the DNAS showed acceptable fit with Singaporean, Chinese, and Turkish populations (Huang et al., 2019; Teo, 2013, 2016; Teo et al., 2016), it did not fit the data from this American undergraduate population.
This would suggest that the digital native as a construct may be in question.
Overall, the authors find that these results further support the conclusion that the DNAS is not fully valid with the American undergraduate population of pre-service teachers.
This may indicate one of three possible interpretations.
First, the DNAS may not fully align with how the digital native construct could or should be quantified.
Second, the generalizability aspect of construct validity may be in question, as the DNAS score loadings were not consistent across time and differing populations (Messick, 1998).
Alternatively, these results potentially lend empirical credence to the idea that the digital native simply does not exist as originally envisioned.
Question 2
The authors investigated how American pre-service teachers view themselves along the designators identified by Teo in the DNAS research and in what ways, if any, those perceptions shift after taking the DNAS.
Participants were asked to self-assess the likelihood of describing themselves along the traits identified by Teo (2013) and the idea of being a “digital native.”
Measuring both before and after participants took the DNAS allowed for analyses of how perceptions may have shifted in response to the terms used by Teo to describe a digital native.
The discrepancy within the sample between the DNAS model and the data further extended into the overall constructs and the idea of the digital native itself.
Before completing the DNAS, the average response on each of the four DNAS factors (having grown up with technology; comfort with multitasking; reliance on graphics for communication; and thriving on instant gratification and rewards [Teo, 2013]) was at or above “slightly likely,” as was the average response for the overall perception of being a digital native.
On the post-DNAS administration, respondents’ average response decreased on all factors except having grown up with technology.
In two cases (thriving on rewards and preferring graphics), the change in response mean was statistically significant between the pre- and post-DNAS administration.
These changes in response may suggest that the way the DNAS items were written altered respondents’ interpretations of the factors and/or the application of the term to themselves.
Overall, the authors conclude that the nature of the four traits identified by Teo and the identifier digital native are problematic.
Within this study, the conflict primarily arises from a seeming lack of content validity in the DNAS itself, as many items are no longer relevant to, nor representative of, the current digital context.
Messick (1987) explains that, traditionally, content validity treats tests as samples about which inferences are to be drawn or predictions made.
The specification of domain boundaries and the appraisal of the relevance and representativeness of the test items to that domain are the crucial considerations (Messick, 1987).
The authors find that the DNAS may not fully identify the nature of the contemporary learner, regardless of the debatable construct validity of the digital native itself.
While this cannot conclusively negate the idea of the digital native, it provides strong empirical support for the need for further exploration.
The authors conclude that further research into how the contemporary learner learns and is taught needs to occur without the digital native tag.
These avenues of research, in all domains, should examine alternative frameworks that may better contextualize the contemporary learner, including digital literacy in its various forms, digital fluency, and digital citizenship.
These domains collectively and independently hold more potential for insights into teaching and learning than the lone concept of the digital native.
Only when the nature of the contemporary learner is fully recognized can researchers design studies in all areas impacting those populations appropriately.
References
Huang, F., Teo, T., & He, J. (2019). Digital nativity of university teachers in China: Factor structure and measurement invariance of the Digital Native Assessment Scale (DNAS). Interactive Learning Environments, 1–15.
Messick, S. (1987). Validity. ETS Research Report Series, 1987(2), i-208.
Messick, S. (1998). Consequences of test interpretation and use: The fusion of validity and values in psychological assessment. ETS Research Report Series.
Teo, T. (2013). An initial development and validation of a Digital Natives Assessment Scale (DNAS). Computers & Education, 67, 51–57.
Teo, T. (2016). Do digital natives differ by computer self-efficacy and experience? An empirical study. Interactive Learning Environments, 24(7), 1725–1739.
Teo, T., Kabakçı Yurdakul, I., & Ursavaş, Ö. F. (2016). Exploring the digital natives among pre-service teachers in Turkey: A cross-cultural validation of the Digital Native Assessment Scale. Interactive Learning Environments, 24(6), 1231–1244.