Comparability Between Paper-Based and Computer-Based Testing for ACT

Three recent studies demonstrated that paper and online tests measure the same knowledge and skills, but students who test online tend to perform slightly better than students who test on paper, especially on the English, reading, and writing tests. ACT will equate scores across modes as needed to ensure that ACT scores can be treated as interchangeable regardless of testing mode.

Summary

Over the past year, ACT has been investigating how best to offer online testing during a Saturday national testing event. Prior research on the comparability between paper and online testing for the ACT test indicated that students testing on computers tended to perform slightly better than students testing on paper, especially on the English and reading tests.

The studies conducted in 2014 and 2015 used the Pearson TestNav online testing platform, which is currently used for state and district online testing. However, on national testing dates, online testing will occur on the TAO platform developed by Open Assessment Technologies (OAT). In part due to concerns that test scores from different online testing platforms might exhibit different mode effects (e.g., due to differences in the interface and item rendering), a series of mode comparability studies was conducted during the 2019-2020 academic year.

The three studies took place on the national testing dates in October 2019, December 2019, and February 2020. Only the February 2020 study included writing as an optional component. Within each study, the same form was administered on paper and online; a different form was used in each of the three studies. As in earlier mode comparability studies, students were randomly assigned to test on paper or online, and all participants received college-reportable scores.

In general, the results were quite consistent across the three studies and with prior ACT mode comparability studies. Item-level analyses indicated that students who tested online were more likely to answer most items correctly and less likely to leave items blank, particularly near the end of the English and reading tests. Score equivalence analyses revealed that online scores were higher than paper scores on average, especially on the reading and English tests.
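
To make the item-level comparison concrete, the sketch below (illustrative only, not ACT's analysis code) computes the proportion of correct responses and the omit rate for each item by testing mode from a hypothetical student-by-item response file; the column names ("mode", "item_id", "correct", "omitted") are assumptions for the example.

```python
# Illustrative sketch: per-item proportion correct and omit (blank) rates by
# testing mode. Column names and data layout are hypothetical.
import pandas as pd

def item_stats_by_mode(responses: pd.DataFrame) -> pd.DataFrame:
    """Summarize each item by mode.

    Expects one row per student-item response with columns:
      mode     -- "paper" or "online"
      item_id  -- item identifier
      correct  -- 1 if answered correctly, 0 otherwise
      omitted  -- 1 if left blank, 0 otherwise
    """
    return (
        responses
        .groupby(["item_id", "mode"])
        .agg(p_correct=("correct", "mean"),   # proportion answering correctly
             omit_rate=("omitted", "mean"),   # proportion leaving the item blank
             n=("correct", "size"))           # number of examinees per cell
        .unstack("mode")                      # paper and online columns side by side
    )
```

Items near the end of a test with a higher omit rate on paper than online would show up directly in a table like this.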

Across studies, the mode effect ranged from 0.16 to 0.22 standard deviations in reading, from 0.10 to 0.13 in English, from 0.04 (nonsignificant) to 0.12 in science, and from -0.01 (nonsignificant) to 0.06 in math. In the February 2020 study, the average online writing score was 0.39 standard deviations higher than the average paper score.
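
The mode effects above are standardized mean differences: the difference between the online and paper group means divided by a pooled standard deviation. A minimal sketch of that calculation is shown below, assuming two hypothetical vectors of scale scores from the randomly assigned groups; it is an illustration, not ACT's reporting code.

```python
# Illustrative sketch: mode effect as a standardized mean difference
# (online mean minus paper mean, divided by the pooled standard deviation).
import numpy as np

def standardized_mode_effect(online_scores, paper_scores) -> float:
    online = np.asarray(online_scores, dtype=float)
    paper = np.asarray(paper_scores, dtype=float)
    n1, n2 = online.size, paper.size
    # Pooled variance across the two randomly assigned mode groups
    pooled_var = ((n1 - 1) * online.var(ddof=1) +
                  (n2 - 1) * paper.var(ddof=1)) / (n1 + n2 - 2)
    # Positive values indicate higher average scores for the online group
    return (online.mean() - paper.mean()) / np.sqrt(pooled_var)
```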

The construct equivalence analyses indicated that paper and online testing appeared comparable in terms of correlations among the subject-area scores, effective weights, internal consistency reliability, and confirmatory factor analysis model fit and average factor loadings. In all cases, the online test was equated to the paper test to ensure that scores reported from these studies would be comparable regardless of testing mode.
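
As one way to picture what equating across modes involves, the sketch below applies linear (mean-sigma) equating under a random-groups design, placing online scores on the paper-score scale by matching the two groups' means and standard deviations. This is an assumed, simplified method chosen for illustration; the operational equating procedure is not described in this summary and may differ (for example, equipercentile equating).

```python
# Illustrative sketch: linear (mean-sigma) equating of online scores onto the
# paper-score scale under a random-groups design. Assumed method, for
# illustration only.
import numpy as np

def linear_equate(online_scores, paper_scores):
    """Return a function mapping an online score to the paper scale.

    Matches the mean and standard deviation of the online-group score
    distribution to those of the paper-group distribution.
    """
    online = np.asarray(online_scores, dtype=float)
    paper = np.asarray(paper_scores, dtype=float)
    slope = paper.std(ddof=1) / online.std(ddof=1)
    intercept = paper.mean() - slope * online.mean()
    return lambda x: slope * x + intercept

# Usage: to_paper = linear_equate(online_scores, paper_scores)
#        equated_online = to_paper(online_scores)
```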