Comparability of Computer-Based Testing and Paper-Based Testing: Testing Mode Effect, Testing Mode Order, Computer Attitudes and Testing Mode Preference
With the spread of computer technology in educational testing, computer-based testing (henceforth CBT), which also serves as a green computing strategy, is gaining popularity over conventional paper-based testing (henceforth PBT) due to advantages such as efficient administration, flexible scheduling, and immediate feedback. Since some testing programs have begun to offer both versions of a test simultaneously, some scholars have questioned whether scores from the two modes are comparable. To this end, this study investigated the score equivalency of a test taken by 228 Iranian undergraduate students studying at a state university located in the Chabahar region of Iran, to see whether scores from the two testing modes were equivalent. The two versions of the test were administered to the participants of two testing groups on four testing occasions in a counterbalanced administration sequence with a four-week interval. One-way ANOVA and Pearson correlation were used to compare the mean scores and to examine the relationship of testing order, computer attitudes, and testing mode preference with testing performance. The findings revealed that test takers' scores did not differ between the two modes and that the moderator variables were not external factors affecting students' performance on CBT.
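The two analyses named in the abstract, a one-way ANOVA comparing mean scores across testing modes and a Pearson correlation relating a moderator variable to test performance, can be sketched as below. This is an illustrative sketch only: the score and attitude arrays are hypothetical placeholders, not the study's data, and the helper functions are hand-rolled here so the example runs without external packages.

```python
# Sketch of the two statistics used in the study: a one-way ANOVA
# F statistic (CBT vs. PBT mean scores) and a Pearson correlation
# (e.g., computer-attitude ratings vs. CBT scores).
# All data below are hypothetical, for illustration only.
from statistics import mean

def one_way_anova_f(*groups):
    """F = between-group mean square / within-group mean square."""
    all_scores = [x for g in groups for x in g]
    grand = mean(all_scores)
    k = len(groups)                 # number of groups
    n = len(all_scores)             # total observations
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - mean(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical example data (NOT the study's scores):
cbt_scores = [14, 16, 15, 17, 13, 16]
pbt_scores = [15, 16, 14, 17, 14, 15]
attitude = [3.1, 4.0, 3.5, 4.2, 2.9, 3.8]

f_stat = one_way_anova_f(cbt_scores, pbt_scores)  # compare mode means
r = pearson_r(attitude, cbt_scores)               # attitude vs. CBT score
```

In practice one would obtain the F statistic's p-value (e.g., from an F distribution with k−1 and n−k degrees of freedom) to decide whether the mode means differ significantly, which is the equivalence question the study addresses.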