The Impact of Automated Writing Evaluation on Iranian EFL Learners’ Essay Writing: A Mixed-Methods Study
Subject area: Technology-assisted language teaching
Reza Bagheri 1, Roya Mohammadi Yeganeh 2
1 - Department of English Language and Literature, University of Qom
2 - Department of English Language and Literature, University of Qom
Keywords: automated writing evaluation, essay writing, mixed-methods study, process writing
Abstract:
While writing skill is extensively studied in EFL contexts, more in-depth research is needed to explore how technology can assist its pedagogy. The present study investigated the impact of an automated writing evaluation (AWE) tool on Iranian EFL learners’ essay writing. Its significance lies in examining how learners in an EFL context can reduce errors when AWE corrects them at the moment of writing and exposes them to further examples of each error in their subsequent texts. To this end, 50 Iranian EFL learners studying at the University of Qom were randomly selected. The sample comprised 25 females and 25 males aged 19 to 25. Before using the AWE software, the participants completed a pre-test in which they wrote an essay on an assigned topic; after the treatment, an IELTS Writing Task 2 prompt served as the post-test. The IELTS writing band descriptors were used to score the essays. The ANCOVA results showed a marked improvement in the essay writing of the learners who used the AWE software (Grammarly). The analysis of the interview data revealed that the learners were enthusiastic about the AWE feedback because they were corrected while they were writing their essays. Since AWE proved to be a helpful tool for promoting learners’ writing skills, students may also be motivated to engage with such online learning environments and use them earnestly and productively. The study also found that although the learners who received AWE feedback made greater progress, they began asking their teacher for additional feedback so that AWE and traditional feedback could be combined. The findings have implications for language teachers, material developers, and curriculum designers.
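To make the quantitative analysis above concrete, the following minimal Python sketch (not the authors' actual code) shows how such an ANCOVA could be run with statsmodels, with post-test band scores as the outcome, pre-test scores as the covariate, and group membership as the factor; the file name and column names are hypothetical.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data: one row per learner, with pre- and post-test IELTS band
# scores (0-9) and group membership ('AWE' vs. 'control').
scores = pd.read_csv("writing_scores.csv")

# OLS model with the pre-test score as covariate and group as factor,
# followed by a Type II ANCOVA table.
model = smf.ols("posttest ~ pretest + C(group)", data=scores).fit()
print(sm.stats.anova_lm(model, typ=2))  # F and p for C(group) give the adjusted group effect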