


2019 | 19 | 1 | 118-131

Article title

THE EFFECTIVENESS OF USING A HYBRID MODE OF AUTOMATED WRITING EVALUATION SYSTEM ON EFL STUDENTS’ WRITING

Content

Title variants

Languages of publication

EN

Abstracts

EN
Automated Writing Evaluation (AWE) programs have been used extensively to help both L2 instructors and learners obtain corrective feedback and to score students’ final written products. Research has found that AWE programs help optimize writing output. However, little is known about the hybrid mode, i.e., the use of AWE in which evaluation comes from both the instructor and the AWE program. This paper examines the effects of the two modes on the development of students’ writing in a small case study of six EFL learners. The learners were exposed to both modes, undertaking two sessions with the program in each mode. In the AWE-only mode, the learners wrote an essay via MY Access in the first session and saved their input in the program; in the second session, they revised their essays based on the feedback given by the program. In the hybrid mode, the same students revised their input in the second session according to the instructor’s feedback and then submitted their essays via MY Access. Results showed that students under the hybrid condition significantly outscored those working with the AWE program alone.
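
The abstract reports that learners under the hybrid condition significantly outscored those using the AWE program alone. As a rough illustration only, the sketch below compares hypothetical paired MY Access scores for the six learners with a Wilcoxon signed-rank test, a common choice for a small within-subjects sample; the score values and the choice of test are assumptions for illustration, not data or methods taken from the study.

```python
# Illustrative comparison of the two feedback modes described in the abstract.
# Each of the six learners has one score under the AWE-only mode and one under
# the hybrid (instructor + AWE) mode, so the scores are paired within subjects.
# All numbers below are invented placeholders, not results from the paper.
from scipy.stats import wilcoxon

awe_only_scores = [3.5, 4.0, 3.0, 3.5, 4.5, 3.0]  # placeholder AWE-only scores
hybrid_scores = [4.5, 4.5, 4.0, 4.0, 5.0, 4.0]    # placeholder hybrid-mode scores

# Wilcoxon signed-rank test on the paired differences (hybrid vs. AWE-only).
stat, p_value = wilcoxon(hybrid_scores, awe_only_scores)
print(f"Wilcoxon statistic = {stat}, p = {p_value:.3f}")
```
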

Year

2019

Volume

19

Issue

1

Pages

118-131

Physical description

Contributors

  • Najran University
  • Najran University

References

  • Attali, Y. (2004). Exploring the feedback and revision features of Criterion. Paper presented at the National Council on Measurement in Education, San Diego, CA.
  • Chapelle, C. A. (1999). Research questions for a CALL research agenda: A reply to Rafael Salaberry. Language Learning & Technology, 3(1), 108-113.
  • Chen, C. F. E., & Cheng, W. Y. E. (2008). Beyond the design of automated writing evaluation: Pedagogical practices and perceived learning effectiveness in EFL writing classes. Language Learning & Technology, 12(2), 94-112.
  • Coniam, D. (2009). A comparison of onscreen and paper-based marking in the Hong Kong public examination system. Educational Research and Evaluation, 15(3), 243-263.
  • Coniam, D. (2009). Experimenting with a computer essay-scoring program based on ESL student writing scripts. ReCALL, 21(2), 259-279.
  • Dmytrenko-Ahrabian, M. O. (2008). Criterion online writing evaluation service case study: Enhancing faculty attention and guidance. International Journal of English Studies, 10(2), 121-142.
  • El Ebyary, K., & Windeatt, S. (2010). The impact of computer-based feedback on students’ written work. International Journal of English Studies, 10(2), 121-142.
  • Hamp-Lyons, L., & Lumley, T. (2001). Assessing language for specific purposes. Language Testing, 18(2), 127-132.
  • Herl, H. E., O’Neil Jr, H. F., Chung, G. K. W. K., & Schacter, J. (1999). Reliability and validity of a computer-based knowledge mapping system to measure content understanding. Computers in Human Behavior, 15(3-4), 315-333.‏
  • Hoang, G., & Kunnan, A. J. (2016). Automated writing instructional tool for English language learners: A case study of MY Access. Language Assessment Quarterly, 13(4), 359-376.
  • Kozma, R. B., & Johnston, J. (1991). The technological revolution comes to the classroom. Change, 23(1), 10-22.
  • Lai, Y.-H. (2010). Which do students prefer to evaluate their essays: Peers or computer program. British Journal of Educational Technology, 41(3), 432-454.
  • Lavolette, E., Polio, C., & Kahng, J. (2014). The accuracy of computer-assisted feedback and students’ responses to it. Language Learning & Technology, 19(2), 50-68. Retrieved 1 January 2019 from http://llt.msu.edu/issues/june2015/lavolettepoliokahng.pdf
  • Lee, C., Wong, K., Cheung, W., & Lee, F. (2009). Web-based essay critiquing system and EFL students’ writing: A quantitative and qualitative investigation. Computer Assisted Language Learning, 22(1), 57-72.
  • Liu, S., & Kunnan, A. J. (2016). Investigating the application of automated writing evaluation to Chinese undergraduate English majors: A case study of WriteToLearn. CALICO Journal, 33(1), 71-91. https://doi.org/10.1558/cj.v33i1.26380
  • McNeill, P., & Chapman, S. (2005). Research Methods. London: Taylor & Francis Ltd.
  • O’Neil, H. F., & Schacter, J. (1997). Test specifications for problem-solving assessment. CSE Technical Report 463. Los Angeles: Center for the Study of Evaluation.
  • Rudner, L., & Liang, T. (2002). Automated essay scoring using Bayes’ Theorem. The Journal of Technology, Learning, and Assessment (J.T.L.A), 1(2), 1-22.
  • Shermis, M., & Burstein, J. (2003). Automated Essay Scoring: A Cross Disciplinary Perspective. Mahwah, NJ: Lawrence Erlbaum Associates.
  • Stevenson, M., & Phakiti, A. (2014). The effects of computer-generated feedback on the quality of writing. Assessing Writing, 19(1), 51-65.
  • Truscott, J. (1999). The case for grammar correction in L2 writing classes: A response to Ferris. Journal of Second Language Writing, 8(2), 111-122.
  • Tuzi, F. (2004). The impact of e-feedback on the revisions of L2 writers in an academic writing course. Computers and Composition, 21(2), 217-235.
  • Ussery, R. (2007). Criterion Online Writing Evaluation Case Study: Improving Writing Skills across the Curriculum. Retrieved 11 August 2009 from http://www.ets.org/Media/Products/Criterion/pdf/4216_NCarol3.pdf
  • Warschauer, M., & Ware, P. (2008). Learning, change, and power. In J. Coiro, M. Knobel, C. Lankshear, & D. J. Leu (Eds.), Handbook of Research on New Literacies (pp. 215-239). Mahwah, NJ: Lawrence Erlbaum Associates.

Document Type

Publication order reference

Identifiers

YADDA identifier

bwmeta1.element.desklight-aefc46cb-f68f-412f-aee0-82e425bfda4d