WRITE CENTER

Writing Assessment


Alves, R. A., & Limpo, T. (2015). Progress in written language bursts, pauses, transcription, and written composition across schooling. Scientific Studies of Reading, 19(5), 374-391.

Andrade, H. G., & Boulay, B. A. (2003). Role of rubric-referenced self-assessment in learning to write. The Journal of Educational Research, 97(1), 21-30.

Andrade, H. L., Du, Y., & Mycek, K. (2010). Rubric‐referenced self‐assessment and middle school students’ writing. Assessment in Education: Principles, Policy & Practice, 17(2), 199-214.

Applebee, A. N. (2011). Issues in large-scale writing assessment. Journal of Writing Assessment, 3(2), 81-98.

Bashir, A. (2020). The impact of formative assessment in developing writing skill. Educational Research and Innovation, 3(2).

Behizadeh, N., & Engelhard, G., Jr. (2011). Historical view of the influences of measurement and writing theories on the practice of writing assessment in the United States. Assessing Writing, 16(3), 189-211.

Bennett, R. E., Deane, P., & van Rijn, P. W. (2016). From cognitive-domain theory to assessment practice. Educational Psychologist, 51(1), 82-107.

Brown, G. T., Glasswell, K., & Harland, D. (2004). Accuracy in the scoring of writing: Studies of reliability and validity using a New Zealand writing assessment system. Assessing Writing, 9(2), 105-121.

Bulté, B., & Housen, A. (2014). Conceptualizing and measuring short-term changes in L2 writing complexity. Journal of Second Language Writing, 26, 42-65.

Chen, V., Olson, C. B., & Chung, H. Q. (2020). Understanding proficiency: Analyzing the characteristics of secondary students’ on-demand analytical essay writing. The Journal of Writing Assessment, 13(1).
  • This study investigated the characteristics of not-pass, adequate-pass, and strong-pass text-based analytical essays written by middle and high school students. Results revealed that the use of relevant summary was an important difference between not-pass and adequate-pass essays, with significantly more adequate-pass essays using summary in a purposeful rather than general way.

Conijn, R., Martinez-Maldonado, R., Knight, S., Buckingham Shum, S., Van Waes, L., & van Zaanen, M. (2020). How to provide automated feedback on the writing process? A participatory approach to design writing analytics tools. Computer Assisted Language Learning, 1-31.

Crossley, S. A., & McNamara, D. S. (2014). Does writing development equal writing quality? A computational investigation of syntactic complexity in L2 learners. Journal of Second Language Writing, 26, 66-79.

Crossley, S. A., Roscoe, R., & McNamara, D. S. (2014). What is successful writing? An investigation into the multiple ways writers can write successful essays. Written Communication, 31(2), 184-214.

Crossley, S. A., Weston, J. L., McLain Sullivan, S. T., & McNamara, D. S. (2011). The development of writing proficiency as a function of grade level: A linguistic analysis. Written Communication, 28(3), 282-311.

Crusan, D., Plakans, L., & Gebril, A. (2016). Writing assessment literacy: Surveying second language teachers’ knowledge, beliefs, and practices. Assessing Writing, 28, 43-56.

Deane, P. (2013). On the relation between automated essay scoring and modern views of the writing construct. Assessing Writing, 18(1), 7-24.

Deane, P., Song, Y., van Rijn, P., O’Reilly, T., Fowles, M., Bennett, R., ... & Zhang, M. (2019). The case for scenario-based assessment of written argumentation. Reading and Writing, 32(6), 1575-1606.

Eckes, T. (2012). Operational rater types in writing assessment: Linking rater cognition to rater behavior. Language Assessment Quarterly, 9(3), 270-292.

Espin, C., Wallace, T., Campbell, H., Lembke, E. S., Long, J. D., & Ticha, R. (2008). Curriculum-based measurement in writing: Predicting the success of high-school students on state standards tests. Exceptional Children, 74(2), 174-193.

Fathi, J., Afzali, M., & Parsa, K. (2021). Self-assessment and peer-assessment in EFL context: An investigation of writing performance and writing self-efficacy. Critical Literary Studies, 3(1), 211-232.

Ferretti, R. P., & Graham, S. (2019). Argumentative writing: Theory, assessment, and instruction. Reading and Writing, 32(6), 1345. https://doi.org/10.1007/s11145-019-09950-x
  • Despite the early emergence of oral argumentation, written argumentation is slow to develop, insensitive to alternative perspectives, and generally of poor quality. These findings are unsettling because high quality argumentative writing is expected throughout the curriculum and needed in an increasingly competitive workplace that requires advanced communication skills. In this introduction, we provide background about the theoretical perspectives that inform the papers included in this special issue and highlight their contributions to the extant literature about argumentative writing.

Ferris, D., & Lombardi, A. (n.d.). Collaborative placement of multilingual writers: Combining formal assessment and self-evaluation.

Fisher, D., Frey, N., Bustamante, V., & Hattie, J. (2020). The assessment playbook for distance and blended learning: Measuring student learning in any setting. Corwin Press.

Fisher, D., Frey, N., & Pumpian, I. (2011). No penalties for practice. Educational Leadership, 69(3), 46-51.
​
Fitzgerald, J., Olson, C. B., Garcia, S. G., & Scarcella, R. C. (2014). Assessing bilingual students' writing.

Graesser, A. C., McNamara, D. S., Louwerse, M. M., & Cai, Z. (2004). Coh-Metrix: Analysis of text on cohesion and language. Behavior Research Methods, Instruments, & Computers, 36(2), 193-202.

Graham, S., Collins, A. A., & Rigby-Wills, H. (2017). Writing characteristics of students with learning disabilities and typically achieving peers: A meta-analysis. Exceptional Children, 83(2), 199-218.

Graham, S., Hebert, M., & Harris, K. R. (2015). Formative assessment and writing: A meta-analysis. The Elementary School Journal, 115(4), 523-547.

Ketter, J., & Pool, J. (2001). Exploring the impact of a high-stakes direct writing assessment in two high school classrooms. Research in the Teaching of English, 344-393.

Kim, Y. S. G., & Petscher, Y. (2020). Influences of individual, text, and assessment factors on text/discourse comprehension in oral language (listening comprehension). Annals of Dyslexia, 1-20.

Kim, Y. S. G., Petscher, Y., Uccelli, P., & Kelcey, B. (2019). Academic language and listening comprehension—Two sides of the same coin? An empirical examination of their dimensionality, relations to reading comprehension, and assessment modality. Journal of Educational Psychology.

Kim, Y. S. G., Schatschneider, C., Wanzek, J., Gatlin, B., & Al Otaiba, S. (2017). Writing evaluation: rater and task effects on the reliability of writing scores for children in Grades 3 and 4. Reading and Writing, 30(6), 1287-1310.

Knoch, U. (2011). Rating scales for diagnostic assessment of writing: What should they look like and where should the criteria come from?. Assessing Writing, 16(2), 81-96.

Krishnan, J., Black, R., & Olson, C. B. (2020). The power of context: Exploring teachers’ formative assessment for online collaborative writing. Reading and Writing Quarterly. https://doi.org/10.1080/10573569.2020.1764888
  • In this multiple-case study that took place in two states, we investigate (1) the common and unique contextual factors that shape ELA teachers’ formative assessment beliefs and practice, and (2) the challenges they face when engaging in ongoing assessment while students write together online in their classrooms.
  • This work illustrates the unique contextual factors that shape teachers’ beliefs about and engagement with formative assessment, specifically for online collaborative writing.

Kulprasit, W. (2021). Assessment for learning (AFL): Its role in L2 writing contexts. The New English Teacher, 15(1), 1-1.
​
Lamb, J. (2018). To boldly go: Feedback as digital, multimodal dialogue. Multimodal Technologies and Interaction, 2(3), 49.

Leitão, S. (2003). Evaluating and selecting counterarguments: Studies of children's rhetorical awareness. Written Communication, 20(3), 269-306.

Liu, F., & Stapleton, P. (2014). Counterargumentation and the cultivation of critical thinking in argumentative writing: Investigating washback from a high-stakes test. System, 45, 117-128.

Liu, M., Shum, S. B., & Kitto, K. (2021). Combining factor analysis with writing analytics for the formative assessment of written reflection. Computers in Human Behavior, 106733.

MacArthur, C. A., Jennings, A., & Philippakos, Z. A. (2019). Which linguistic features predict quality of argumentative writing for college basic writers, and how do those features change with instruction?. Reading and Writing, 32(6), 1553-1574.

McMaster, K., & Espin, C. (2007). Technical features of curriculum-based measurement in writing: A literature review. The Journal of Special Education, 41(2), 68-84.

Moradi Khazaee, Z., Dowlatabadi, H. R., Amerian, M., & Fathi, J. (2021). The effect of flipping a foreign language writing course on writing performance and writing motivation. Journal of Teaching Language Skills.
​
Nazzal, J. S., Olson, C. B., & Chung, H. Q. (n.d.). Differences in academic writing across four levels of community college composition courses. Teaching English in the Two-Year College, 47, 263-296.

Ong, J., & Zhang, L. J. (2010). Effects of task complexity on the fluency and lexical complexity in EFL students’ argumentative writing. Journal of Second Language Writing, 19(4), 218-233.

Palermo, C., & Thomson, M. M. (2018). Teacher implementation of self-regulated strategy development with an automated writing evaluation system: Effects on the argumentative writing performance of middle school students. Contemporary Educational Psychology, 54, 255-270.

Parr, J. M., & Timperley, H. S. (2010). Feedback to writing, assessment for teaching and learning and student progress. Assessing Writing, 15(2), 68-85.

Perin, D., & Lauterbach, M. (2018). Assessing text-based writing of low-skilled college students. International Journal of Artificial Intelligence in Education, 28(1), 56-78.

Plakans, L. (2009). Discourse synthesis in integrated second language writing assessment. Language Testing, 26(4), 561-587.

Plakans, L., & Gebril, A. (2013). Using multiple texts in an integrated writing assessment: Source text use as a predictor of score. Journal of Second Language Writing, 22(3), 217-230.

Rezaei, A. R., & Lovorn, M. (2010). Reliability and validity of rubrics for assessment through writing. Assessing Writing, 15(1), 18-39.

Roscoe, R. D., Wilson, J., Johnson, A. C., & Mayra, C. R. (2017). Presentation, expectations, and experience: Sources of student perceptions of automated writing evaluation. Computers in Human Behavior, 70, 207-221.

Saliu-Abdulahi, D., & Hellekjær, G. O. (2020). Upper secondary school students’ perceptions of and experiences with feedback in English writing instruction. Acta Didactica Norden, 14(3), 35 pp.

Schaefer, E. (2008). Rater bias patterns in an EFL writing assessment. Language Testing, 25(4), 465-493.

Stevenson, M., & Phakiti, A. (2014). The effects of computer-generated feedback on the quality of writing. Assessing Writing, 19, 51-65.

Tate, T., & Warschauer, M. (2019). Keypresses and mouse clicks: Analysis of the first national computer-based writing assessment. Technology, Knowledge, and Learning. https://doi.org/10.1007/s10758-019-09412-x
  • To better understand students’ digital writing skills, we take advantage of the information provided by computer-based assessments—keyboard and mouse activity data. We examine the relationship between students’ use of the keyboard and mouse during the assessment and students’ writing achievement. 
  • We found that the number of keypresses had a distinct and direct effect on writing achievement scores, controlling for word count. We also identified several different patterns of keyboard and mouse activity on the computer-based NAEP assessment.

Tate, T., Warschauer, M., & Abedi, J. (2016). The effects of prior computer use on computer-based writing: The 2011 NAEP writing assessment. Computers & Education, 101, 115-131. https://doi.org/10.1016/j.compedu.2016.06.001
  • We examine the relationship between reported prior use of computers and students’ achievement on the first national computer-based writing assessment in the United States, the 2011 National Assessment of Educational Progress (NAEP) assessment.
  • Using data from over 24,100 eighth grade students, we found that prior use of computers for school-related writing had a direct effect on writing achievement scores on the computer-based NAEP assessment. A one standard deviation increase in prior use led to 0.14 and 0.16 standard deviation increases in mean and scaled writing achievement scores, respectively, with demographic controls and jackknife weighting in our SEM analysis. We also examined earlier NAEP assessments and found that prior computer use did not positively affect the earlier pen-and-paper writing assessments.

Wardle, E., & Roozen, K. (2012). Addressing the complexity of writing development: Toward an ecological model of assessment. Assessing Writing, 17(2), 106-119.

Warschauer, M., & Grimes, D. (2008). Automated writing assessment in the classroom. Pedagogies: An International Journal, 3(1), 22-36.

Weigle, S. C. (2007). Teaching writing teachers about assessment. Journal of Second Language Writing, 16(3), 194-209.
​
Wen, M. L., & Tsai, C. C. (2006). University students’ perceptions of and attitudes toward (online) peer assessment. Higher Education, 51(1), 27-44.

West-Puckett, S. (2016). Making classroom writing assessment more visible, equitable, and portable through digital badging. College English, 79(2), 127-151.

Wilson, J., Chen, D., Sandbank, M. P., & Hebert, M. (2019). Generalizability of automated scores of writing quality in Grades 3–5. Journal of Educational Psychology, 111(4), 619.

Wilson, J., Huang, Y., Palermo, C., Beard, G., & MacArthur, C. A. (2021). Automated feedback and automated scoring in the elementary grades: Usage, attitudes, and associations with writing outcomes in a districtwide implementation of MI Write. International Journal of Artificial Intelligence in Education, 1-43.
​
Wilson, J., Roscoe, R., & Ahmed, Y. (2017). Automated formative writing assessment using a levels of language framework. Assessing Writing, 34, 16-36.

Yeh, S. S. (1998). Validation of a scheme for assessing argumentative writing of middle school students. Assessing Writing, 5(1), 123-150.

WRITE Center: Writing Research to Improve Teaching and Evaluation

The research reported here was supported by the Institute of Education Sciences, U.S. Department of Education, through Grant R305C190007 to University of California, Irvine. The opinions expressed are those of the authors and do not represent views of the Institute or the U.S. Department of Education.
© COPYRIGHT 2019. ALL RIGHTS RESERVED.
Photo used under Creative Commons from Yves Sorge