Using Assessment to Improve Peer Review Feedback

Concurrent Session 5
Streamed Session | Research

Brief Abstract

As teachers, we rely on peer review to help students improve their writing. But how do we ensure that peer comments are actually helpful? This study provides detailed guidelines for making shorter comments (21-40 words) more productive.

Presenters

Rochelle (Shelley) Rodrigo is the Senior Director of the Writing Program; Associate Professor in the Rhetoric, Composition, and the Teaching of English (RCTE) program; and Associate Writing Specialist (Continuing Status) in the Department of English at the University of Arizona. She researches how ‘newer’ technologies better facilitate communicative interactions, specifically teaching and learning. In addition to co-authoring three editions of The Wadsworth/Cengage Guide to Research, Shelley co-edited Rhetorically Rethinking Usability (Hampton Press). Her scholarly work has appeared in Computers and Composition, C&C Online, Technical Communication Quarterly, Teaching English in the Two-Year College, EDUCAUSE Quarterly, Journal of Interactive Technology & Pedagogy, and Enculturation, as well as in various edited collections. In 2021 she was elected Vice President of the National Council of Teachers of English (a four-year term including the presidency) and won the Arizona Technology in Education Association’s Ruth Catalano Friend of Technology Innovation Award; in 2018 she became an Adobe Education Leader; in 2014 she won Old Dominion University’s annual Teaching with Technology Award; in 2012, the Digital Humanities High Powered Computing Fellowship; and in 2010 she became a Google Certified Teacher/Innovator.
Catrina Mitchum is a Lecturer in the Department of English at the University of Arizona. She earned her PhD in Composition/Rhetoric and Digital Studies from Old Dominion University. In 2018 she was collaboratively awarded the CCCC Research Initiative Grant. Her research interests are retention and course design in online writing classes. She has published scholarly work in The Journal of Teaching and Learning with Technology, MediaCommons, and Enculturation. She teaches first-year writing and professional and technical writing courses online.

Extended Abstract

Students need high-quality feedback to improve their writing (e.g., Reid, 2014; Vardi, 2008). Peer feedback is a scalable solution (Nicol et al., 2014).

To coach multiple peer feedback activities in accelerated online writing courses, instructors need a clear framework. First, instructors and students benefit from guidelines for assessing whether students are contributing too little to help their peers or themselves. Instructors and students also require a more robust set of models for recognizing when reviewers’ comments discuss criteria that lead to improved writing. Online technologies promise to capture all the texts generated during students’ peer review; the peril lies in making sense of that data.

In this study, we relied on research data analytics from Eli Review (elireview.com), a peer learning and revision app developed by writing professors at Michigan State University. Seven sections of ENGL101 and 24 sections of ENGL102 used Eli’s online platform to complete three projects each during 2019-2021 at a large, research-intensive state university in the Southwest. During the 7.5-week, fully online terms, students completed four formative feedback and revision activities per course.

We used quantitative and qualitative methods to analyze the 13,717 comments exchanged during peer learning. Our quantitative analysis describes peer norms in comments based on word count. Word count is a blunt measure that indicates how likely a comment is to describe a problem, evaluate it, and offer a suggestion (Hart-Davidson and Meeks, 2020). Prior research establishes that comments of 20 words or fewer tend to be praise or corrections, while comments of 41 words or more tend to have enough information to persuade writers to make global revisions. Comments of 21-40 words fall in the “messy middle.” This program-wide corpus of peer feedback has the following distribution by comment length (a minimal sketch of this binning follows the list):

  • 27% of comments were 20 words or fewer (likely praise or correction)
  • 39% of comments had 21-40 words (the “messy middle”)
  • 24% of comments had 41 or more words (likely long enough to persuade a writer to revise)
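
A minimal Python sketch of this binning (illustrative only, under our own assumptions; this is not Eli Review’s analytics pipeline, and the sample comments are invented):

  # Minimal sketch of the word-count binning described above.
  # Illustrative only: not Eli Review's analytics pipeline, and the
  # sample comments below are invented for demonstration.
  def bin_comment(comment: str) -> str:
      """Classify a peer review comment by its word count."""
      n_words = len(comment.split())
      if n_words <= 20:
          return "short (likely praise or correction)"
      if n_words <= 40:
          return "messy middle (21-40 words)"
      return "long (likely persuasive enough to prompt revision)"

  comments = [
      "Great thesis!",  # 2 words: praise
      "Your second paragraph introduces a strong claim about your audience "
      "but never supports it with evidence; consider adding the survey data "
      "you mention later so readers can evaluate it.",  # 29 words: messy middle
  ]
  for c in comments:
      print(f"{len(c.split()):>3} words -> {bin_comment(c)}")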

To better understand this “messy middle,” we conducted additional analyses: a quantitative description of comment-length norms across assignments and a qualitative analysis of over 4,000 student peer comments of 21-40 words. Our aim was fourfold:

  1. gain more confidence in the quality of comments of this length
  2. curate comment models that reflect student language
  3. reflect on how review task design influences comment length and modify assignments accordingly
  4. establish word count indicators for each assignment, based on program norms, that can guide instructors' interventions in future terms (see the sketch after this list)
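
For aim 4, a hypothetical Python sketch of deriving per-assignment word-count indicators (the assignment labels and counts are invented; this is not the study’s dataset or Eli Review’s export format):

  # Hypothetical sketch: per-assignment word-count indicators from norms.
  from collections import defaultdict
  from statistics import median

  # (assignment, comment word count) pairs; invented sample data
  records = [
      ("ENGL101 Project 1", 12), ("ENGL101 Project 1", 33),
      ("ENGL101 Project 1", 55), ("ENGL102 Project 2", 18),
      ("ENGL102 Project 2", 27), ("ENGL102 Project 2", 45),
  ]

  # Group word counts by assignment.
  by_assignment = defaultdict(list)
  for assignment, n_words in records:
      by_assignment[assignment].append(n_words)

  # Report the median and the share of comments in each bin, which an
  # instructor could compare against program-wide norms.
  for assignment, counts in by_assignment.items():
      short = sum(c <= 20 for c in counts) / len(counts)
      middle = sum(21 <= c <= 40 for c in counts) / len(counts)
      long_share = sum(c >= 41 for c in counts) / len(counts)
      print(f"{assignment}: median={median(counts)} words, "
            f"short={short:.0%}, middle={middle:.0%}, long={long_share:.0%}")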

Shorter peer review comments (21-40 words) may be useful for the reviewee; this study provides detailed guidelines for making them more productive.

Interactivity 

Besides providing a classic IMRAD-style presentation (Introduction, Methods, Results, and Discussion), we will also prompt attendees to interact with our information and data. Specifically, attendees will:

  1. Provide feedback on a document;
  2. Code their feedback using some of our coding schemas; and
  3. Outline peer review assignment prompts they might use in future assignments.

Takeaways 

By the end of the session, attendees will have access to:

  • A reference list about peer review;
  • Peer review assignment prompt guidelines; and
  • Outlines of peer review prompts for future classes.

References

Hart-Davidson, B., & Meeks, M. G. (2020, forthcoming). Feedback analytics for peer learning: Indicators of writing improvement in digital environments. In N. Elliot & D. Kelly-Riley (Eds.), Improving outcomes: Disciplinary writing, local assessment, and the aim of fairness. MLA.

Nicol, D., Thomson, A., & Breslin, C. (2014). Rethinking feedback practices in higher education: A peer review perspective. Assessment & Evaluation in Higher Education, 39(1), 102–122. http://doi.org/10.1080/02602938.2013.795518

Reid, E. S. (2014). Peer review for peer review’s sake: Resituating peer review pedagogy. In S. J. Corbett, M. LaFrance, & T. E. Decker (Eds.), Peer pressure, peer power: Theory and practice in peer review and response for the writing classroom (pp. 217–231). Fountainhead Press.

Vardi, I. (2008). The relationship between feedback and change in tertiary student writing in the disciplines. International Journal of Teaching and Learning in Higher Education, 20(3), 350–361.