Resolving Diminishing Engagement in Repeated Peer Review

Concurrent Session 1
Streamed Session

Brief Abstract

Our research has shown that when students are asked to perform repeated peer reviews over the course of a semester, their engagement drops significantly, especially among those who are initially highly engaged. In this session, we will present this phenomenon and then brainstorm ways to address this concern.

Additional Authors

David Joyner is the Associate Director for Student Experience in Georgia Tech's College of Computing, overseeing the administration of the college's online Master of Science in Computer Science program as well as its new online undergraduate offerings. He has developed and teaches CS6460: Educational Technology, CS6750: Human-Computer Interaction, and CS1301: Introduction to Computing, all online.

Extended Abstract

Peer review is a common part of online learning environments. Not only has research shown that peer review is a valuable learning activity for both the reviewer and the reviewee, but it also allows massive open online courses and other resource-constrained online environments to scale: adding more students in need of reviews necessarily adds more reviewers. With this unique alignment between pedagogy and scalability, along with new technological advances allowing on-demand and self-paced peer review, the construct has become a mainstay in online courses.

However, in our experience using peer review in an online at-scale graduate program, we have heard a common refrain from students: they feel they are putting more into peer review than their classmates, and thus getting less out of it by comparison. To investigate whether this is a perceptual issue or an actual concern, we performed a study examining three semesters of two classes that each used repeated peer review. We wanted to see whether students' commitment to peer review waned over time. We discovered evidence that it does: over the course of the semester, students' engagement in assigned peer review tasks dropped tremendously. More troubling, it was students who were initially highly engaged in peer review (as assessed by review length and time spent on the review) who saw the most significant drop: by the end of the term, students who were initially highly invested were barely more engaged than those who started the semester disengaged.

Peer review is still a valuable learning activity, and it still plays a strong role in facilitating grading at scale. However, if the drop-off in engagement is this steep, then its value to reviewers and reviewees will erode over time. In this session, we'll present our specific data on this phenomenon, and then lead a brainstorming session to consider different ideas for addressing the issue. As part of this brainstorming session, we'll divide the audience into groups based on attendance; each group will be given a different set of constraints under which to brainstorm a solution. For example, in a MOOC, giving human feedback on peer reviews does not scale; in a for-credit course, manually grading peer reviews may be possible but may introduce other issues; and so on. Then, we'll reconvene to discuss the solutions.

This presentation is based on the paper "Eroding Investment in Repeated Peer Review: A Reaction to Unrequited Aid?" by David Joyner and Alex Duncan, from the 2020 Hawaii International Conference on Education.