Lessons Learned from a Large Course Redesign in the Social Sciences: Can technology be used to improve efficiency in the use of departmental resources, maintain quality and rigor in the course material, and achieve consistent assessment across graders?

Concurrent Session 1
Streamed Session

Brief Abstract

Using data gathered from the same course offering over a three-year period, this study provides insight into: the use of rubrics and training techniques to achieve consistent assessment across graders; a comparison of online versus face-to-face offerings and how technology can enhance or detract from quality and rigor; and methods and tools that faculty members can use to effectively manage a large online course.

Extended Abstract

Approximately three years ago, my colleagues and I redesigned a large section of a traditional “face-to-face” (FTF) political science class in order to offer it online. Confronted with increasingly limited faculty resources yet growing undergraduate demand for this particular class (PS 305 Law and Justice in the American Political Process), we began studying how the course could be offered online, using available technology, to a large number of students with one faculty member assisted by multiple graduate students. Our research therefore examines the reliability and validity of using graduate students to grade distance education course assignments by comparing graduate student graders’ assessments with core faculty assessments of the same students’ work. Our research also examines potential qualitative differences between delivering the material in the online and the face-to-face formats. In response to criticism that online courses may lack the rigor or quality of face-to-face courses, we ensured that the online course mirrored the face-to-face course while also reflecting lessons from online pedagogy. The two course formats differ in some respects, but not in content, course learning objectives, or assessments. For example, in the redesign we incorporated group projects in both the face-to-face and online sections to enhance collaboration, and Google Presentations and Moodle forums were used in both sections to enhance accountability for group participation.

After delivering the online course as a pilot in Spring 2014, we offered the face-to-face redesign in Fall 2014 and the full online redesign in Spring 2015. Using detailed rubrics to guide graders in their assessment of short essays and identifications, we found that calibrating grades across graders is possible, although the variation in that first round of grading was admittedly sub-optimal. Building on that initial promise, we found in subsequent semesters that grading could be effectively standardized even as we expanded the types of assignments being graded. The improvement came primarily from increasing the rigor of the grading rubrics and recalibrating after an initial assessment of a subset of each assignment. Furthermore, the increased reliability and validity of grading held even as we varied the research design from an across-subjects design (what was the average grade for assignments graded independently by a graduate student versus a core faculty member?) to a within-subjects design (did graduate students and faculty arrive at the same grades for the same student assignments?). These consistent results are valuable because they suggest our methods for promoting grading reliability and validity were not artifacts of any particular research methodology. Presently, we are offering the second iteration of the face-to-face redesign with the same graders as the prior semester; the same graduate students are tasked with grading the same assignments given to students in both learning environments. Although we cannot complete our analysis until the end of the current semester, preliminary evaluation indicates that assessment is consistent across delivery formats.
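The distinction between the two research designs described above can be illustrated with a minimal sketch. The grade values and variable names below are invented for illustration only; they are not the study's data or its actual analysis, which the abstract does not detail.

```python
def mean(xs):
    """Arithmetic mean of a list of grades."""
    return sum(xs) / len(xs)

# Across-subjects design: a graduate student and a core faculty member
# each grade a *different* pool of assignments; we compare average grades.
grad_grades_pool_a = [85, 78, 92, 88, 74]     # graded by a graduate student
faculty_grades_pool_b = [84, 80, 90, 85, 76]  # graded by a faculty member

across_gap = mean(grad_grades_pool_a) - mean(faculty_grades_pool_b)

# Within-subjects design: both graders score the *same* assignments;
# we compare their grades pairwise, one submission at a time.
grad_same = [85, 78, 92, 88, 74]     # graduate student's grades
faculty_same = [83, 80, 91, 86, 75]  # faculty grades for the same work

within_mean_abs_diff = mean(
    [abs(g - f) for g, f in zip(grad_same, faculty_same)]
)
```

The across-subjects gap only tells us whether the two graders are similarly lenient on average, while the within-subjects paired difference shows whether they agree on individual submissions, which is why consistent results under both designs are a stronger indication that the calibration methods, rather than the methodology, drove the agreement.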

We believe our results provide useful insight into: (1) the use of rubrics and training techniques to achieve consistent assessment across graders and teaching assistants; (2) the use of technology to foster collaboration in both the online and face-to-face learning environments; (3) how technology may enhance or detract from quality and rigor; and (4) the development of methods and tools that faculty members can use to effectively manage a large online social science course. Most importantly, our results indicate that, with proper training, graduate students can grade many assignments for online courses on par with grading traditionally performed by core faculty, thus increasing a department's ability to offer high-demand courses in non-traditional formats.