One Size Does Not Always Fit All: How to Tell If Your Rubric Works
Concurrent Session 8
Brief Abstract
This session will open a discussion on whether rubrics are sufficient to fairly grade multiple types of assignments, including the types of rubrics used for assessing and evaluating learners’ performance in online courses. Through examples, the session will focus on the three rudiments of rubrics: evaluative criteria, quality definitions, and scoring strategies.
Presenters


Extended Abstract
Upon successful completion of the session, participants will be able to:
- Evaluate whether a single rubric can be applied appropriately to multiple assignments;
- Determine the type of rubric that works best for a specific assignment;
- Discuss pros and cons of rubric use in the teaching and learning context;
- Devise a plan to embed discussion board assessment criteria for the assignment during the instructional design process.
Rubrics have emerged as important tools for assessing and evaluating student performance, particularly in online courses. Rubrics provide a task description, a breakdown of component skills and knowledge, and descriptions of levels of performance on a work or product quality continuum ranging from outstanding to unsatisfactory (Stevens & Levi, 2012). Establishing clear criteria and standards in a grading rubric can eliminate some of the common difficulties encountered during the performance assessment process. However, this indispensable tool may not be sufficient to grade discussion board messages or assignments fairly.
For many years, we have been proponents of using rubrics to assess online discussion board activities and assignments, but as we utilized various rubrics, we detected gaps that raised doubts about rubrics, especially for discussion board messages. For example, in a course with weekly assignments posted in discussion format (65% grading weight), using a generic/static rubric leads to superficial assessment and becomes too prescriptive. The weekly discussion board assignments do not share uniform performance descriptors because they require different types of tasks. The nature of the assignments challenges the instructional usefulness of a static discussion board rubric and its breakdown of skills and knowledge (Tierney & Simon, 2004). A more appropriate approach is to design a distinct set of scoring guidelines for each discussion board assignment that articulates the fundamental criteria for each learning outcome (Stevens & Levi, 2012).
On the other hand, the same course makes effective use of two different rubrics for the final project and the associated final presentation. The separate rubrics include breakdowns of component skills and knowledge spanning clearly defined levels rated across a continuum (Stevens & Levi, 2012), which provides an opportunity for more detailed review of assignments. Additionally, the assignment-specific rubrics support some degree of interrater reliability in peer and self-assessment.
This express workshop opens a discussion on the types of rubrics and their use for assessing and evaluating learners’ performance in online courses. The session will focus on the three rudiments of rubrics: evaluative criteria, quality definitions, and a scoring strategy (Popham, 1997; Popham, 2010). Using example rubrics from online courses and a review of the most recent literature on rubric design and usage, we will introduce the challenge and then engage the audience in a discussion of the following questions:
- How will we know that rubrics can and do serve a real purpose?
- Have you found yourself reconsidering the scale of the criteria when applying a rubric to a specific assignment?
- How can we ensure a rubric is working? How do we find out whether it is reliable and valid?
- How can we select or repurpose a rubric based on the types of assignments?
- How accurate or useful are rubric scores and ratings for evaluating performance tasks?
- How can we capture each learner’s personal connection to learning and assess learners’ work from their unique perspectives and expertise?
- How can we design discussion board assessment criteria that will not inhibit learners’ creativity (checking the box for each of the expectations presented in rubric categories and criteria may result in less creativity)?
Finally, this session will capture participants’ thoughts on the pros and cons of rubrics, along with their successes and challenges in using them.
Workshop Outline (45 minutes)
Introductions and Overview (5 minutes)
- Introductions and agenda review
Didactic Slide Presentation: Overview of Rubrics (15 minutes)
- What is a rubric?
- The three rudiments of rubrics (key parts of a rubric)
- Types of rubrics (analytic and holistic rubrics – examples will be provided)
- Using rubrics for the assessment of learning outcomes
- Advantages, flaws, and limitations of rubrics
- Situations in which rubrics are not effective (real examples: discussion board assignments)
Open Discussion (20 minutes)
- How will we know that rubrics can and do serve a real purpose?
- Have you found yourself reconsidering the scale of the criteria when applying a rubric to a specific assignment?
- How can we ensure a rubric is working? How do we find out whether it is reliable and valid?
- How can we select or repurpose a rubric based on the types of assignments?
- How accurate or useful are rubric scores and ratings for evaluating performance tasks?
- How can we capture each learner’s personal connection to learning and assess learners’ work from their unique perspectives and expertise? How can we design discussion board assessment criteria that will not inhibit learners’ creativity (checking the box for each expectation presented in rubric categories and criteria may result in less creativity)?
We will capture the key points of the discussion and make them available online with additional resources that participants can reference as they review their own assignment and discussion board rubrics for maximum effectiveness.
Wrap up (5 minutes)
- Take home message
- Reminder to complete the workshop evaluation
References
Popham, W. J. (1997). What’s wrong – and what’s right – with rubrics. Educational Leadership, 55(2), 72–75.
Popham, W. J. (2010). Everything school leaders need to know about assessment. Thousand Oaks, CA: Corwin Press.
Stevens, D., & Levi, A. J. (2012). Introduction to rubrics: An assessment tool to save grading time, convey effective feedback, and promote student learning (2nd ed.). Sterling, VA: Stylus Publishing, LLC.
Tierney, R., & Simon, M. (2004). What’s still wrong with rubrics: Focusing on the consistency of performance criteria across scale levels. Practical Assessment, Research & Evaluation, 9(2). Retrieved June 5, 2016, from http://PAREonline.net/getvn.asp?v=9&n=2