One Size Does Not Always Fit All: How to Tell If Your Rubric Works

Concurrent Session 8

Brief Abstract

This session will open a discussion about whether rubrics are sufficient to grade multiple assignments fairly, including the types of rubrics used for assessing and evaluating learners' performance in online courses. Through examples, the session will focus on the three rudiments of rubrics: evaluative criteria, quality definitions, and scoring strategies.

Presenters

Kadriye O. Lewis, EdD, is the Director of Evaluation and Program Development in the Department of Graduate Medical Education at Children's Mercy Hospital (CMH). She is also Professor of Pediatrics at the University of Missouri-Kansas City School of Medicine (UMKC SOM). Prior to coming to Children's Mercy, Dr. Lewis worked at Cincinnati Children's Hospital Medical Center (CCHMC) for more than 13 years, where she played a major role in the development of the Online Master's Degree in Education Program for Healthcare Professionals. This program has developed a national and international reputation for excellence and has played an important role in training future leaders in medical education. Dr. Lewis served as an education consultant to the medical center's faculty development program. She applied her educational background and academic skills to health literacy by establishing a Health Literacy Committee at CCHMC in 2007, chairing that committee successfully for three years. Along with her many scholarly accomplishments, she established the e-Learning SIG in Medical Education for the Academic Pediatrics Association (APA) in 2008 and served as its chairperson for six years. Dr. Lewis is active in medical education research, and her scholarly interests include performance-based assessment and the construction of new assessment tools, as well as the improvement and validation of existing tools and methods. Given her extensive experience in e-learning and web-based technologies, she also has a particular interest in instructional design and the implementation of innovative technologies for curriculum delivery at many levels of healthcare education. Currently, she is involved in an NIH-funded grant project on genomics and various curriculum development projects for the graduate medical education programs at CMH, and she teaches an online/blended course in the Master of Health Professions Education program at UMKC SOM (http://med.umkc.edu/mhpe/). Dr. Lewis presents extensively at professional meetings and conferences and has been an invited speaker at many national and international universities.
With over 17 years of experience developing adult-centered online programs, Jennifer knows what drives student success: learning experiences that connect academic content to learners' own lives in realistic, relevant, and relatable ways. Jennifer has managed e-learning initiatives and operations for public, private, and for-profit institutions of higher education. She has led the development of solutions focused on scaling instructional design processes using Lean and project management principles while building infrastructures to keep pace with the evolving state of online higher education. Jennifer earned a BA in Corporate Communication from Marietta College, an MAEd in Adult Education and Distance Learning from the University of Phoenix, and a PhD in Educational Leadership from the University of Dayton. She is also a graduate of the OLC's Institute for Emerging Leadership in Online Education.

Extended Abstract

Upon successful completion of the session, participants will be able to:

  • Evaluate whether a single rubric can be applied appropriately across multiple assignments;
  • Determine the type of rubric that works best for a specific assignment;
  • Discuss pros and cons of rubric use in the teaching and learning context;
  • Devise a plan to embed assessment criteria into each discussion board assignment during the instructional design process.

Rubrics have emerged as important tools for assessing and evaluating student performance, particularly in online courses. Rubrics provide a task description, a breakdown of component skills and knowledge, and descriptions of levels of performance on a work or product quality continuum ranging from outstanding to unsatisfactory (Stevens & Levi, 2012). Establishing clear criteria and standards in a grading rubric can eliminate some of the common difficulties encountered during the performance assessment process. However, this indispensable tool may not be sufficient to grade discussion board messages or assignments fairly.
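
To make this structure concrete, here is a minimal sketch in Python (the criterion names, weights, and quality definitions are hypothetical illustrations, not the rubrics used in the session) modeling a rubric as evaluative criteria with quality definitions per level and a weighted scoring strategy:

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    """One evaluative criterion with quality definitions per level."""
    name: str
    weight: float            # share of the assignment grade
    levels: dict[int, str]   # score -> quality definition

# Hypothetical rubric for a discussion board post
discussion_rubric = [
    Criterion("Critical thinking", 0.6, {
        3: "Analyzes and extends peers' ideas with evidence",
        2: "Summarizes peers' ideas accurately",
        1: "Restates the prompt with little analysis",
    }),
    Criterion("Evidence and citation", 0.4, {
        3: "Integrates cited sources throughout",
        2: "Cites at least one relevant source",
        1: "Offers no supporting evidence",
    }),
]

def score(rubric: list[Criterion], ratings: dict[str, int]) -> float:
    """Scoring strategy: weighted average of per-criterion ratings,
    each normalized to that criterion's top level."""
    return sum(c.weight * ratings[c.name] / max(c.levels) for c in rubric)

print(score(discussion_rubric, {"Critical thinking": 3,
                                "Evidence and citation": 2}))  # 0.866...
```

A separate criteria list like this could be defined per assignment, which anticipates the per-assignment scoring guidelines discussed below.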

For many years, we have been proponents of using rubrics to assess online discussion board activities and assignments, but as we utilized various rubrics, we detected gaps that raised doubts about rubrics, especially for discussion board messages. For example, in a course where weekly assignments are posted in discussion format (65% of the course grade), using a generic, static rubric leads to superficial assessment and becomes too prescriptive. The weekly discussion board assignments do not share uniform performance descriptors because the tasks to be completed differ in kind. The nature of the assignments challenges the instructional usefulness of a static discussion board rubric (Tierney & Simon, 2004) and its breakdown of skills and knowledge. A more appropriate approach is to design a set of distinct scoring guidelines for each discussion board assignment that articulates fundamental criteria for each learning outcome (Stevens & Levi, 2012).

On the other hand, the same course makes effective use of two different rubrics for the final project and the associated final presentation. These separate rubrics break down component skills and knowledge across clearly defined levels rated on a continuum (Stevens & Levi, 2012), which allows a more detailed review of assignments. Additionally, the assignment-specific rubrics support a degree of interrater reliability in peer and self-assessment.
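
That reliability claim is itself checkable. As a rough illustration (the ratings below are invented, not data from the course), one could compute Cohen's kappa between peer and self-assessment ratings of the same projects; values near 1 indicate agreement well beyond chance:

```python
from collections import Counter

def cohens_kappa(rater_a: list[int], rater_b: list[int]) -> float:
    """Cohen's kappa = (p_o - p_e) / (1 - p_e): observed agreement p_o,
    corrected for the agreement p_e expected by chance alone."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[k] * counts_b[k] for k in counts_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Invented ratings of ten final projects on a 1-3 rubric scale
peer_ratings = [3, 2, 3, 1, 2, 3, 2, 2, 3, 1]
self_ratings = [3, 2, 2, 1, 2, 3, 2, 3, 3, 1]
print(round(cohens_kappa(peer_ratings, self_ratings), 2))  # 0.69
```

A real reliability check would need a larger sample and an agreed rating protocol; this only shows the calculation.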

This express workshop starts a discussion on the types of rubrics and the use of various rubrics for assessing and evaluating learners' performance in online courses. The session will focus on the three rudiments of rubrics: evaluative criteria, quality definitions, and a scoring strategy (Popham, 1997; Popham, 2010). Using example rubrics from online courses and a review of the most recent literature on rubric design and usage, we will introduce the challenge and then engage the audience in a discussion of the following questions:

  • How will we know that rubrics can and do serve a real purpose?
  • Have you noticed yourself changing your mind about the scale of the criteria when you use a rubric for a specific assignment?
  • How can we ensure a rubric is working? How do we find out whether it is reliable and valid?
  • How can we select or repurpose a rubric based on the type of assignment?
  • How accurate or useful are rubric scores or ratings for judging performance tasks?
  • How can we capture each learner's personal contribution to learning and assess learners' work from their unique perspectives and expertise?
  • How can we design discussion board assessment criteria that will not inhibit learners' creativity (checking the box for each expectation presented in rubric categories and criteria may result in less creativity)?

Finally, this session will capture participants' thoughts on the pros and cons of rubrics, along with their successes and challenges in using them.

Workshop Outline (45 minutes)

Introductions and Overview (5 minutes)

  • Introductions and agenda review

Didactic Slide Presentation: Overview of Rubrics (15 minutes)

  • What is a rubric?
  • The three rudiments of rubrics (key parts of a rubric)
  • Types of rubrics (analytic and holistic rubrics; examples will be provided)
  • Using rubrics for the assessment of learning outcomes
  • Advantages, flaws, and limitations of rubrics
  • Situations in which rubrics are not effective (real examples: discussion board assignments)

Open Discussion (20 minutes)

  • How will we know that rubrics can and do serve a real purpose?
  • Have you noticed yourself changing your mind about the scale of the criteria when you use a rubric for a specific assignment?
  • How can we ensure a rubric is working? How do we find out whether it is reliable and valid?
  • How can we select or repurpose a rubric based on the type of assignment?
  • How accurate or useful are rubric scores or ratings for judging performance tasks?
  • How can we capture each learner's personal contribution to learning and assess learners' work from their unique perspectives and expertise?
  • How can we design discussion board assessment criteria that will not inhibit learners' creativity (checking the box for each expectation presented in rubric categories and criteria may result in less creativity)?

We will capture the key points of the discussion and make them available online, along with additional resources that participants can reference as they review their own assignment and discussion board rubrics for maximum effectiveness.

Wrap up (5 minutes)

  • Take home message
  • Reminder to complete the workshop evaluation

References

Popham, W. J. (1997). What's wrong – and what's right – with rubrics. Educational Leadership, 55(2), 72–75.

Popham, W. J. (2010). Everything school leaders need to know about assessment. Thousand Oaks, CA: Corwin Press.

Stevens, D., & Levi, A. J. (2012). Introduction to rubrics: An assessment tool to save grading time, convey effective feedback, and promote student learning (2nd ed.). Sterling, VA: Stylus Publishing, LLC.

Tierney, R., & Simon, M. (2004). What's still wrong with rubrics: Focusing on the consistency of performance criteria across scale levels. Practical Assessment, Research & Evaluation, 9(2). Retrieved June 5, 2016, from http://PAREonline.net/getvn.asp?v=9&n=2