Bridging the Gap in Internal Quality Matters Course Reviews: Interpreting Trends to Inform Practices

Concurrent Session 6
Streamed Session | Research

Brief Abstract

Many institutions employ the Quality Matters peer review process for online course evaluation. Although the review process itself is standardized, its implementation and results vary with institutional context. This study explores the relationship between scoring trends and the culture and practices at one public land-grant university.

Presenters

Nicole Schmidt is a Course Support Specialist at the Office of Digital Learning and a PhD candidate in the Second Language Teaching and Acquisition (SLAT) program, both at the University of Arizona. She has taught learners of English in Spain, the Netherlands, Japan, and the United States over the past decade. She currently lives in Tucson, Arizona, where she researches the use of digital technologies in the ESL university writing classroom.
Janet Smith is an Instructional Designer in the Office of Digital Learning at the University of Arizona, where she leads quality assurance initiatives. She manages a multitiered, collaborative quality assurance process to ensure that courses developed for UA Online are designed for student success and engagement. Janet works with partners across campus to integrate best practices around course design, copyright, UDL, and accessibility into the instructional design process, and she leads the Quality Matters program for the university. She received her bachelor’s degree in Elementary Education from the University of Arizona, her master’s degree in Educational Leadership in Higher Education from Northern Arizona University, and a graduate certificate in Educational Technology from Northern Arizona University. In her free time, Janet enjoys spending time with her family and friends, cooking, and practicing and teaching yoga.

Extended Abstract

At many higher education institutions, professional development for online instructors includes a routine course evaluation that examines how the course design supports effective teaching methods. Online course evaluation has become a benchmark for quality assurance in online learning, and Quality Matters™ (QM) is one of the few nationally recognized providers of research-based institutional support in the form of faculty-centered peer reviews. QM’s empirically developed rubrics encourage consistency and spark discussion of the underlying principles of course design and instructional practice for online learning (Baldwin, Chin, & Hsu, 2017). Although QM provides a structured process for conducting course reviews, each institution is ultimately responsible for implementing that process and acting on the results. Given this, the literature’s lack of attention to the role institutional context plays in scoring trends for QM course evaluations is notable.

The proposed session addresses this gap by exploring scoring trends for internal QM evaluations over the past five years at one public land-grant university in the southwestern United States and connecting those trends to local course evaluation and instructor support procedures. Data from internal QM reviews conducted from 2015 to 2019 were analyzed in this pilot study, and trends emerged in which standards were met and not met. These findings were triangulated with institutional records to chart the development of the review process and assess how internal factors shaped the outcomes of the course evaluations. Preliminary findings will be shared during the presentation, along with suggestions for implementing a review process and instructor support procedures that facilitate successful outcomes in internal QM-based reviews.
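As a rough illustration of the kind of trend analysis described above (a minimal sketch under assumed inputs, not the presenters’ actual pipeline), the following Python snippet assumes review data in a hypothetical file named qm_reviews.csv, with one row per standard per review and columns year, standard, and met, and tallies the share of reviews meeting each standard by year:

    # Hypothetical sketch of a per-standard scoring-trend tally; the file
    # name and column layout are illustrative assumptions, not the study's.
    import pandas as pd

    reviews = pd.read_csv("qm_reviews.csv")  # columns: year, standard, met

    # Proportion of reviews meeting each QM standard, by review year.
    met_rates = (
        reviews.groupby(["year", "standard"])["met"]
        .mean()
        .unstack("standard")
    )
    print(met_rates.round(2))

    # Standards met least often overall are candidates for targeted
    # instructor support and review-process refinement.
    print(met_rates.mean().sort_values().head(5))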

Research to date on the scoring of QM evaluations encompasses large-scale studies of multiple institutions (McMahon, 2016; Zimmerman, 2010, 2013), small-scale course comparisons within a single university department (Little, 2009), and comparisons between QM evaluations and student evaluations in online graduate programs (Kwon, 2017). However, most of these studies focus only on the standards that were met or not met, learner responses to particular course design features, or revisions based on course evaluation feedback. The influence of the institutional contexts in which the online courses were developed and evaluated is largely absent from these studies. By highlighting this relationship, the current study shows how even standardized practices vary in response to local ecologies.

The session will share findings and engage attendees by posing questions, answered via Slido, about online course evaluation practices and challenges at their home institutions. Small-group reflection will be structured around a “Now what?” question, challenging attendees to consider how our findings might apply to their own contexts and goals. A ten-minute share-out and Q&A session will follow, offering a chance to exchange opportunities and challenges related to online course evaluation and instructor support. Each attendee will leave the presentation with insight into the role of institutional context in the implementation of a standardized course review procedure, and will deepen their perspective on the issue by reflecting on their own institutional practices and how these compare with the presenters’ and other attendees’ stories.