Creating a Sustainable Model to Assess Online Program Quality in a Professional Graduate Program

Concurrent Session 1
Streamed Session


Brief Abstract

Educators strive to provide high-quality experiences to their students – ones that lead to positive outcomes. Balancing the demands associated with a dynamic educational landscape with stakeholder questions about value can be challenging. This paper describes an online course auditing process to assess quality in a graduate professional program.


Sharon Stoerger (Ph.D.) is the Assistant Dean of Instructional Support and Assessment in the School of Communication and Information (SC&I) at Rutgers University. She possesses a Master’s in Business Administration (M.B.A.), a Master’s in Information Science, and a Ph.D. in Information Science with a focus on pedagogical practices, emerging technology, and social informatics. Her research and teaching interests include gender and computerization, instructional technologies, management in information organizations, and entrepreneurship. Before becoming Assistant Dean, Sharon was the Director of the Information Technology and Informatics (ITI) program at SC&I. She was also a co-founder of the Women in ITI student organization.

Extended Abstract



Higher education is in a state of flux, and its landscape is rapidly changing. While faculty attention may be divided by the research and service requirements of their positions, teaching remains at the center of their efforts. At the same time, the workplace is evolving, and employers now expect new and different competencies of our students. Employers have been vocal in arguing that college graduates are not workplace-ready, and educators recognize the need to equip students with flexible and adaptable skills.

Further, the public’s perception of higher education, and its level of trust in these institutions, has become a problem. Obtaining a degree from a higher education institution in the United States can be an expensive investment, and as a result, the public has raised questions about the value and quality of the learning experience.

Organizations that have not traditionally participated in the education space have launched initiatives designed to change how we educate college-age and adult learners. Beyond the walls of the university, non-degree learning opportunities, such as certification programs, bootcamps, and apprenticeships have emerged. They are often touted as viable alternatives to formal education. 

Many institutions and professional programs are attempting to navigate this fluid environment and provide their students with meaningful and high-quality learning experiences. However, systematic efforts to assess the impact of these traditional and non-traditional offerings, including methods to improve teaching practices within those domains, have been lacking. This education session examines efforts conducted within an online graduate professional program to link these assessment activities to course and ultimately program improvement.

The Problem

Students can be a rich source of information about course and program offerings. In fact, feedback from students helped to launch this project. Their concerns ranged from inaccurate assignment due dates to conflicting policies to the lack of content organization in the learning management system (LMS). While this feedback was recorded, it was not captured systematically, nor was it producing clear insights that could lead to change. These data were leading us to make assumptions without any evidence.

In the past, student evaluations have been used to assess the quality of instruction and the achievement of learning outcomes. This is beginning to change as questions continue to surface about the reliability of the data obtained through student feedback alone. Many of our stakeholders – students, parents, campus administrators, and the public – use the results of our assessment practices to make decisions. These range from renewing instructor contracts to applying to particular schools to allocating funds to new initiatives. Obtaining multiple forms of data to gain understanding about the educational experience associated with our programs is imperative.

The program selected for this study prides itself on having a foundation that is built on integrity and student success. Students were pointing to areas suggesting that there might be problems; thus, we needed more data to determine where and how to make improvements. More specifically, we needed to create a sustainable and scalable process to audit the online courses in the program.

Developing an Online Course Audit Process

The term “quality” can take on a plethora of meanings. In a higher education context, particularly in recent times, quality has become equated with an equally vague term – student success. Assessments used to measure student success tend to concentrate on graduation rates and attainment levels, which are then treated as evidence of quality. Yet questions have been raised about the appropriateness of these commonly accepted metrics. To respond to these concerns, the goal of this project was two-fold:

  • Be proactive instead of reactive to course quality issues; and

  • Design learning experiences that lead to positive student outcomes.

Audit Tools

The building blocks of our auditing process drew on the general online standards articulated by organizations committed to advancing quality in online education. These included guides developed by Quality Matters (QM), the Online Learning Consortium (OLC), the Open SUNY Course Quality Review Rubric, and research led by Christiane Reilly at the University of Minnesota (i.e., Authentic Learning @UMN). After reviewing these resources, the challenge was to determine what met our definition of quality. What categories could reveal pain points to resolve in our online courses? What constitutes evidence for the program and its stakeholders?

We held several discussions about key syllabus content, course structure components, and relevant checklist categories. This work led to a rubric for evaluating the quality of the online courses. The first version of the rubric – the one used to pilot test the process – included six evaluation categories:

  • Course syllabus

  • Learning objectives

  • Assessment & measures

  • Course design & content

  • Course overview & introduction

  • Course technology

Each category included sub-category items to examine in closer detail during the review.
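The abstract does not describe the program's actual tooling, but the structure above – six categories, each holding checkable sub-items – can be sketched as a simple data model. The class names, sub-item wording, and scoring rule below are illustrative assumptions, not the program's rubric.

```python
# Hypothetical sketch of the six-category audit rubric as a data structure.
# Sub-item wording and the fraction-met scoring rule are illustrative;
# the actual sub-categories are not listed in the abstract.
from dataclasses import dataclass, field


@dataclass
class RubricItem:
    description: str
    met: bool = False   # did the course satisfy this sub-item?
    notes: str = ""     # reviewer comments


@dataclass
class RubricCategory:
    name: str
    items: list = field(default_factory=list)

    def score(self) -> float:
        """Fraction of sub-items met (0.0 to 1.0)."""
        if not self.items:
            return 0.0
        return sum(item.met for item in self.items) / len(self.items)


def build_rubric() -> list:
    """Create empty categories matching the pilot rubric's six areas."""
    names = [
        "Course syllabus",
        "Learning objectives",
        "Assessment & measures",
        "Course design & content",
        "Course overview & introduction",
        "Course technology",
    ]
    return [RubricCategory(name=n) for n in names]
```

A reviewer would populate each category with sub-items during an audit; low-scoring categories then point to where a course needs attention.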

Information Collection and Organization

Syllabi for courses in the online professional program are collected each semester. These are posted to a shared online repository that is accessible to faculty and staff at the school. Using the rubric and its evaluation categories as a guide, we conducted a pilot study using Spring 2019 data. We randomly selected eight online courses to test the usability of the rubric and the process. For our assessment purposes, the rubric highlighted areas for improvement within the selected courses. It also brought to light common elements that were absent across multiple courses and were identified as instructor training opportunities that could be launched in the next stages of this project.
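Two mechanical steps in the pilot – randomly selecting eight courses and spotting elements absent across multiple courses – could be sketched as follows. The function names and the data shapes are assumptions for illustration, not the program's actual process.

```python
# Illustrative sketch of the pilot workflow: draw a random sample of
# courses, then tally rubric elements missing across audited courses.
# Function names and inputs are hypothetical.
import random
from collections import Counter


def select_pilot_sample(course_ids, n=8, seed=None):
    """Randomly choose n courses for the pilot audit.

    A fixed seed makes the selection repeatable for documentation.
    """
    rng = random.Random(seed)
    return rng.sample(course_ids, n)


def tally_missing(audit_results):
    """Count how often each rubric element was absent across courses.

    audit_results: dict mapping course id -> list of missing element names.
    Elements missing from many courses can flag instructor training topics.
    """
    counts = Counter()
    for missing in audit_results.values():
        counts.update(missing)
    return counts
```

Sorting the resulting counts from most to least common would surface the shared gaps the abstract describes as instructor training opportunities.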

Online Course Audit Version 2.0

On the whole, the process validated our initial concerns about online course quality issues in selected courses. It provided evidence of missing elements, such as required course policies and alignment between course assignments and learning objectives, as well as the lack of a structure that would enable students to easily navigate the course in the LMS. Thus, the results of the pilot study revealed the need to provide online instructors with greater guidance in course syllabi construction and online course site development.

The next stage – version 2 – involves expanding the audit process to include all online courses in the program, establishing a review cycle, and sharing the results with the faculty. To begin, the plan is to share the results with the program’s Curriculum Committee, using the information to illustrate the connections between online course quality and student success. Further, the process and its findings will be used to launch discussions with the faculty about course quality expectations. Ultimately, the goal is to create a process that leads to online course and program improvements, while providing students with a learning experience that prepares them to be information professionals in a dynamic, fast-paced workplace environment.


Attendees will be invited to share information such as the following:  

  • Definitions of quality

  • Assessment challenges they’ve faced evaluating online courses

  • Strategies they have implemented to address or overcome those challenges

  • Feedback about the auditing process and associated resources

Polling, chat, and voice tools will be used to encourage interaction during the presentation.


Attendees will learn about the development of an online course auditing program that is sustainable – one that could be implemented or adapted to meet the needs of their courses, programs, and institutions.