Leveraging MOOC Data to Improve Faculty-Built Courses

Concurrent Session 8

Brief Abstract

This session examines the difficulty of assessing and improving the learner experience in traditional faculty-built online courses. We will emphasize methods of collecting useful data for quality management in situations where student feedback or success data are difficult to obtain. We will present data from team-built MOOCs as an example of a useful proxy for traditional measures of student success.

Presenters

Graham holds a BS in Printing and Applied Computer Science and an MS in Human-Computer Interaction from the Rochester Institute of Technology (RIT). He currently manages a team of instructional technologists and a media development team at RIT, where he focuses on improving the university's existing course build processes, improving currently implemented technology, and integrating new technology. Before his work in academic technology and media development, he was an academic lecturer with a track record of creating engaging, educational classroom experiences. His professional expertise includes user research, functional prototype development, and front-end programming. He also has extensive experience developing document management systems for manufacturing, quality, and engineering, which has given him a unique perspective on the approach to and management of the academic experience at RIT.

Extended Abstract

This session examines the difficulty of assessing and improving the learner experience in traditional faculty-built online courses. We will emphasize methods of collecting useful data for quality management in situations where student feedback or success data are difficult to obtain. We will present data from team-built MOOCs as an example of a useful proxy for traditional measures of student success.

RIT has two distinct models for online course design and development. MOOCs are designed, built, and delivered in cooperation with faculty (the course team model), whereas traditional online courses are designed, built, and delivered solely by the faculty member responsible for teaching the course (the faculty-built model). A defining characteristic of faculty-built courses is that the technology and support teams have little visibility into learner feedback and quality assessment data; in traditional online courses, faculty hold course evaluations confidential.

For MOOC course builds, RIT has made a deliberate decision to use the same support services and many of the same processes used for more traditional course offerings at the University. This approach provides opportunities to closely examine the learner experience within a MOOC as it relates to a specific process or service, and to generalize the outcomes associated with that process or service for the benefit of the broader RIT community.

In this session, we will establish the challenges of assessing the quality of faculty-built courses. We will then present the post-mortem process that RIT uses for MOOCs designed and built by a course team of support staff in the university's Teaching and Learning Services department. Session attendees will examine real, anonymized data from course post-mortems and vote on the next steps the team should take to improve online course quality university-wide. Finally, we will share tips for using the data to change faculty perceptions of course production.

We will present MOOC design and development as one example of continual course quality assessment. However, this approach generalizes to any situation involving two sets of courses that share similar media assets and course activities but offer different levels of access to quality metrics. It may also inform the design of faculty-built courses in which quality metrics are shared with support staff.

RIT has used quality metrics to refine closed captioning procedures and services, to improve the design of course templates within the LMS, and to tune many other course implementation details. These metrics have also helped us convince faculty to change their online teaching practices, have justified increasing our media production capability, and have given us the confidence to experiment with and refine our course production methods.

We have found that our post-mortem process is valuable for improving the quality of the learner experience at RIT. The use of post-mortems and proxy data has a low barrier to entry because initial course runs do not need to be set up to test any specific technology or design. Data collected from those initial runs can guide course revisions, inform the course design process overall, and improve the learner experience.