Video analytics: Improving our measurement of engagement

Concurrent Session 4
Streamed Session


Brief Abstract

**This session received high reviewer ratings and is a runner-up for Best-in-Strand.**

Video engagement is typically measured as the average percentage of a video watched in a single viewing. This approach understates engagement by failing to account for normal online behavior, such as repeat views. This presentation demonstrates a more accurate approach to measurement and its potential to improve our understanding of student behavior.


Ano helped to launch the Master of Health Care Delivery Science (MHCDS) program at Dartmouth College. The program was the world's first degree in this new field of inquiry and Dartmouth College's first foray into low-residency, online education. As part of the MHCDS leadership team, Ano is responsible for all educational operations, managing a team of learning designers, educational technologists, and teaching assistants who support faculty and students in all aspects of course development and delivery. Ano holds a Master of Public Health from the Geisel School of Medicine at Dartmouth and lives in Central Vermont.

Extended Abstract

Educational technologies generate vast quantities of data that have long held promise for improving both our understanding of education and the delivery of content. Despite this promise, many analytic approaches suffer from key weaknesses; the measurement of student engagement with educational videos is one example.

Video engagement is typically measured as the average percentage of a video that is viewed in a single sitting. An average engagement of 55 percent, for example, means that in a single viewing episode the average viewer watched 55 percent of the video before stopping. This approach suffers from several weaknesses. One major weakness, which we will not discuss here, is that the percentage of a video viewed, on its own, does not tell us to what extent viewers are paying attention, whether they consider the material interesting and worth their time, or to what extent they are learning and retaining new information. (Those questions can be addressed using other, more complex approaches.)

The usual method for capturing average engagement suffers from at least two major flaws. First, by counting views rather than viewers, the measure fails to account for repeat views by the same viewer. Viewers commonly return to the same video multiple times, either to rewatch it in its entirety or to continue viewing the unwatched portion. Second, the measure includes views of exceedingly short duration when calculating an average. For a 10-minute video, a view of three percent amounts to only 18 seconds. A view of such short duration is more plausibly explained by factors common to internet behavior than by viewer interest or content quality: a service failure due to bandwidth constraints or poor mobile coverage, a viewer confirming the identity of the video and whether they had already viewed it, or a social interruption.

How much do average engagement figures change when you account for those two flaws? This presentation discusses an exploration of that question. Using a cloud-based video streaming service (Wistia) that captures viewers' IP addresses along with the usual viewer analytics, including the percent of the video viewed, the author compared the usual and improved measures of average engagement for 69 videos from six online courses delivered by the Master of Health Care Delivery Science program at Dartmouth College. Multiple views by the same viewer, identified by IP address, were combined (so three views of 33 percent by the same viewer counted as a single view of 99 percent), and views under three percent (an arbitrary cutoff) were discarded.
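The two adjustments above can be sketched in a few lines of code. This is an illustrative sketch only: the record format, field names, and sample values are assumptions, not the author's actual pipeline or Wistia's export format.

```python
# Sketch of the usual vs. improved engagement calculation described above.
# Record format is a hypothetical simplification: one (viewer_ip, percent_viewed)
# tuple per viewing episode. The 3% cutoff follows the arbitrary threshold
# mentioned in the text.

from collections import defaultdict

views = [
    ("10.0.0.1", 33.0), ("10.0.0.1", 33.0), ("10.0.0.1", 33.0),  # repeat viewer
    ("10.0.0.2", 80.0),
    ("10.0.0.3", 2.0),   # under the cutoff: likely a false start
]

CUTOFF = 3.0  # minimum percent for a view to count

def usual_engagement(views):
    """Average percent viewed per *view* (the usual measure)."""
    return sum(pct for _, pct in views) / len(views)

def improved_engagement(views):
    """Average percent viewed per *viewer*: drop very short views,
    then sum repeat views by the same IP before averaging."""
    kept = [(ip, pct) for ip, pct in views if pct >= CUTOFF]
    per_viewer = defaultdict(float)
    for ip, pct in kept:
        per_viewer[ip] += pct
    return sum(per_viewer.values()) / len(per_viewer)

print(usual_engagement(views))     # 36.2
print(improved_engagement(views))  # 89.5
```

Note that the per-viewer sum can exceed 100%, which is exactly how the improved measure surfaces repeat viewing that the usual measure hides.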

Measured with the usual approach, average engagement across all courses analyzed was 72.3% (range = 60% to 79%), compared to 98% (range = 96% to 99%) with the improved method. On average, the usual measure of engagement was 74% (range = 62.7% to 80.3%) of the improved measure. In some individual cases the difference was far more dramatic and yielded further information of interest to educators. For one 30-minute video, for example, average engagement jumped from 53% (usual method) to 120% (improved method), with a modal engagement of 99%. Moreover, 15% of viewers watched 200% or more of the video, with one viewer watching it four times. If students segment their viewing of long videos into smaller portions, it follows that the longer the video, the bigger the potential gap between engagement figures calculated with the usual method and those calculated with the improved approach.

Much of what is known and written about online students' engagement with video content has relied on the usual measure of engagement. Those findings have strongly suggested a need for very short video content, ideally six minutes or less. Since many faculty find it challenging to parse their material into such short segments, this goal can be a barrier to the creation of online courses. An improved measure of video engagement that takes into account well-established behaviors common among internet natives has the potential to change our understanding of student behavior and, correspondingly, to allow more appropriate standards for content creation. Using the improved measure, these findings from an online graduate degree program, together with preliminary results from other online courses (including a MOOC and an undergraduate course), suggest that students have an appetite for much longer videos than previously reported by investigators using the usual measure.

This presentation will share details on the methods, findings, and insights gained from the analysis. In addition to Q&A, a brief partner exercise and a larger discussion will explore the challenge of measuring engagement in online education, how it differs from face-to-face settings, why we care about it, and potential roadblocks to measurement, including privacy concerns and video platform selection and performance.