DIY Analytics to Empower Faculty Intervention

Concurrent Session 7
Streamed Session

Watch This Session

Brief Abstract

Mining gradebook data uncovers valuable information about work patterns of successful students and efficacy of course design. Finding these patterns just requires looking. And Excel.

Extended Abstract

Research into increasing student retention, stretching back more than 30 years to the work of Vincent Tinto, has affirmed that contact from a caring instructor is of unrivaled value for pulling a student back from the brink of failure. This is especially true in the relative isolation of online learning, where student struggles are far harder to see. And with the course loads common for today's distance educators, few have time to check in with every student every week.

Locked away in last term's gradebook for any course lies valuable information about the work patterns of successful students. Faculty, course designers, and administrators can analyze this data to identify the critical path to success, and then create simple filters to detect when a student veers off that path. In subsequent terms, faculty can spot students headed off course and intervene quickly to get them back on track. Without these filters, most faculty in online courses won't notice a student slipping away until grades have fallen well below passing. At that point, few students can still catch up, leading to drops, withdrawals, and failures. Early detectors make early intervention possible, getting students headed in the right direction from the very first week of the term.
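To make the idea concrete, here is a minimal sketch of such a filter in Python (the workshop itself uses Excel). It assumes we have week-2 cumulative scores for last term's successful students, sets a cutoff at their 25th percentile, and flags any current student below it. The data, the week-2 checkpoint, and the 25th-percentile cutoff are all invented for illustration, not the presenters' exact method.

```python
# Illustrative early-warning flag: students below the 25th percentile
# of last term's successful students at the same point in the term.

def percentile(values, pct):
    """Return the pct-th percentile (0-100) by linear interpolation."""
    s = sorted(values)
    k = (len(s) - 1) * pct / 100
    lo, hi = int(k), min(int(k) + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

# Last term: week-2 cumulative scores of students who ultimately passed.
passed_week2 = [42, 55, 38, 60, 47, 51, 44, 58]

# The "veering off the path" threshold.
threshold = percentile(passed_week2, 25)

# This term's students, checked against the threshold.
this_term = {"A": 35, "B": 52, "C": 41}
at_risk = [s for s, score in this_term.items() if score < threshold]
```

In Excel, the same check is a PERCENTILE over last term's column and a comparison formula (or conditional-format rule) over this term's column.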

In this workshop we will analyze a term of actual gradebook data. We will start with a raw LMS gradebook export and work through a few of the most useful Excel features, including pivot tables, standard deviations, and correlations. We will set up flags or filters on various criteria, using PERCENTILE and COUNTIF functions, and then assess various thresholds using confusion matrices and ROC curves. We will then assess the value of each filter for precision, recall, and a measure we call "workload" to ensure realistic intervention recommendations. All predictive indicators will be evaluated on week-by-week measures using AUC (area under the curve) to determine the best combinations. This approach, which we call "stacked filters," establishes a decision tree for finding all students on the path to failure, and responding with appropriate interventions.
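The evaluation step above can be sketched in a few lines. This hedged example scores one filter against last term's known outcomes with a confusion matrix, then computes precision, recall, and the "workload" measure (the share of the class a filter asks the instructor to contact). The student sets and class size are made up for illustration.

```python
# Score a filter against known outcomes: confusion matrix,
# precision, recall, and "workload" (fraction of class flagged).

def evaluate(flagged, failed, n_students):
    tp = len(flagged & failed)   # flagged and did fail
    fp = len(flagged - failed)   # flagged but passed
    fn = len(failed - flagged)   # failures the filter missed
    tn = n_students - tp - fp - fn
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    workload = len(flagged) / n_students
    return {"tp": tp, "fp": fp, "fn": fn, "tn": tn,
            "precision": precision, "recall": recall,
            "workload": workload}

flagged = {"A", "C", "D"}   # students the filter caught
failed = {"A", "B", "C"}    # students who actually failed
stats = evaluate(flagged, failed, n_students=20)
```

Sweeping a threshold and plotting recall (true positive rate) against the false positive rate at each step traces the ROC curve; the area under it is the AUC used to compare indicators week by week.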

Our results will then be tested against actual student data from the following term. And, by looking at data from other schools and terms, we will explore why different courses require different filters, thus emphasizing the need to examine your own data.
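One way to picture applying stacked filters to a held-out term is a simple decision chain: run each filter in order and record the first one that fires, which both flags the student and suggests which intervention to send. The filter names, cutoffs, and student records below are invented for illustration only.

```python
# Hypothetical "stacked filters" applied to next term's data:
# the first filter that fires determines the intervention.

filters = [
    ("never_logged_in", lambda s: s["logins_week1"] == 0),
    ("missed_first_assignment", lambda s: s["assignment1"] is None),
    ("low_quiz_average", lambda s: s["quiz_avg"] < 60),
]

def first_flag(student):
    for name, test in filters:
        if test(student):
            return name
    return None  # student appears on track

next_term = {
    "P": {"logins_week1": 0, "assignment1": None, "quiz_avg": 0},
    "Q": {"logins_week1": 5, "assignment1": 88, "quiz_avg": 72},
    "R": {"logins_week1": 3, "assignment1": None, "quiz_avg": 65},
}
flags = {sid: first_flag(s) for sid, s in next_term.items()}
```

Comparing these flags against the held-out term's actual outcomes is the out-of-sample test described above, and it is also where course-to-course differences show up: a filter that works in one course may fire constantly, or never, in another.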

This is not a recipe for building your own institutional data-warehouse or auto-intervention system. Instead, we are focused on giving any interested faculty member the ability to explore a course's gradebook with a handful of easily accessible tools. We will offer a brief summary of research from learning analytics and educational data mining to suggest the wide variety of success indicators attendees might want to assess in their own data. These techniques enable anyone with Excel and last term's gradebook to find students headed for failure, and to do so early enough in the term to give them every opportunity to get back on track.

This workshop should be quite accessible to the math-averse and to those new to analytics. All the statistical concepts are basic and will be explained from the presenter's perspective as a liberal arts major.

Attendees will leave with:
- an understanding of what to look for in gradebook data and how to look for it
- a working understanding of Excel functions such as CORREL, STDEV, PERCENTILE, and COUNTIF
- a practical understanding of a few statistical concepts such as confusion matrices, ROC curves, precision, and recall
- the ability to build filters, combine them into models, and evaluate the results