Using AI in Discussion to Scale Access to Quality Online Education
Concurrent Session 4
As online enrollments increase, many institutions are actively thinking through challenges in scaling quality. Online discussions are often the choke-point for scale. SUNY is piloting the use of an AI-powered discussion tool in online courses and will share how the system can support scaling discussion while enhancing student/faculty engagement online.
According to the Babson Survey Research Group, distance education enrollments have increased significantly every year for the last fourteen years. This growth in online courses has held steady even as overall enrollment has decreased. However, while online courses are growing and giving students more accessible pathways to a degree, students enrolled in online programs are not graduating at the same rate as on-campus students.
Instead of steering away from online programs, institutions are embracing the demand for online learning and working to improve the quality of online education. One state system that has recently taken the initiative to provide students with access to quality online education is the State University of New York (SUNY). SUNY has a large student body of working professionals, many of whom struggle to attend on-campus classes. With a goal of making courses more accessible, SUNY launched SUNY Online, an effort to enroll 100,000 students in online-only degree programs.
This year (2019), the Executive Director of SUNY Online, Kim Scalzo, took a step toward improving the quality of SUNY’s online program by partnering with an education technology company to conduct research on the effectiveness of AI in scaling the quality of online programs. In this presentation, Scalzo will present preliminary findings from the study.
The study was conducted with 25 instructors and 15,000 students across 2-year and 4-year schools at six SUNY campuses. Courses were A/B tested: one section used the AI-supported discussion tools, while the control groups were a mix of traditional online discussion through a learning management system and no discussion. The study assessed four components: the time faculty spent facilitating online discussion, the quality of student discussion, students’ perception of online discussion, and faculty confidence in scaling discussion.
The first component, the time faculty spent facilitating discussion, differentiates between time spent "managing" online discussion and time spent "engaging" with students. Today, faculty spend the majority of their time on "management" of discussion. The treatment group automated the management of online discussion via automated moderation, sorting algorithms, and automated grading, freeing the faculty member to spend their time engaging with students. This effect was further amplified by AI maximizing the visibility of instructor feedback to students.
When looking at student outcomes in the quality of discussion, the treatment group used AI to give students feedback on the quality of their posts, using Bloom's Taxonomy as a rubric. At scale, this amount of feedback is traditionally challenging for instructors to provide on their own.
Researchers also felt it was important to measure students’ satisfaction and perception of online discussion, as many students today have mixed feelings about the value of online discussion based on their past experiences. The study assessed student sentiment toward online discussion, an important proxy for how seriously students approach the exercise.
The final, and arguably most important, component assessed in the study was faculty confidence in scaling discussion. As enrollment in online programs continues to grow, it’s important that faculty are able to successfully grow the elements of their courses, such as discussion. The study looked at whether, as a result of using AI, faculty members felt confident in their capacity to teach increasing numbers of students over time.
Audience members in this session will:
- Learn about Self Determination Theory (SDT), the principle at the root of student motivation and a driver of meaningful discussion.
- Work through a series of interactive examples of faculty-student discussion interactions and score them against SDT. The goal of this exercise is to help audience members learn the design principles for making a 1,000-student online course as engaging as a 10-student online course.
Attendees will also leave the session with an understanding of how an AI tool can improve the quality of online education programs, how to create an engaging discussion experience with support from automated moderation, sorting, and scoring algorithms, and how to effectively scale quality online programs.