Adaptive Learning in Online Nursing Education
Concurrent Session 1
Adaptive learning platforms personalize content delivery to students based on prior knowledge, adjusting to individual preferences and differences in knowledge acquisition. The purpose of this project was to compare a course developed in an adaptive learning platform with one delivered in a traditional learning management system (LMS) only.
The goal of this project was to pilot a graduate pathophysiology course designed to deliver content through an adaptive learning platform. The pilot aimed to assess student perceptions of adaptive learning and to compare exam, assignment, and overall course grades between the pilot course and the current method of content delivery. Of additional interest was measuring engagement in the pilot course using metrics generated by Realizeit and the current learning management system (LMS).
Online student enrollment has grown rapidly over the last few years at this institution. Faculty continually seek better ways to engage students in their courses; however, it can be difficult to get students to engage actively with content in online courses when it is delivered in a static format. It can also be challenging to meet the needs of very diverse learners, who vary not only in learning styles and preferences but also in the prior knowledge and experiences they bring to each class.
Faculty have long delivered course content in ways designed to meet student needs; personalization of content and its delivery is not a new idea (Cronbach, 1957). In face-to-face classes, instructors often receive visual or verbal cues indicating students' level of understanding, and good instructors adapt their content delivery based on these cues. Even in face-to-face classes, however, it can be difficult to meet the needs of every individual student rather than the median student. These challenges are amplified in online courses, where there is no body language to read and no "real time" verbal cues to help instructors gauge student understanding. Advances in technology enable increasingly sophisticated personalization of content delivery that can be leveraged to overcome these challenges. Adaptive learning platforms have the potential to personalize content delivery based on individual differences (Flores, Ari, Inan, & Arslan-Ari, 2012), and students have responded positively to adaptive learning platforms, with potential positive effects on learning outcomes (Dziuban, Moskal, Cassisi, & Fawcett, 2016).
The adaptive learning platform chosen for this project (Realizeit©) uses probabilistic reasoning via Bayesian estimation to give each student a personalized pathway through the instructional content. The algorithms first estimate a student's starting point in a curriculum, course, or lesson; as the student progresses, data on behavior and ability are continuously assessed and fed back into the algorithms. Continuously updated measures, such as an estimate of the knowledge acquired and of the content left to be learned, are available to both the student and the instructor, and the recommended path through the content is adjusted based on the individual student's progress (Howlin & Lynch, 2014).
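Realizeit's algorithms are proprietary, but the general idea of Bayesian updating of a knowledge estimate can be illustrated with a standard Bayesian Knowledge Tracing update. The sketch below is a generic illustration, not Realizeit's actual implementation, and all parameter values are assumptions chosen for demonstration:

```python
# Illustrative sketch of Bayesian knowledge estimation (standard
# Bayesian Knowledge Tracing), NOT Realizeit's proprietary algorithm.
# All parameter values below are assumptions for illustration only.

def update_knowledge(p_know: float, correct: bool,
                     p_slip: float = 0.1,   # P(wrong answer despite knowing)
                     p_guess: float = 0.2,  # P(right answer despite not knowing)
                     p_learn: float = 0.1   # P(learning the skill on this step)
                     ) -> float:
    """Return the posterior probability that the student knows the skill,
    given one observed response, then apply a learning transition."""
    if correct:
        evidence = p_know * (1 - p_slip) + (1 - p_know) * p_guess
        posterior = p_know * (1 - p_slip) / evidence
    else:
        evidence = p_know * p_slip + (1 - p_know) * (1 - p_guess)
        posterior = p_know * p_slip / evidence
    # Account for the chance the student learned the skill on this attempt.
    return posterior + (1 - posterior) * p_learn

# Starting from an uninformative prior, correct answers raise the
# estimate and incorrect answers lower it.
estimate = 0.5
for response in [True, True, False, True]:
    estimate = update_knowledge(estimate, response)
```

A platform could maintain one such estimate per skill, surface it to the student and instructor, and route the student to remediation whenever the estimate falls below a mastery threshold.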
Nursing Pathophysiology for Nurse Educators is a foundational course in the Nurse Educator curriculum that has historically been difficult for students. Personalized or adaptive learning platforms may help instructors meet the diverse needs of students in such courses. Adaptive learning platforms use algorithms to adjust and deliver content based on students' prior knowledge and on continuous assessment as they progress through the content.
The pilot of an adaptive course in graduate nurse educator pathophysiology was an experimental study with random assignment. Random assignment occurred as a function of the usual procedure by which student advisors enroll students in sections of any course in this program. The graduate nursing educator pathophysiology course was offered in two sections in the Fall I and Fall II semesters and in three sections in the Spring I and Spring II semesters. Part 1 of this course is offered in Fall and Spring I, and Part 2 is offered in Fall and Spring II. Per usual procedure, students were assigned to the same section in Part 2 of this course as in Part 1. A total of 14 students were enrolled in the control group in Spring I, with 15 students total enrolled in Fall II (one student was out of sequence in courses and had taken Part 1 in an earlier term), and a total of 10 students were enrolled in the control group in both Fall I and Fall II. A total of 13 students were enrolled in the intervention group, and a total of 25 students were enrolled in the control group across two sections in Spring I and II. While this is a small sample size, for the purposes of a pilot test, 10 to 20 students per section should be sufficient (Isaac & Michael, 1995).
Course content was converted into an adaptive learning platform in one section and kept in its current format in a second section of graduate nurse educator pathophysiology. Exams and quizzes were identical in both sections. Fourteen unproctored quizzes were administered in both control sections, and fourteen personalized adaptive assignments, drawn from the identical question pools as the corresponding quizzes, were administered to the intervention group. Four identical exams were administered in all sections. Four case studies were assigned in all sections; the cases were identical in content but not in delivery. Case content in the intervention group was delivered in the adaptive learning platform with variably generated content, while two to three static versions of each case were assigned by the instructors to students in the control group.
During Fall 2018, the university was closed due to a natural disaster, and students had no access to course content during this time. Exams were delivered unproctored, and significant schedule adjustments were made for both the control and intervention groups. Survey data regarding perceptions were collected from the intervention and control groups at the end of the semester, with additional survey questions added regarding the disruption and the potential impact of adaptive learning during this time. Data regarding grades and other interaction metrics were not collected because of changes made to the course after the disruption.
The adaptive course and control course were both run again in Spring 2019. Survey data regarding perceptions were collected from all sections, along with data on grades and other interaction metrics.
Data from Fall 2018 and Spring 2019 show overall satisfaction with the course delivered in the adaptive learning platform and interest in taking another course in an adaptive learning platform. Qualitative data analysis suggests that the adaptive delivery of content may have been beneficial in the Fall 2018 term that was significantly disrupted by the natural disaster.
Comparison of overall course grades and individual exam grades in Spring 2019 between the adaptive learning course and control sections showed no significant differences between groups. It is difficult to determine whether there was truly no difference between groups or whether the number of participants was too low to detect one; per a power analysis, a minimum of 52 participants per group would be needed for a large effect size. Analysis is ongoing regarding potential instructor differences between the courses that may have impacted study findings, as well as regarding interaction metrics.
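For readers unfamiliar with power analysis, the per-group sample size for a two-sample comparison can be approximated with the standard normal-approximation formula n = 2((z₁₋α/2 + z₁₋β)/d)². The sketch below uses illustrative effect-size, alpha, and power values, which are assumptions and not necessarily the parameters used in this study's power analysis:

```python
# Normal-approximation sample-size formula for a two-sample comparison.
# The effect size, alpha, and power below are illustrative assumptions,
# not necessarily the parameters used in the study's power analysis.
import math
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05,
                power: float = 0.80) -> int:
    """Approximate participants needed per group to detect a
    standardized mean difference (Cohen's d) at the given alpha/power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Required n grows quickly as the assumed effect shrinks, e.g. from
# Cohen's d = 0.8 (large) to d = 0.5 (medium).
n_large = n_per_group(0.8)
n_medium = n_per_group(0.5)
```

This growth in required n as the assumed effect shrinks is why small pilot sections often cannot demonstrate statistical significance even when a real difference exists.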
This pilot demonstrated a potentially effective way to help students learn. Students perceived greater engagement with content and more effective learning, and overall expressed interest in taking a course in an adaptive platform again. Student comments suggest that the adaptive nature of the course may have helped them adjust to unexpected challenges during the semester. While differences in performance were not statistically significant, more research is needed to examine this. Data analysis regarding student engagement via interaction metrics is ongoing.