An empirical review of MOOC assessment design and development: are you confident that students earned certification?
Concurrent Session 4
Designing effective Massive Open Online Course (MOOC) assessments is a challenge due to class sizes, diverse learner needs, and platform constraints. We will discuss the methods we adopted when designing three MOOCs, their strengths and weaknesses, and suggestions for improvement based on research and empirical data.
A brief introduction to MOOCs
The Massive Open Online Course (MOOC) platform has become a critical topic in online education. Its rapid rise to prominence is due to the exceptional number of students an online environment can accommodate, unconstrained by geographic boundaries. Elite institutions such as Harvard, Stanford, and MIT have opened their academic courses through platforms such as edX, Coursera, and Udacity. Many observers in academic publications believe MOOCs have the potential to revolutionize higher education (Friedman, 2012). With more than 1,700 active courses, Coursera is the largest MOOC provider; edX is second with 1,300 courses, followed by FutureLearn, a UK-based provider, with 480 courses. Even though some critics dismiss MOOCs as typical ed-tech hype (Cuban, 2012), MOOCs are still progressing at a rapid pace (Shah, 2016).
The University of Houston has been a Coursera partner since 2014. Working with faculty from diverse disciplines, our instructional design team has launched three MOOCs, with another under development and scheduled to launch in fall 2017. In this presentation, members of the instructional design team will discuss their design decisions and experiences throughout the assessment design stages: the issues they faced, how they addressed them, and what they learned in the process.
How MOOC assessments differ from examinations in regular college courses
Drawing on his personal experience and research, Cuban argues that MOOCs will remain teacher-centered for two reasons. First, delivering professor-centered instruction in a MOOC costs less than face-to-face instruction, where procedural knowledge and skills are the expectation. Second, faculty in tenure-track positions at universities tend to focus on conducting research rather than teaching classes. Professors are therefore less likely to invest in communities of learning, develop student-centered courses, or even build hybrid MOOCs, leaving the MOOC a heavily teacher-centered model.
Stephen Downes, who along with George Siemens initiated the first MOOC in Canada, has noted that MOOC assessment methods are not as well developed as those of traditional college courses, since MOOCs grant certificates rather than college credit. This is one explanation for the significantly higher dropout rate in MOOCs compared to traditional online courses. Two techniques are commonly used to handle assessment in MOOCs. The first is automated essay assessment: the system is seeded with a large number of already-marked essays, extracts the properties of high-quality essays, and then matches new essays against those properties. The second is peer assessment: essays are graded not by professors but by other course participants, which requires that every essay be graded by a large number of other participants.
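To make the peer-assessment idea concrete, a simple round-robin assignment scheme can guarantee that every essay receives the same number of reviews and that every participant does the same amount of grading. This is only an illustrative sketch, not Coursera's actual algorithm; the participant names and the function are hypothetical:

```python
def assign_peer_reviews(participants, reviews_per_essay):
    """Round-robin peer-review assignment: each participant's essay is
    reviewed by the next k participants in circular order, so every
    essay receives exactly k reviews, every participant grades exactly
    k essays, and no one reviews their own work."""
    n = len(participants)
    assignments = {}  # author -> list of reviewers
    for i, author in enumerate(participants):
        assignments[author] = [
            participants[(i + offset) % n]
            for offset in range(1, reviews_per_essay + 1)
        ]
    return assignments

# Hypothetical five-person class, three reviews per essay.
reviewers = assign_peer_reviews(["ann", "bob", "cai", "dee", "eli"], 3)
```

Real platforms add refinements on top of this (handling late submissions, calibrating grader reliability), but the balanced-workload property is the core requirement the passage describes.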
Methods we used to design MOOC assessments
The instructional design team in the Office of Educational Technology has designed three MOOCs since 2014: Bilingual Brain, Math Behind Moneyball, and American Deaf Culture. The first two courses have been offered for multiple sessions, and the third launched in late April 2017. The team designed similar assessments for all three courses: objective quizzes and exams. However, different approaches were applied to suit each course's nature and focus.
The design for Bilingual Brain was based on the instructor's book of the same name. The author, a psychology professor, was also the primary instructor. The course content focuses on the bilingualism literature and on explaining the cognitive processes involved in learning multiple languages. When we designed the course, we anticipated that a significant portion of the students would have bilingual learning experience, such as studying overseas, or would face the challenges of a bilingual environment, such as immigrant parents raising children in a country where they were not native speakers. We expected learners would not only be eager for relevant knowledge but would also seek assistance and solutions they could apply directly. Assessment consisted of regular multiple-choice questions. In addition, we gave the discussion forum a special design: at the end of each week, the instructor reviewed the discussion threads with the largest number of replies and recorded his feedback on those hot topics. Although discussions were not graded, this let us effectively evaluate learning results, provide assistance, and give students more chances to apply their newly gained knowledge.
Math Behind Moneyball aimed to teach students how to use math and statistics to help baseball, football, and basketball teams improve player selection strategy, predict winning rates, and so on. It was a practical course in which students learned to analyze real-life data in Excel. To achieve this learning goal, all test questions took the form of statistical analysis problems. For each question, students first needed to locate the relevant data in public resources, such as NCAA historical records, then apply the correct formula, and finally calculate the result in Excel. When designing multiple-choice questions, we did not fill the distractors with made-up numbers. Instead, we applied various formulas to the same data set and mixed those results with the correct answer. This design emphasizes the accuracy of the answer key: because published data sometimes changed, we kept re-examining quiz questions and updating the correct answers as needed. The course used discussions as a primary help forum where students assisted each other in troubleshooting Excel.
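The distractor technique described above (wrong-but-plausible formulas applied to the same data) can be sketched as follows. This is a minimal illustration, not the team's actual tooling; the stat line, formulas, and function names are hypothetical:

```python
import random

# Hypothetical stat line for one player (illustrative numbers only).
stats = {"hits": 172, "at_bats": 602, "walks": 58, "plate_appearances": 671}

def batting_average(s):
    # Correct formula: hits / at-bats.
    return round(s["hits"] / s["at_bats"], 3)

def distractor_values(s):
    # Plausible wrong formulas applied to the SAME data, so every
    # choice looks like a number a student could actually compute.
    return [
        round(s["hits"] / s["plate_appearances"], 3),                 # used PA instead of AB
        round((s["hits"] + s["walks"]) / s["at_bats"], 3),            # counted walks as hits
        round((s["hits"] + s["walks"]) / s["plate_appearances"], 3),  # on-base pct. instead
    ]

def build_choices(s):
    correct = batting_average(s)
    choices = [correct] + [d for d in distractor_values(s) if d != correct]
    random.shuffle(choices)
    return correct, choices

correct, choices = build_choices(stats)
```

Because each distractor is the output of a real (but wrong) calculation on the published data, a student who picks it has made an identifiable mistake, which also pairs well with the per-option feedback suggested later in this proposal.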
American Deaf Culture is an introductory (101-level) course taught in American Sign Language. The instructor is deaf, and all lecture videos were recorded in American Sign Language; for students from the hearing community, an interpreter's voice translation was synced with the lecture videos. We used quizzes to evaluate student success in the class; however, the instructor put more emphasis on student participation, enriching the course materials with many of his personal experiences. For this purpose, we originally designed questions to be inserted into the video lectures; unfortunately, Coursera's in-video questions cannot be graded. The discussion forum is now used instead to encourage high course participation.
Considering the learners' levels (all three courses were open to beginners), we decided to implement quizzes and tests as the primary assessment tool. After an initial review of completion rates, student feedback posted in the discussion forums, and instructor-student communications, we found this format achieved the primary learning goals. For example, some students who completed Bilingual Brain received approval from their college advisors to waive a prerequisite course.
The quiz format worked best for Math Behind Moneyball. Correct answers required students to systematically apply the concepts and skills acquired in the course: they needed not only to comprehend the concepts but also to analyze and synthesize the data.
We also found that, although students were more self-sufficient in MOOC learning, interaction with the instructor was highly appreciated in the MOOC environment, and open communication greatly encouraged student participation. The weekly feedback video in Bilingual Brain received very positive responses.
For a humanities course such as American Deaf Culture, we constantly debated whether automatic grading could fully examine students' understanding as they progressed through the class. Each exam allowed unlimited attempts, which raised the concern that students could easily earn full points through a "multiple submission strategy": trying a different answer on each submission until hitting the correct one. We will have a deeper understanding after the first session is completed.
Discussion: suggestions for the future
A fourth MOOC, Spanish in US, is planned for development in summer 2017. Based on our past experience, we have developed some solutions to suggest to instructors as we create new MOOC assessments.
First, instructors should consider developing more detailed feedback for each answer option, especially the incorrect choices. Students benefit more from specific, focused feedback.
Second, instructors should consider creating a pool of questions and question sets to allow randomization in exams and quizzes.
Third, instructors should consider limiting the number of attempts students have on exams and quizzes. An alternative is to limit attempts within a certain timeframe (e.g., one attempt every 8 hours).
Finally, instructors should consider adding more assessment formats, such as a Peer Review Assignment, to improve students' understanding and application.
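The second and third suggestions can be sketched together as platform-side logic. Coursera exposes these as built-in course settings, so the sketch below is purely a hypothetical illustration of the behavior, with made-up class and question names:

```python
import random
import time

class QuizSession:
    """Hypothetical sketch: draw a random subset of questions from a
    pool and throttle attempts to one per cooldown window."""

    def __init__(self, pool, draw_size, cooldown_seconds=8 * 3600):
        self.pool = list(pool)
        self.draw_size = draw_size
        self.cooldown = cooldown_seconds
        self.last_attempt = {}  # student_id -> timestamp of last attempt

    def start_attempt(self, student_id, now=None):
        now = time.time() if now is None else now
        last = self.last_attempt.get(student_id)
        if last is not None and now - last < self.cooldown:
            wait = self.cooldown - (now - last)
            raise PermissionError(f"Next attempt allowed in {wait:.0f} seconds")
        self.last_attempt[student_id] = now
        # Each attempt sees a different random subset of the pool,
        # which blunts the "multiple submission strategy" noted above.
        return random.sample(self.pool, self.draw_size)

# A 20-question pool with placeholder IDs; each attempt draws 5.
pool = [f"Q{i}" for i in range(1, 21)]
quiz = QuizSession(pool, draw_size=5)
```

The design choice worth noting is that randomization and throttling reinforce each other: a cooldown alone only slows brute-force resubmission, while a fresh random draw per attempt makes memorizing answer positions useless.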
Cuban, L. (2012). MOOCs and pedagogy: Teacher-centered, student-centered, and hybrids (Part 1). Retrieved April 30 from https://larrycuban.wordpress.com/2012/12/05/moocs-and-pedagogy-teacher-centered-student-centered-and-hybrids/
Cuban, L. (2012). MOOCs and hype again. Retrieved May 1 from https://larrycuban.wordpress.com/2012/11/21/moocs-and-hype-again/
Downes, S. (2013). Assessments in MOOCs. Retrieved May 1 from http://halfanhour.blogspot.com/search?q=assessment+in+mooc
Friedman, T. (2012). Come the revolution. Retrieved April 30 from http://www.nytimes.com/2012/05/16/opinion/friedman-come-the-revolution.html
Shah, D. (2016). Monetization over massiveness: A review of MOOC stats and trends in 2016. Retrieved May 1 from https://www.class-central.com/report/mooc-stats-2016/