Now that’s a Good Question: Evaluating Our Online Exams

Concurrent Session 2

Brief Abstract

How well do our online exam questions assess our students’ capabilities? Investigate ways to evaluate the effectiveness of test questions. Explore alternatives to traditional multiple choice formats that maintain the advantage of automated grading. 

Presenters

Paul has taught computer science, primarily computer programming, at the community college level since 2002. Before that, he worked as a software developer and manager for 20 years at a variety of hardware and software companies, including Harris, Rational, and Hewlett-Packard. At Tarrant County College he served as department chair for three years. He has become the Northwest Campus Blackboard guru and technology mentor. He has chaired TCC's eLearning Faculty Advisory Council. He is a frequent conference and professional development speaker on online learning and learning management systems. Paul was a member of Blackboard's Product Development Partnership and an exemplary course reviewer. For the third year, he is serving as a reviewer for OLC Accelerate.

Extended Abstract

Today’s students have instant access to Google and to online copies of publishers’ test banks. This makes it challenging for online instructors to write exam questions that accurately assess a student’s abilities. How do you know whether a question is really measuring what it is intended to measure? And if it isn’t, how do you write questions that do? This presentation will identify common traps in online test questions and examine the problems with standard publisher-provided multiple choice tests.

Test item analysis is a tool instructors can use to evaluate the effectiveness of individual test questions. The item analysis tools in most learning management systems report difficulty and discrimination statistics, and we will see how these can help you identify problematic questions. Instructors who study the item analysis of their tests often find that many of their multiple choice questions have a low or even negative discrimination index. A negative discrimination index means that students who received low scores on the test as a whole did better on that question than students who received high scores overall.
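To make the two statistics concrete, here is a minimal sketch of how an item analysis tool might compute them, assuming a simple gradebook-style matrix of 0/1 item scores (rows are students, columns are questions). The function name and the 27% upper/lower split are illustrative conventions from classical test theory, not any particular LMS's implementation.

```python
def item_statistics(scores):
    """scores: one list of 0/1 item scores per student."""
    n_students = len(scores)
    n_items = len(scores[0])
    totals = [sum(row) for row in scores]

    # Rank students by total score to form the upper and lower groups
    # (a 27% split is a common convention in classical test theory).
    order = sorted(range(n_students), key=lambda i: totals[i], reverse=True)
    group_size = max(1, round(0.27 * n_students))
    upper, lower = order[:group_size], order[-group_size:]

    stats = []
    for q in range(n_items):
        # Difficulty: proportion of all students who answered correctly.
        difficulty = sum(row[q] for row in scores) / n_students
        # Discrimination: upper-group success rate minus lower-group rate.
        # Negative values flag questions that low scorers get right more
        # often than high scorers do.
        p_upper = sum(scores[i][q] for i in upper) / group_size
        p_lower = sum(scores[i][q] for i in lower) / group_size
        stats.append({"difficulty": difficulty,
                      "discrimination": p_upper - p_lower})
    return stats
```

Run over an exported score matrix, a sketch like this surfaces the same signal the built-in tools do: any question with a discrimination value near zero or below is a candidate for revision or removal.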

Multiple choice questions suffer from a high probability that students will guess the correct answer, a probability often increased by poor distractors. Even teachers who understand how ineffective most multiple choice questions are continue to use them, because such questions are often provided with textbooks and are easy to grade. Most LMS and testing systems offer alternative question types that are easy to create and retain the advantage of automatic grading, including fill in the blank, numeric computation, matching, ordering, and multiple answer.
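As a back-of-the-envelope illustration of the guessing problem (an assumption-laden sketch, not material from the session), the binomial distribution shows how quickly weak distractors inflate the odds of passing by blind guessing:

```python
from math import ceil, comb

def p_pass_by_guessing(n_questions, n_options, pass_fraction=0.6):
    """Probability of reaching the pass mark by pure guessing,
    assuming every question has n_options equally plausible choices."""
    p = 1.0 / n_options                       # chance per question
    need = ceil(pass_fraction * n_questions)  # correct answers required
    return sum(comb(n_questions, k) * p**k * (1 - p)**(n_questions - k)
               for k in range(need, n_questions + 1))

# With four plausible options the odds are slim, but weak distractors
# effectively turn a four-option item into a coin flip:
print(p_pass_by_guessing(10, 4))  # ~0.02 for a 10-question quiz
print(p_pass_by_guessing(10, 2))  # ~0.38 when only two options are plausible
```

The point is not the exact numbers but the trend: every distractor students can eliminate at a glance moves a multiple choice test closer to a coin-flip exercise.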

We will weigh the pros and cons of these alternative question formats and show how to transform standard multiple choice questions into more effective alternatives.