A Straight Line to Student Perception of Instruction: Challenges to Capturing the Student Voice

Concurrent Session 1

Brief Abstract

This session presents UCF research examining 1,527,119 student perception of instruction responses from the years 2017-2021. We found that 66% of students “straight-lined” the form, raising questions about the validity of these data for course evaluation purposes and prompting our institution to re-evaluate this process.

Presenters

Patsy Moskal is the Director of the Digital Learning Impact Evaluation in the Research Initiative for Teaching Effectiveness at the University of Central Florida (UCF), where she evaluates the impact of technology-enhanced learning and serves as the liaison for faculty scholarship of teaching and learning. In 2011, Dr. Moskal was named an OLC Fellow in recognition of her groundbreaking work in the assessment of the impact and efficacy of online and blended learning. She has written and co-authored numerous works on blended and online learning and is a frequent presenter on these topics. Her co-authored book, Conducting Research in Online and Blended Learning: New Pedagogical Frontiers (with Dziuban, Picciano, and Graham), was published in August 2015. She currently serves on the OLC Board of Directors.
Charles Dziuban is Director of the Research Initiative for Teaching Effectiveness at the University of Central Florida (UCF), where he has been a faculty member since 1970, teaching research design and statistics, and is the founding director of the university's Faculty Center for Teaching and Learning. He received his Ph.D. from the University of Wisconsin. Since 1996, he has directed the impact evaluation of UCF's distributed learning initiative, examining student and faculty outcomes and gauging the impact of online, blended, and lecture capture courses on the university. Chuck has published in numerous journals, including Multivariate Behavioral Research, The Psychological Bulletin, Educational and Psychological Measurement, the American Education Research Journal, the Phi Delta Kappan, the Internet in Higher Education, the Journal of Asynchronous Learning Networks, and the Sloan-C View. His methods for determining psychometric adequacy have been featured in both the SPSS and SAS packages. He has received funding from several government and industrial agencies, including the Ford Foundation, Centers for Disease Control, National Science Foundation, and the Alfred P. Sloan Foundation. In 2000, Chuck was named UCF's first-ever Pegasus Professor for extraordinary research, teaching, and service, and in 2005 he received the honor of Professor Emeritus. Also in 2005, he received the Sloan Consortium award for Most Outstanding Achievement in Online Learning by an Individual. In 2007, he was appointed to the National Information and Communication Technology (ICT) Literacy Policy Council. In 2010, Chuck was named an inaugural Sloan-C Fellow. In 2012, the University of Central Florida initiated the Chuck D. Dziuban Award for Excellence in Online Teaching for UCF faculty members in honor of Chuck's impact on the field of online teaching and learning. In 2017, Chuck received UCF's inaugural Collective Excellence award for his work strengthening the university's impact through the Tangelo Park Program and assumed the position of University Representative to the Rosen Foundation Tangelo Park and Parramore programs.

Extended Abstract

Few traditions in higher education evoke more controversy, ambivalence, and criticism, and at the same time support, than student perception of instruction (SPI). Ostensibly, results from these end-of-course survey instruments serve two main functions: they provide instructors with formative input for improving their teaching, and they form the basis for summative profiles of professors' effectiveness through the eyes of their students. In the academy, instructor evaluations can also play out in the high-stakes environments of tenure, promotion, and merit salary increases, making this information particularly important to the professional lives of faculty members. At the research level, the sheer volume of the literature on student ratings impresses even the most casual observer, with thousands of studies cited in sources such as Google Scholar ranging across educational, psychological, psychometric, and discipline-related journals. The topic is important not only because of its high-stakes implications, but because it addresses the growing importance of the student voice in the educational process.

Investigators at the University of Central Florida (UCF), working with the office of Academic Affairs, Faculty Senate, Student Government, Faculty Center for Teaching and Learning, and other campus organizations, have been studying the protocols and processes by which students evaluate their educational experiences for over thirty years. The process is complicated and nuanced, and the contemporary research canon cites several unacceptable sources of bias (gender, race, discipline, modality, course level, and many others), leading to overwhelming criticism. This session will report on the most recent work conducted by the Research Initiative for Teaching Effectiveness at UCF over a five-year period. The current end-of-course evaluation protocol used at the university is:


UCF Student Perception of Instruction Form

Please rate the instructor's effectiveness in the following areas:

1. Organizing the course:

   a) Excellent   b) Very Good   c) Good   d) Fair   e) Poor

2. Explaining course requirements, grading criteria, and expectations:

   a) Excellent   b) Very Good   c) Good   d) Fair   e) Poor

3. Communicating ideas and/or information:

   a) Excellent   b) Very Good   c) Good   d) Fair   e) Poor

4. Showing respect and concern for students:

   a) Excellent   b) Very Good   c) Good   d) Fair   e) Poor

5. Stimulating interest in the course:

   a) Excellent   b) Very Good   c) Good   d) Fair   e) Poor

6. Creating an environment that helps students learn:

   a) Excellent   b) Very Good   c) Good   d) Fair   e) Poor

7. Giving useful feedback on course performance:

   a) Excellent   b) Very Good   c) Good   d) Fair   e) Poor

8. Helping students achieve course objectives:

   a) Excellent   b) Very Good   c) Good   d) Fair   e) Poor

9. Overall, the effectiveness of the instructor in this course was:

   a) Excellent   b) Very Good   c) Good   d) Fair   e) Poor

10. What did you like best about the course and/or how the instructor taught it?

11. What suggestions do you have for improving the course and/or how the instructor taught it?


Originally, this study sought to develop, using classification and regression trees, a set of robust decision rules for which elements predict whether an instructor receives an overall excellent rating. Versions of this research have been successfully completed several times at UCF over the past decades, proving useful to faculty, administrators, instructional designers, and curriculum specialists, and the results have been published to positive response. A secondary objective was to investigate the impact of the COVID crisis, with its generation of so many, often ill-conceived, course modalities, on the student evaluation process. However, just as complexity theory has taught us about unanticipated side effects, there were surprises in store.
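As a concrete illustration of that original design, the following is a minimal sketch, in Python with scikit-learn, of how decision rules predicting an overall "Excellent" rating might be extracted from the nine rated items. The column names (item1..item9), the input file, and the tree settings are our illustrative assumptions, not the study's actual code.

```python
# A minimal sketch, assuming responses sit in a CSV with columns
# item1..item9 coded 1 (Poor) through 5 (Excellent). All names,
# the file, and the tree parameters are hypothetical.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

spi = pd.read_csv("spi_responses.csv")           # hypothetical data file

predictors = [f"item{i}" for i in range(1, 9)]   # items 1-8
overall_excellent = spi["item9"] == 5            # item 9: overall rating

# A shallow tree keeps the resulting decision rules readable
# for faculty, administrators, and instructional designers.
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=500)
tree.fit(spi[predictors], overall_excellent)

# Print the learned rules as nested if/then statements.
print(export_text(tree, feature_names=predictors))
```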

The Current Study

The study was conducted on 1,527,119 student end-of-course responses for the years 2017-2021 (pre- and post-pandemic), gauging the overall impact by college, course level, course modality, term, class size decile, and department. Researchers examined the pattern of student responses, specifically how many students were assigning identical ratings to every item. If a student assigned an excellent rating to every aspect of a course, the total score would be forty-five (nine items rated five); if a student assigned a poor rating to every item, the total would be nine (nine items rated one). We termed this the straight-line, or zero-variance, effect: students bypassing the online rating instrument to get to their assignments or final examination.

The results were surprising. Of the 1.5+ million end-of-course responses examined, 1,008,774 (66%) had been straight-lined, raising the question of the validity of these data for course and instructor evaluation purposes. That result held for every comparison: college, course level, course modality, term, class size decile, and department. The phenomenon was not confined to the margins (all 5s = 45, all 1s = 9); it held for the less extreme totals as well. A total score of 36, for instance, might indicate all fours, but any number of other response patterns add to that score. That was not the case here: the percentage of straight-lining at each uniform total score was 45 (all 5s) = 100%, 36 (all 4s) = 93%, 27 (all 3s) = 92%, 18 (all 2s) = 91%, and 9 (all 1s) = 100%. Students appear to be straight-lining across the entire range of the end-of-course rating scale. The distribution of the straight-lined responses across those total scores was even more interesting: 45 = 67%, 36 = 14%, 27 = 12%, 18 = 4%, and 9 = 3%. Overwhelmingly, when students straight-lined, they responded with all fives, an interesting behavior we are exploring now.

Even more surprising and confusing were the measurement characteristics of the two sets of responses: the zero-variance set (66%) and the set that appeared to have been answered the way the scales were intended (34%). Customarily, when properly responded to, SPIs exhibit excellent psychometric properties, even though they are criticized for bias and validity problems. When those indices for the two sets were compared in this study, the results were identical: excellent reliability, sampling adequacy, and item discrimination, with small standard errors. Examining only these properties, there would be no way to distinguish between the two response sets. From the measurement armchair, the straight-line responses should have blown up, yet they did not, another finding for which we are seeking an explanation.
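For colleagues who want to run the same check on their own campus data, the following is a minimal sketch, in Python with pandas, of the zero-variance flag and the two summaries reported above, plus a reliability comparison. The column names, the input file, and the choice of Cronbach's alpha as the reliability index are our illustrative assumptions, not the study's actual pipeline.

```python
# A minimal sketch of straight-line detection. Assumes one row per
# response with columns item1..item9 coded 1-5; names and the input
# file are hypothetical.
import pandas as pd

items = [f"item{i}" for i in range(1, 10)]   # the nine rated items
spi = pd.read_csv("spi_responses.csv")        # hypothetical data file

# A response is straight-lined when all nine ratings are identical,
# i.e., the row has zero variance across the items.
straight = spi[items].nunique(axis=1) == 1
total = spi[items].sum(axis=1)

print(f"straight-lined overall: {straight.mean():.0%}")

# Share of each uniform total score contributed by straight-lined rows,
# e.g., how many responses summing to 36 are literally all fours.
print(straight.groupby(total).mean().loc[[45, 36, 27, 18, 9]])

# Cronbach's alpha, one standard reliability index, computed separately
# for the two response sets to mirror the psychometric comparison.
def cronbach_alpha(df: pd.DataFrame) -> float:
    k = df.shape[1]
    item_var = df.var(axis=0, ddof=1).sum()
    total_var = df.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

print("alpha (straight-lined):", cronbach_alpha(spi.loc[straight, items]))
print("alpha (varied):        ", cronbach_alpha(spi.loc[~straight, items]))
```

Note that for a perfectly straight-lined set the item scores move in lockstep, so reliability indices such as alpha approach their maximum, which is consistent with the counterintuitive finding that the two response sets are psychometrically indistinguishable.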

These findings are of great concern to the UCF administration, faculty, and students. The office of Academic Affairs has asked the Vice President of Faculty Excellence to initiate a campus-wide assessment of the impact and opportunity costs of proceeding with the SPI process currently in place. We presented these findings at our Faculty Center's Summer Workshop and are soliciting feedback from faculty regarding the SPI process. We are also presenting at national conferences and publishing this research to encourage our colleagues to investigate whether this is happening on their campuses as well. If so, it is a problem that must be addressed; however, from this problem comes a critically important opportunity: how can we effectively integrate the student voice into the educational process? We have not done this well, although that student voice is loud and clear on RateMyProfessors, YouTube, Twitter, Instagram, Meta, and many other outlets. The walls of classrooms have come down, and students have a new sense of entitlement about how they evaluate their educational experience. We need to find an effective way to capture their voices in an official manner as well, to help faculty and the institution continue to focus on the quality of education and the needs of all students.