Using Evaluation of Online Teaching Data to Make Meaningful Change

Concurrent Session 9
Streamed Session

Brief Abstract

You’ve completed your faculty evaluations of online teaching. Now what? This presentation goes beyond the authors’ book, _Evaluating Online Teaching_, to discuss how to use evaluation data to make meaningful changes in professional development, faculty evaluation processes, and the retention of adjunct faculty teaching online.

Presenters

B. Jean Mandernach, Ph.D. is Executive Director of the Center for Innovation in Research and Teaching at Grand Canyon University. Her research focuses on enhancing student learning in the online classroom through innovative instructional and assessment strategies. In addition, she has interests in the development of effective faculty evaluation models, perception of online degrees, and faculty workload considerations. Jean received her B.S. in comprehensive psychology from the University of Nebraska at Kearney, an M.S. in experimental psychology from Western Illinois University, and a Ph.D. in social psychology from the University of Nebraska at Lincoln.
Thomas J. Tobin is the Director of Curriculum and Programming in the Distance Education Professional Development area of the University of Wisconsin-Madison, where he identifies trends and themes in the field of distance education; shapes the scope and focus of each year's DT&L conference in collaboration with its advisory board and staff; and directs the curriculum of the university's professional-development programs. Before joining UW-Madison, Tobin spent seven years in the Learning and Development arm of Blue Cross and Blue Shield of Illinois, and then served for five years as the Coordinator of Learning Technologies in the Center for Teaching and Learning (CTL) at Northeastern Illinois University in Chicago. Tom is also an independent faculty developer and professional consultant in State College, Pennsylvania. He is an internationally-recognized speaker and author on topics related to quality in distance education, especially copyright, evaluation of teaching practice, academic integrity, and accessibility/universal design for learning. Since the advent of online courses in higher education in the late 1990s, Tom’s work has focused on using technology to extend the reach of higher education beyond its traditional audience. He advocates for the educational rights of people with disabilities and people from disadvantaged backgrounds. Tom serves on the editorial boards of _eLearn Magazine_, _InSight: A Journal of Scholarly Teaching_, the _Journal of Interactive Online Learning_, and the _Online Journal of Distance Learning Administration_. _Re-Framing Universal Design for Learning in Higher Education_ (with Kirsten Behling) was released by West Virginia University Press in Fall 2018, and Tom is currently writing _Going Alt-Ac: A Guide to Alternative Academic Careers_ with Katie Linder and Kevin Kelly, expected in early 2019 from Stylus Press. His most recent books include _Evaluating Online Teaching: Implementing Best Practices_ (Wiley, 2015) with Jean Mandernach and Ann H. Taylor, and his comic book (yes, comic book) _The Copyright Ninja_ (St. Aubin Comics, 2017), which teaches college and university faculty members, support staff, and campus leaders about copyright, fair use, licensing, and permissions. Tom is also proud to have represented the United States on a Spring 2018 Fulbright Scholar core grant, under which he helped Eötvös Loránd University in Budapest, Hungary to develop its first faculty-development program.
Dr. Ann H. Taylor has worked in the field of distance education since 1991, focusing on learning design and faculty development. As the Assistant Dean for Distance Learning and Director of the John A. Dutton e-Education Institute at Penn State University, Ann is responsible for guiding the College of Earth and Mineral Sciences' strategic vision and planning for online learning. She works with faculty, administrators, stakeholders, and Institute staff to plan and implement online programs that are tailored to the needs of adult professionals worldwide. She serves on University committees focused on strategic planning, policies, and procedures related to Penn State's distance learning initiatives and has been an active member of the University Faculty Senate since 2007, where she currently serves as its elected Secretary. Ann regularly works with University colleagues to create resources for faculty who teach online and face-to-face, and she shares her work as a frequent public speaker and author.

Extended Abstract

Context

The growth of online learning has created an opportunity to re-examine teaching practices through a scholarly lens. The review and evaluation of teaching practices in general are sometimes performed in a pro forma fashion, or only for summative reasons like promotion and tenure decisions. Donna Ellis at the University of Waterloo sees teaching evaluation as a holistic enterprise: “teaching and its assessment should . . . be seen as scholarly activities. The review of teaching is an intentional process—one that is carefully designed, situated in context, and leads to interpreting teaching effectiveness based on multiple sources and types of evidence” (2012). Online courses offer us a rich variety of information sources from which to study and improve our teaching practices and develop our faculty.

Once colleges and universities have identified the context-specific perspectives and factors that they want to measure in their online instructors (cf. Berrett, 2017; Jacob, Stange, & De Vlieger, 2017; and Mandernach, Donnelli, Dailey, & Schulte, 2005), they design policies and instruments that shape the process of evaluating online teaching (Taylor, 2010; Tobin, Mandernach, & Taylor, 2015). This, however, is only one end of the evaluation “pipeline.”

Problem

While evaluation of teaching is widespread, effective use of evaluation data (beyond its impact on the individual faculty member being evaluated) is not. Complicating the matter are the complexity of the online format and an increased reliance on geographically remote, part-time faculty to teach in this modality. Participants in this session will leave knowing how to

  • identify and isolate evaluation factors that can inform more effective evaluation;
  • train department chairs, deans, and other senior leaders to move beyond “I know it when I see it” assessment techniques to create more proactive and useful evaluation processes; and
  • determine best practices for finding, onboarding, and retaining quality online instructors as a function of evaluation outcomes.

The session facilitators literally wrote the book on how to do this effectively. _Evaluating Online Teaching_ was published in June 2015, and this session provides beyond-the-book updates in three critical areas: professional development, administrative support, and instructor hiring and retention.

Prerequisites

Participants should have experience with or responsibility for designing, teaching, or evaluating online courses. No specific prerequisites are required to participate, but please be prepared for a hands-on session. Bring a laptop or other Internet-browser-capable device, since you'll be evaluating sample online courses.

Approach & Outcomes

Participants in this session should come prepared to be part of the conversation. Please bring an Internet-browser-capable device (e.g., a tablet or laptop). During the session, you will

  • learn strategies for making data meaningful (using evaluations of online teaching to improve teaching and learning);
  • practice how to train administrators to look at online courses effectively; and
  • define quality practices for online teaching to attract, select, and retain skilled online instructors.

References & Further Reading

Berrett, D. (2017, May 9). Students don’t always recognize good teaching, study finds. Chronicle of Higher Education. http://www.chronicle.com/blogs/ticker/students-dont-always-recognize-good-teaching-study-finds/118274.

[Ellis, D.] (2012). Peer review of teaching: A holistic approach to the review of teaching. University of Waterloo, Centre for Teaching Excellence. https://uwaterloo.ca/centre-for-teaching-excellence/teaching-resources/teaching-tips/professional-development/reviewing-teaching/peer-review-teaching.

Jacob, B. A., Stange, K., & De Vlieger, P. (2017). Measuring up: Assessing instructor effectiveness in higher education. EducationNext 17(3). http://educationnext.org/measuring-up-assessing-instructor-effectiveness-higher-education/.

Mandernach, B. J., Donnelli, E., Dailey, A., & Schulte, M. (2005). A faculty evaluation model for online instructors: Mentoring and evaluation in the online classroom. Online Journal of Distance Learning Administration 8(3). http://www.westga.edu/~distance/ojdla/fall83/mandernach83.htm.

Ryalls, K., Benton, S., & Li, D. (2016). Response to “Zero Correlation between Evaluations and Learning.” IDEA Editorial Note #3. http://www.ideaedu.org/Portals/0/Uploads/Documents/Response_to_Zero_Correlation_Between_Evaluations_Teaching.pdf.

Shreckengost, J. (2013). Proactively guiding instructor performance through the use of a performance dashboard and real-time data. Presentation at the Sloan-C International Conference on Online Learning, Orlando, FL.

Taylor, A. H. (2010). A Peer Review Guide for Online Courses at Penn State. http://facdev.e-education.psu.edu/evaluate-revise/peerreviewonline.

[Taylor, A. H.] (2011). Faculty competencies for online teaching. Penn State Online. Faculty Engagement Subcommittee. https://www.e-education.psu.edu/files/OnlineTeachingCompetencies_FacEngagementSubcommittee.pdf.

Tobin, T. J. (2004). Best practices for administrative evaluation of online faculty. Online Journal of Distance Learning Administration 7(2). http://www.westga.edu/~distance/ojdla/summer72/tobin72.html.

Tobin, T. J., Mandernach, B. J., & Taylor, A. H. (2015). Evaluating Online Teaching: Implementing Best Practices. San Francisco: Jossey-Bass.