Automated Teaching and Faculty Work: What Do We Know? What Should We Do?
Concurrent Session 6
As current experiments show, the automation of postsecondary teaching represents new uses of artificial intelligence (AI). It will mean changes in the skills, attitudes, and roles of the faculty, and new forms of collaboration with IT professionals. And automation will prompt varied faculty views of technological innovation and online learning.
Automation has come late to faculty work, but with ubiquitous online learning, unceasing technological innovation, and pressures on postsecondary institutions, its role will increase. A host of recent studies has focused attention on the inevitable impact of “our brilliant technologies,” as Erik Brynjolfsson and Andrew McAfee named them in their influential The Second Machine Age (2014). Late last year a widely recognized study from the global consulting firm McKinsey (“Jobs Lost, Jobs Gained”) weighed the job displacement against the adaptation prompted by AI. The New York Times cited the McKinsey report and studies like it as signs of the “AI Boom” (documented, for example, by Stanford’s new comprehensive annual “AI Index”) and of its consequences for workplace practices and how we think about them. The “Boom” is the context for this presentation on automation and postsecondary teaching.
Of course, automation has a history in education, from behaviorist projects like B.F. Skinner’s “teaching machines” to PLATO, the first digital courseware. By now, automated grading of multiple-choice exams is well established in face-to-face, hybrid, and online teaching. This presentation explores how we are surpassing that modest goal. It asks what the challenges of AI-driven innovation will mean for academic work. Will the increasingly sophisticated digital augmentation of faculty work lead to its replacement?
The format follows the subtitle in getting to a conversation. The first part identifies what we are learning from three experiments underway in postsecondary educational automation, a backdrop in experimental practice for the questions posed in the second part:
Adaptive Learning, based on insights from the learning sciences, has introduced fresh approaches to data in online teaching: the use of algorithms in the continuous assessment of student performance and then in the determination of the content and sequencing of instructional resources. Course faculty, acting as facilitators, can have roles in designing both. Adaptive learning also often features the work of “learning engineers” (as IT professionals are sometimes called) and commercially produced courseware. Courses are presented as automated systems allowing self-paced mastery of the material, with lessons customized according to performance and abilities. The goal is personalized learning at scale, meeting precise objectives. The faculty are guided by data toward targeted facilitation and course revision. A national project to “accelerate” the broad adoption of adaptive learning, managed by the Association of Public and Land-grant Universities (APLU), showed after its first year that while there has been success in recruiting faculty to participate in course design, some see the format as in conflict with instructional autonomy. We will learn about the longer-term meanings for faculty work as the APLU project unfolds.
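The core loop described above, in which continuous assessment determines the next instructional resource, can be sketched in a few lines. This is a minimal, hypothetical illustration of the idea, not any APLU or commercial courseware; the function names, mastery threshold, and resource labels are all invented for the example.

```python
# Minimal sketch of adaptive sequencing: recent assessment scores on the
# current learning objective determine whether the system assigns a
# remedial or an advanced resource next. Threshold and names are assumed.

MASTERY_THRESHOLD = 0.8  # illustrative cutoff for demonstrated mastery

def next_resource(scores, remedial, advanced):
    """Choose the next lesson from a student's recent scores.

    scores   -- list of recent scores in [0, 1] on the current objective
    remedial -- resource assigned when mastery has not yet been shown
    advanced -- resource assigned once mastery is demonstrated
    """
    if not scores:
        return remedial  # no data yet: start with the baseline lesson
    mean = sum(scores) / len(scores)
    return advanced if mean >= MASTERY_THRESHOLD else remedial
```

In a real system the scoring model, sequencing policy, and content would be far richer, but the shape is the same: performance data in, instructional decision out, with faculty designing the resources on both branches.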
Virtual Teaching Assistants, as in the “Jill Watson” experiment at Georgia Tech, represent AI in what might be called a relational mode. A recent report from Tech’s Commission on Creating the Next in Education reflects its belief that the Internet has made knowledge available “at the touch of a button.” Even so, in order “to guide students through complex content domains, [to] arrange experiences that allow them to apply their budding expertise, and [to] provide effective feedback that enable them to refine and improve cognitive models at scale and with a high degree of quality, AI will be necessary.” A showcase activity has been the design and use of “Jill,” a virtual teaching assistant introduced in 2016 for a course in the online M.S. in Computer Science program at GT. Based on the IBM “Watson” platform, “Jill” began as a question-answering agent for course operations, in effect a version of the “chatbots” marketed by Amazon and others. But Georgia Tech anticipates going far beyond such a limited role. “Jill” and the digital TAs to follow will become “multifunctional” educational “partners” that “combine cognitive and metacognitive tutoring tasks normally associated with human teachers, such as coaching on open-ended projects and critical thinking development.” GT computer scientist and online program leader Ashok Goel acknowledges the obvious: “If in 10 years every student has such an assistant and every teacher has such an assistant, then the role of the teacher fundamentally changes.”
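The limited role “Jill” began with, answering routine course-operations questions, can be suggested with a toy sketch. This is not the IBM Watson platform or Georgia Tech’s code; the FAQ entries, matching rule, and fallback message are invented, and real agents use far more sophisticated language understanding.

```python
# Toy question-answering agent for course operations: match a student's
# question against a small FAQ by counting shared words, and hand off to
# a human TA when nothing matches. All content here is hypothetical.

FAQ = {
    "when is the project due": "Project 1 is due Friday at 11:59 pm.",
    "where do i submit my assignment": "Submit through the course portal.",
}

def answer(question):
    """Return the canned answer whose FAQ entry shares the most words."""
    words = set(question.lower().split())
    best, overlap = None, 0
    for q, a in FAQ.items():
        score = len(words & set(q.split()))
        if score > overlap:
            best, overlap = a, score
    return best or "Let me forward that to a human TA."
```

Even this crude matcher shows why a chatbot can relieve TAs of repetitive logistics questions, and why the “multifunctional partner” Georgia Tech envisions, which coaches open-ended work, is a categorically harder problem.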
Machine Assessment of Student Writing is central to the University of Michigan’s AI-enabled personalized format for guiding student learning. The M-Write II project, launched in 2017, carries the banner of the University’s Third Century Initiative, designed to demonstrate innovation across the curriculum. The goal is not to grade student essays (a feature of efforts in the past decade to automate writing assessment) but to support “learning to write pedagogy” in fields where writing is secondary to subject matter (e.g., STEM). The experiment is beginning with a handful of courses, a reminder of the virtues of being discipline specific in digital innovation, with faculty themselves (in chemistry, math, and other fields) learning enough about writing to have roles in designing the online teaching. The long-term goal is to automate writing instruction across the vast lower-division curriculum, or the “gateway” courses. M-Write II combines automated peer review, text analysis, and personalized feedback using E2Coach (UM’s customized system for communicating with students about their academic work) to create an algorithm-based infrastructure for teaching writing at scale. Project leader and English professor Anne Gere says that M-Write II is “about helping students learn better, and writing is a very powerful form of student engagement and learning. We’re trying to harness that power.”
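One piece of such an infrastructure, concept-based feedback rather than grading, can be suggested with a toy sketch. This is not M-Write II or E2Coach; the rubric terms and feedback messages are invented for a hypothetical chemistry prompt, and the project’s actual text analysis is far richer.

```python
# Toy concept-based writing feedback: flag rubric concepts a student's
# response never mentions, returning feedback instead of a grade.
# The rubric and messages below are invented for illustration.

REQUIRED_CONCEPTS = {
    "equilibrium": "Explain how the system reaches equilibrium.",
    "le chatelier": "Connect your prediction to Le Chatelier's principle.",
}

def feedback(essay):
    """Return feedback messages for rubric concepts the essay omits."""
    text = essay.lower()
    return [msg for term, msg in REQUIRED_CONCEPTS.items()
            if term not in text]
```

The design choice matters pedagogically: the output is formative prompts for revision, not a score, which is the distinction the project draws between supporting writing-to-learn and the earlier decade’s automated essay grading.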
The second and primary part of the session, a conversation, will proceed from questions about how the faculty, with their IT partners, might respond to automation at an “inflection point” (as Brynjolfsson and McAfee put it) in the history of technology and work. As the first part shows, promising experiments in AI are underway, each accompanied by optimistic projections for change in postsecondary education, if not its transformation. The 2017 national survey of faculty attitudes toward online learning (by Inside Higher Ed and Gallup) showed increasing enthusiasm for it. But automated teaching presents new challenges and, indeed, questions about the future of online education. Allowing for differences within each category, responses to the prospect of automation will vary. Three are presented below in the form of questions (with some additional background) for consideration by online instructors and course designers.
1. Should instructional automation be embraced as necessary and desirable for its contributions to student learning as part of the revision of teaching in the digital age? It can offer relief from conventional assessment in very large classes and, in supporting customization (or “personalized learning”), it can offer timely data on students’ performance and needs to guide instruction or facilitation. Paradoxically, for some critics of higher education, welcoming automation to teaching can signify more attention to students (as in “student-centered” teaching), a counterweight to many institutions’ preoccupation with research productivity. Participants will be invited to propose rationales for automated teaching beyond efficiency and customization.
2. Should the faculty, and their IT partners, be skeptical about adaptive learning technologies on educational, professional, and institutional grounds? Professors might join the critical vanguard, as in Evgeny Morozov’s To Save Everything, Click Here: The Folly of Technological Solutionism (2013). For most skeptics, nothing can replace face-to-face teaching and what it means for student engagement in learning. Automation can mean “deskilling,” or the erosion of long habits of direct encounters with students. Resistance to automation out of fear of losing academic jobs prompted media studies scholar Clay Shirky to call such professors “teamsters in tweed.” Indeed, for some professors, automation is merely a resource for neo-liberal management of the university to favor instructional economies and efficiency. So, an allied question for the session might be: “Could a teaching innovation today in adaptive learning become tomorrow an off-the-shelf online course facilitated by a low-cost part-time teacher?”
3. Should the faculty and their IT partners simply observe automation and estimate its effectiveness as the projects above (and others) offer a clearer picture of what it means for teaching and learning, and for faculty work? NYU political scientist Michael Laver is ready to accept anything that relieves “pedagogical drudgery.” And that can happen “if we can only get over our atavistic fear of outsourcing.” Technology will inevitably bring change, and “our job is to ride the tiger and not get eaten by it.” A 2018 report from the management consulting firm Accenture argues for recognizing how AI amplifies human abilities. True enough, technology changes faster than people and their work habits. Thus, we can also ask: “What might convince the faculty that reconfigured roles reflecting the growth of automated teaching could turn out to be productive and satisfying ones?”
Finally, Nicholas Carr reminds us (in The Glass Cage: Automation and Us) of our complementary educational obligations, in the design of new systems and critical reflection on their use. Carr himself focuses on the ways that automation changes the skills, attitudes, and roles of people working with machines. Universities may be slow to change, and the faculty too cautious about technology. But as Carr says: “We should be careful about what we make.”