During the holiday break, I finally had the opportunity to catch up on some professional development, and I’m so glad I did! Like many of us, I see countless webinars and workshops pop up on my radar throughout the year that I’d really like to attend, but despite my best efforts, some of the best ones conflict with my schedule and fall through the cracks. Nevertheless, I try to maintain a list of those that have been recorded, just in case I find myself with some extra time to revisit them.
Of all these, I was most captivated by a webinar OLC hosted this past fall, “Student Motivation in the Age of AI: A Discussion on Creating Assessments that Matter,” which offered a chance to rethink our approach to assessments in a rapidly evolving educational landscape. As I listened to Dr. Chris Schunn and Dr. Ryan Straight, it felt less like a presentation and more like a conversation—a dialogue that prompted me to consider what “meaningful assessments” truly entail in the context of AI.
One of the first ideas that resonated with me was the importance of assessments that students actually care about. It’s easy to say we want to create assessments that matter, but what does that really look like, especially when AI is capable of answering many straightforward questions for our students? Dr. Schunn emphasized the role of relevance in building motivation. In other words, students should see value in what they’re doing beyond the grade. This raises a critical question:
What Makes Assessments Meaningful?
Dr. Schunn provided an interesting lens on peer review as a way to make assessments feel more dynamic. He discussed how this structure lets students engage with one another’s ideas, pushing them to think critically—not just for a grade, but to communicate something real to their peers. Imagine a task where students are asked to analyze a current issue in AI ethics. Rather than only submitting their thoughts to an instructor, they’d share their analysis with classmates, perhaps even debate contrasting viewpoints. This setup makes the assignment much more engaging because the audience is expanded, and students are accountable to more than just a gradebook.
This approach aligns well with the concept of authentic assessment, where tasks mimic real-world challenges. If students are tasked with assessing AI-generated content for credibility, they’re practicing a skill with direct applications in an AI-laden world. They’re learning not only to analyze but to discern quality—an essential skill when AI can produce content that looks convincing yet might lack depth or accuracy.
Peer Review as a Motivational Tool in the AI Age
A particularly engaging part of the conversation was the discussion around peer review in large courses. When students know their work will be read by classmates, it adds a layer of motivation that a traditional instructor-only audience doesn’t necessarily inspire. But peer review isn’t just about motivation; it’s about building critical thinking skills. Dr. Straight made an excellent point about AI’s potential role in providing preliminary feedback that can set students up for more meaningful peer-to-peer engagement.
Imagine an AI tool that reviews a draft for clarity, coherence, or basic grammar, enabling students to focus their peer review on more substantive issues like argument strength or originality. This approach could be particularly powerful in large classrooms where personalized feedback from instructors alone is limited. In this way, AI acts as a preparatory scaffold, helping students arrive at a more productive peer review experience.
The Impact of AI on Student Accountability
One of the questions Dr. Straight posed got me thinking: How does AI affect students’ sense of accountability in their work? With AI tools readily available to aid—or even complete—assignments, the challenge of motivating students to do original work becomes more complex. For instance, if students rely on AI to draft responses, are they still actively learning? Or does this diminish the engagement we hope to foster?
This challenge pushes us to think creatively. Perhaps we could design assessments where AI isn’t a shortcut but a component of the learning experience itself. For example, students could use AI to generate sample responses and then critique these examples, identifying strengths and weaknesses. By analyzing AI-generated content, they’re not only honing critical thinking skills but also learning to navigate the capabilities and limitations of these tools. This structure emphasizes metacognition—thinking about one’s own thinking—since students must reflect on why an AI-generated response does or doesn’t meet high academic standards.
Encouraging Autonomy and Relevance
Another concept that came up during the discussion was autonomy—giving students a sense of control in their learning. Dr. Schunn and Dr. Straight suggested that students are more motivated when they have choices in their assignments. For example, rather than assigning a single essay topic, we might allow students to select from several options, or even propose their own project. In an AI-rich environment, this autonomy could extend to how they use AI as well. Imagine giving students the option to experiment with AI in their initial research phases, then reflect on how it shaped their final product.
What does this look like in practice? Suppose students are tasked with researching AI’s impact on a field of their choice—education, healthcare, finance, etc. They could start by using an AI tool to gather initial data points, but the bulk of the assignment would involve analyzing, comparing, and evaluating this information, ultimately leading to an original insight that’s their own. This setup not only respects student autonomy but also emphasizes critical evaluation, which AI alone can’t provide.
Beyond Compliance: Motivating Students with Higher Standards
Perhaps one of the most compelling takeaways was the importance of moving beyond compliance with AI tools. As Dr. Straight emphasized, our goal shouldn’t be to merely “check the box” on AI usage. Instead, we should integrate AI in ways that inspire genuine intellectual engagement. For example, a peer review process that includes AI could help students get beyond surface-level critique to focus on deeper issues like argument quality and evidence credibility.
Imagine a classroom where AI facilitates initial feedback on structure or grammar, allowing students to focus their peer review on more complex aspects of the content. The discussion from the webinar sparked ideas for expanding assessments in ways that keep students motivated to think critically and deeply. By shifting the focus to critical analysis, reflection, and discussion, we’re reminding students that their work matters—that they are part of a learning community where their contributions are valued.
Final Thoughts: A New Direction for Assessments in the Age of AI
Reflecting on this conversation, it’s clear that AI doesn’t have to be a threat to student motivation—it can be a tool that helps deepen it. But realizing that potential requires us to integrate AI thoughtfully, ensuring that it serves as a scaffold for higher-level learning rather than a shortcut. I left the webinar with new ideas and questions: How can we continue to evolve assessments in ways that encourage autonomy, relevance, and genuine intellectual engagement? And how might we use AI not just to streamline tasks but to expand the boundaries of what’s possible in student learning?
For those who haven’t yet seen the webinar, I highly recommend catching the replay. The ideas discussed offer a fresh perspective on the future of assessments and AI in education—a perspective that could inspire us all to rethink the way we engage students in an AI-rich world.
About the Author
Phil Denman serves as Coordinator of the Quality Scorecard Suite for the Online Learning Consortium, where he leads quality assurance programs related to online, blended, and digital learning. He brings over 20 years of experience to OLC, including 15 years as an instructional designer. He initially helped launch UC Berkeley’s first two fully online Master’s degree programs, then spent 10 years at San Diego State University as Campus Coordinator for the Course Redesign with Technology initiative, implementing high-impact practices to improve student success, and as Faculty Lead for Quality Assurance, where he developed programs to evaluate and improve online and blended learning. He holds an M.S. in Education from California State University, East Bay and a B.A. in Communication Arts from Allegheny College.