
In higher education, academic integrity policies dominate conversations about students’ clandestine use of generative AI. This is understandable and appropriate. Misuse is cheating, and rules matter. But for asynchronous online courses, the real crisis lies upstream of academic integrity policies: it is the erosion of the interpersonal trust those policies are meant to protect. When students suspect that the peers responding to their discussion posts may actually be chatbots, the social foundation of collaborative online learning weakens. And neither policies nor pedagogy alone can fully repair it.

Asynchronous peer-to-peer interactions have long been a core feature of online learning. Two decades of research show that activities like discussion forums reduce isolation, deepen engagement, and expand the locus of expertise beyond the instructor, fostering community-focused learning.

But these benefits depend on trust. Students need to believe their classmates are present, authentic, and invested in order to feel that they are not alone. In asynchronous courses, that belief rests on discourse cues—specific references to course material, personal anecdotes, colloquial turns of phrase, etc.—and on institutional guarantees that the person behind the keyboard is who they claim to be.

Generative AI disrupts those signals. Since 2022, tools like ChatGPT have produced text that can plausibly pass for human. And with surveys showing that more than a quarter of students have used AI to produce assignments on their behalf, the possibility of machine-ghostwritten posts is not hypothetical. Even if no one in a given discussion forum tries to pass off chatbot-generated content as their own, the mere suspicion can still corrode both the social dynamic of discussions and the effort students put into constructing their posts.

Rules Are Not Enough

The instinctive response to AI misuse—to tighten policies—can backfire. A blanket ban without strong mechanisms for enforcement invites students to further conceal their AI use, not to comply with the rules.

Disclosure requirements like citation schemes are well intentioned. But they amount to a ritual performance of credibility, not a substantive, verifiable demonstration of it. Students are left to wonder whether peers’ acknowledgments are honest, and whether a lack of acknowledgment is a tacit admission of cheating.

Finally, enforcement mechanisms like AI detection technology create problems of their own. AI detectors are prone to false positives that disproportionately flag multilingual writers and those whose work is too formal and polished, raising concerns about equity. Further, like heavy-handed rules, they can signal instructor distrust, which may seep into students’ interactions with one another.

Policy, in other words, has its place. But it must focus on building community norms rather than too zealously policing students’ actions.

A Multipart Approach

What is needed instead is a multipart solution that weaves policy together with pedagogical and relational approaches to building interpersonal trust. An effective strategy might:

Set Rules That Clarify Rather Than Accuse

Help students help each other by setting boundaries around AI use. For example, you might allow AI for brainstorming but prohibit it for authoring discussion posts, then explain why the distinction matters for learning.

You might extend disclosure requirements to all submissions (“I used AI” / “I did not”), so that acknowledgments do not become a tell that triggers suspicion. Then make it easy for students to comply by making the acknowledgment a sentence, not a long appendix.

Making the right course of action the easy course—making it easy to buy in—can transform abstract AI policies into social norms that in turn contribute to a trusted community.

Implement AI-Resilient Pedagogy

Design interactions that make human engagement visible. You might pair a written post with a short video reflection, or with an asynchronous screen-share that asks students to walk through their thought processes. And you might incorporate discussion prompts that explicitly connect course concepts to personal experience—something that AI struggles to fake convincingly.

For example, ask students to outline a process they have experienced and identify pain points that might be addressed using methods from their coursework. This promotes the kind of self-disclosure that enhances credibility in peer-to-peer interactions, while also posing an obstacle to convincing AI-generated writing. Or ask students to record video of themselves delivering an elevator pitch. This helps students literally see each other, while presenting a task AI cannot currently replicate without extensive intervention and refinement from the user.

These strategies are not foolproof. And they can be enhanced by orienting students early on to effective AI use in educational settings, whether as a component of the course or at the program level. An interactive AI orientation like the one outlined here has the advantage of promoting equity by closing gaps in AI literacy among students. It further promotes the establishment of social norms, and therefore interpersonal trust, by giving students some agency in defining what appropriate use means for their learning community.

Enhance Learning by Focusing on Relationships

If the suspicion that classmates are not who they say they are can erode trust, then mechanisms that foster stronger relationships can bolster it. Online courses must go beyond the obligatory “introduce yourself” forum by building community into the regular rhythm of the work.

Part of this can be accomplished through the asynchronous pedagogical techniques outlined above, such as discussion prompts that include a personal reflection element.

But in addition, consider requiring at least one synchronous touchpoint: a class meeting, a brief instructor check-in, a collaborative brainstorming session ahead of the final project, or something else. These touchpoints can be adapted to larger courses or classes with students in multiple time zones. They do not have to be one-on-one with the instructor or have every student present. The point is that seeing faces and hearing voices reassures students that their classmates are authentically engaged.

Finally, consider integrating small-group projects that include a synchronous planning meeting. The more deeply students are able to get to know each other, the more they will be able to view their classmates as credible and authentic collaborators. And the more incentive they will have to act like credible partners themselves.

Next Steps for Strengthening Trust

Retooling to prioritize interpersonal trust need not be onerous. It can start with simple, adaptable steps.

For instructors:

  • Review course materials and identify opportunities for students to let their personalities, and their personal experiences, show.
  • Build in a few low-lift tasks—like video-recorded reflections—that help students see the human presence behind their classmates’ keyboards.
  • Require at least one synchronous touchpoint to help students get to know each other—and to build the type of rapport that will foster mutual accountability.

For administrators:

  • Work with students and instructors to design and deploy AI literacy programming that helps participants engage with the technology in a way that is pedagogically appropriate.
  • Support and celebrate pedagogical innovation by connecting instructors who do great work on building trusting relationships, and by highlighting their work to the broader faculty community.
  • Create a curated repository of practices that prove effective at fostering peer-to-peer engagement; there is no reason why every instructor should have to reinvent the wheel as they prepare to teach an online course.

Academic dishonesty did not begin with generative AI. But the specter of AI that lingers in the background of every classroom casts a pall of uncertainty that policies alone cannot dispel. In asynchronous online courses—where students may never meet face to face—that uncertainty poses an existential problem, chipping away at the foundation of trust that makes asynchronous interactions work.

By moving past rules alone, and by deliberately weaving presence and relationship-building into online pedagogy, we can help students establish credibility with one another. And in doing so, we do more than set boundaries around AI. We create a more connected and effective online environment in which to learn.

Adam D. Zolkover is Associate Director for Curriculum Design and Online Education for the Master of Health Care Innovation and related programs in the University of Pennsylvania’s Perelman School of Medicine. He works with faculty to translate their expertise into effective and engaging courses that facilitate learning and community. Adam has previously taught folklore, literature, and public speaking at universities in Philadelphia and has served as the online editor for the Institute for Civility in Government’s Civility Blog. He holds a BA in History from the University of California, Berkeley, and a Master’s in Folklore from Indiana University.
