
A healthy and progressive dialogue about technology, particularly about AI in education, should be based on the best facts available. It isn’t easy to stay current considering how quickly the technology landscape changes. When bad actors come into the mix, it becomes impossible.

Bad Actors Sell Confounding Products, and a Solution That Doesn’t Work

A class of AI detection companies engages in particularly egregious practices: manufacturing a need and then selling a solution that doesn't work. These bad actors sell paraphrasers and humanizers alongside their detection services, promising to bypass AI detection systems, a promise that, very conveniently, can be checked on their own site with their own detector. When the unwitting customer's prose fails that first check, they pay to run it again and again while the bypass-and-detect scheme profits.

Research From 2023 No Longer Holds Water

As if the cons weren't bad enough, add the problems of arcane language and misinformation, and it is easy to see why this product class is difficult to understand. The language used to describe detection has not evolved since it first reached the public eye in 2023, a millennium in AI years. Likewise, our interpretation of where AI detection fits in instruction is also years behind.

Here are some examples. In just the past two weeks, I have read claims about one prominent AI detection company presented as current fact even though the information the author relied on dated from March 2023. Likewise, I have read opinion submissions from two different people citing statements from colleges and universities made in April 2023.

I’ve also seen more people than I can count citing studies about AI detection that are over two years old, originally published in May and June 2023. Few, if any, of the ideas, assumptions, data, or conclusions from that nascent stage of development are true anymore because detection technology has evolved nearly as quickly as GenAI itself.

One of the most oft-repeated pieces of outdated information is that AI detection is unreliable or inaccurate. Three years ago, there may have been some truth to that, but no longer. Technical briefs and news articles about the earliest detection tools have been superseded many times over by third-party research from academic institutions showing that quality, well-engineered AI transparency and detection tools are highly accurate. Not only are they more accurate overall, they are robustly resistant to AI-powered editing, language and translation spinning, paraphrasers and humanizers, and other forms of intentional deception.
Despite research showing widespread improvement in detection accuracy, multiple truly terrible AI detection systems keep getting mixed in among the legitimate tools. Even worse, many of those detectors are quite possibly terrible by design.

The makers of these tools never actually intended to support integrity in academia or in other use cases such as content moderation; they built them to further their own agendas. The tools themselves are created to deceive, and their marketing and publicity efforts use deception and obfuscation to lure buyers who are vulnerable and easily misled.

Defining a New Category as AI Transparency Tools

One way to clear up the confusion is to change what we call "AI detection." The field has advanced so far that calling it "detection" no longer does it justice. After three years of advancement, some of these tools can now spot individual portions of text that have been only slightly modified by AI. It is possible to see the transitions from human writing to GenAI and to tell when AI was used for editing and clarity. In truth, AI detection is now AI transparency technology.

AI transparency conveys a productive, give-and-take interaction between educators and their students. It reflects the ability to analyze text and examine the exercise of writing. This also happens to be the kind of interaction that allows freedom to experiment and be honest about it.

Use Research That Keeps Up With the Speed of Advancements

The other important change is to rely on research that is no more than 12 months old. We cannot afford for our conversations and evaluations to look too far back, or if they do, old research must be balanced with the most current and reputable studies available. A company’s own technical brief may be of merit, but better still are third-party research studies and articles by respected journalists.

Tools Inform. People Make Decisions.

Finally, we need to shed the idea that a technology tool flags a student for wrongdoing, when its sole purpose is to mark a portion of text for additional attention. Tools do not make decisions. The information they generate should inform a decision, as small pieces contributing to the whole. Students need to see that experimenting and stretching into new territory with AI is discoverable, and that their instructors and teachers will follow up, first to guide conversations and then to determine whether the student followed the rules of the assignment. When it comes to giving every student a level playing field, AI transparency tools are not only part of instruction; they are accurate enough for evaluative purposes, helping faculty establish guardrails so they can confidently recognize when boundaries have been crossed.

Max Spero is an AI and Machine Learning expert, having studied AI at Stanford and worked as an ML Engineer at Google and Nuro before founding Pangram with co-founder, Bradley Emi.

Read More from OLC Insights

