This essay reflects one teacher's evolving perspective on AI detection. Working Educators publishes diverse viewpoints — this is one voice in an ongoing conversation.
When my district announced we'd be using Turnitin's AI detection feature, I was furious.
I'd read the studies about false positives. I'd seen the reports about ESL students being disproportionately flagged. I thought about my own students — many of them English language learners, many from families where no one had gone to college — and I imagined them accused of cheating for writing that was simply different from what the algorithm expected.
I wrote an angry email to my department chair. I signed a petition organized by other teachers. I was certain that AI detection was surveillance dressed up as academic integrity, and I wanted no part of it.
Then the semester started.
The Reality Check
Within the first month, I caught three papers that were clearly AI-generated. Not flagged by software — I hadn't even run them through detection yet. I knew because I know my students. These papers had no voice. No rough edges. No evidence of the struggle I'd watched them work through in class.
One paper discussed symbolism in The Great Gatsby with perfect structure and zero personality. My student had told me the week before that she hadn't finished the book.
When I talked to her, she didn't deny it. "I ran out of time," she said. "I had three other things due. ChatGPT was faster."
I wasn't angry at her. I was confronting the same realization she was living with: the system we're in creates pressures that push students toward shortcuts. But I also couldn't pretend the shortcut wasn't happening.
The Middle Path
I started using detection — not as a gotcha, but as one data point among many. Here's how my thinking evolved:
I stopped treating detection scores as verdicts. A high AI probability flag means I need to have a conversation, not that I've proven guilt. I look at the student's previous work, their participation in class, their ability to discuss the paper's ideas. The number is a starting point, not a conclusion.
I started being more transparent about the tools. I tell my students exactly what I'm using, how it works, and what its limitations are. If they're going to be subject to algorithmic judgment, they deserve to understand the algorithm.
I redesigned my assignments. Process documentation, in-class writing, oral conferences — I built in multiple checkpoints that would be hard to fake with AI alone. The detection tool became less necessary because the assignment itself required authentic engagement.
What Changed My Mind
My resistance to AI detection was rooted in real concerns. Those concerns haven't disappeared. False positives are real. Bias against certain writing styles is real. The surveillance dynamic is uncomfortable.
But here's what I underestimated: the cost of doing nothing.
When AI-generated work goes unaddressed, the students doing honest work feel cheated. The students using AI don't develop the skills they need. The implicit message is that we don't actually care about authentic learning — we just care about completed assignments.
A student told me she was relieved when I started checking. "It felt unfair before," she said. "Like, why am I spending hours on this when other people aren't?"
What I Still Worry About
I haven't become a detection evangelist. I still worry about:
- Students who are flagged incorrectly and don't have the language or confidence to defend themselves
- The cumulative psychological effect of being constantly monitored
- Teachers who use detection as a substitute for knowing their students
- The way detection tools can create adversarial relationships instead of learning partnerships
These are real risks. But I've concluded that the answer isn't to abandon detection — it's to use it thoughtfully, as part of a larger approach that includes redesigned assignments, transparent policies, and genuine relationships with students.
The Nuanced Position
I came around to AI detection, but not in the way the vendors would like. I don't think it's a solution. I think it's a tool — imperfect, sometimes problematic, but useful when combined with human judgment and pedagogical intention.
The teachers I respect most are holding both truths: AI detection has real problems AND AI-assisted cheating is a real concern. The answer isn't to pick a side. It's to navigate the complexity.
That's harder than being outraged. But it's the job.
This essay represents one teacher's perspective. Have a different experience? Share your story.