When Jennifer Martinez told her students that their final essay would be written in class, by hand, the groans were loud enough to hear from the hallway. Two years ago, she wouldn't have considered it. Today, it's one of several strategies she's using to navigate teaching in the age of AI — and she's not alone.
Across the country, schools are grappling with AI-generated student work. Some have invested heavily in detection software. Others have redesigned their entire approach to assessment. Most are somewhere in between, trying to figure out what works while the ground shifts beneath them.
We talked to over thirty teachers, a dozen administrators, and scores of students to understand what's actually happening in schools — not what vendors claim or what administrators hope, but what the daily reality looks like.
The Current Landscape
How Many Schools Use Detection Tools?
The honest answer: we don't know precisely, and neither does anyone else. Turnitin claims that over 16,000 institutions use its AI detection feature, but that figure includes every school with a Turnitin contract — many of which may not actively use the AI component.
What we can say from our reporting: detection tool usage varies dramatically by school type and region. Well-funded suburban districts often have Turnitin contracts. Urban districts are more mixed — some have invested, others can't afford to. Many individual teachers use free tools like GPTZero regardless of official school policy.
Among the teachers we interviewed:
- 78% reported having access to at least one AI detection tool
- Only 34% said they used detection tools regularly
- 62% expressed low or moderate confidence in detection tool accuracy
- 89% said they wished they had more training on AI and academic integrity
The Gap Between Policy and Practice
Many districts have AI policies on paper. Fewer have policies that teachers actually know about and understand. Even fewer have provided the training and resources teachers need to implement those policies.
"My district sent out a two-page memo about AI in September," one middle school teacher in Texas told us. "It basically said 'use your judgment.' That was the training. Meanwhile, I've got parents emailing me accusing their kids of cheating based on Turnitin scores they looked up themselves."
This gap — between policy documents and classroom reality — emerged as one of the central frustrations in our reporting.
What's Working
Turnitin's Institutional Rollout
For schools already using Turnitin for plagiarism detection, the AI detection feature arrives with almost no friction. Teachers don't need to learn new software or create new workflows. Students submit to the same place they always have.
That said, "working" here means "operationally functional" — not "solving the problem." Teachers who regularly use Turnitin's AI detection told us it's useful as a screening tool, flagging papers that might warrant a conversation. They emphasized: a conversation, not an accusation.
"I treat a high AI score the same way I'd treat a paper that seemed way better than a student's usual work," said a high school English teacher in California. "It's a signal to pay attention, not a verdict."
Schools That Combined Detection with Redesigned Assignments
The schools reporting the most success aren't relying on detection alone. They've paired detection tools with significant changes to how they assess student learning.
Common patterns in these schools:
- Process documentation: Requiring students to submit notes, outlines, and drafts alongside final papers
- In-class components: Some portion of major assignments completed during class time
- Oral components: Students discuss their work in brief one-on-one conversations with teachers
- Personalized prompts: Assignments that require specific personal reflection or reference to class discussions
"The detection tool catches some things," one department chair told us. "But the real change was asking students to show their work — literally. When they know they'll need to explain their thinking, most of them just... do their own thinking."
The Oral Defense Model
Perhaps the most AI-resistant approach we encountered: brief oral defenses of written work.
The format varies. Some teachers do 5-minute conversations after major papers. Others conduct quick verbal check-ins during class while students work. A few schools have implemented formal defense requirements for senior projects.
"It's old school," said a community college instructor who's been using oral exams since 2023. "My philosophy professor made us do it in the 90s. Turns out it's also AI-proof. A student who used ChatGPT for their essay can't explain why they made specific argumentative choices."
The downside: oral assessments take time. A lot of it. This approach works best with smaller class sizes or when reserved for high-stakes assignments.
What's Not Working
Detection-Only Approaches
Schools that purchased detection software and changed nothing else are, by most accounts, losing the arms race.
"We got Turnitin AI detection at the beginning of the year. By October, kids were sharing workarounds in group chats," reported a New Jersey teacher. "Paraphrase your ChatGPT output. Use a different AI to rewrite it. Add intentional grammar mistakes. The tools just aren't keeping up."
Detection tools work best as part of a broader strategy. Alone, they create false confidence and miss sophisticated AI use.
The False Accusation Problem
Every teacher we spoke with who uses detection tools had at least one story of a false positive — a student flagged for AI use who clearly wrote their own work.
"My best writer got flagged at 86%. She was devastated. I knew she wrote it — I'd seen her drafts, she'd asked me questions during the process. But I still had to have a conversation with her that basically said 'I know you didn't cheat, but the computer thinks you did.'"
— High school English teacher, Ohio
False positives disproportionately affect certain students: ESL writers, students using accessibility tools, and students who write in formal, structured styles. See our reporting on equity issues in AI detection.
The Arms Race: Students vs. Detectors
Students who want to use AI without getting caught have many options. They can use AI to generate ideas and write in their own voice. They can run AI output through paraphrasing tools. They can use AI models that detection tools haven't been trained on.
"The kids who are going to cheat have already figured out how," one teacher said bluntly. "The detection tools mostly catch the lazy cheaters and the students who didn't cheat at all."
This arms race dynamic is why many educators are shifting their focus from detection to assignment design and to building cultures of integrity.
What Teachers Wish Administrators Knew
We asked teachers: if you could tell your administrators one thing about AI detection, what would it be? The answers clustered around a few themes:
"Stop treating detection scores like breathalyzer results."
A percentage isn't proof. It's a starting point for conversation.
"We need training, not tools."
Software without professional development is just a budget line.
"The real solution is assignment redesign."
But redesigning assignments takes time teachers don't have.
"False positives damage student relationships."
An accusation is traumatic even when it's wrong. Sometimes especially when it's wrong.
"This isn't just a tech problem."
It's a conversation about what education is for in the AI age.
A Better Framework
Based on our reporting, schools navigating AI most effectively share several characteristics:
They treat detection as one tool among many. Not a solution, but a signal. High-functioning schools use detection to inform conversations, not conclude them.
They invest in teacher training. Not one-off PD sessions, but ongoing support and time for teachers to redesign assessments and share what's working.
They're transparent with students. The best outcomes came from schools that talked openly with students about AI — its uses, its limitations, and why authentic learning still matters.
They protect students from false accusations. Clear processes ensure that detection tool results never become accusations without investigation.
They keep asking harder questions. Not just "how do we catch cheaters?" but "what do we want students to learn, and how do we know if they learned it?"
The AI detection challenge isn't going away. Tools will get better. So will the AI. The schools that thrive will be the ones that stop looking for technological solutions to what is fundamentally a human question: what does learning mean when machines can produce credible work?
That's a question worth working through together.
