Teaching Civic Engagement in the Age of AI-Generated Misinformation

How educators are preparing students to navigate a world where seeing is no longer believing.

In 2020, Working Educators supported the Philly Youth Vote campaign with a simple mission: get every eligible high school senior to the polls. Today, that mission has grown more complex. It's not enough to register students to vote — we need to help them navigate a media landscape increasingly polluted by AI-generated misinformation.

Legacy Context

This page builds on the original voter registration advocacy that Working Educators supported in Philadelphia. The same commitment to civic engagement now extends to media literacy — helping students become informed citizens in an era of synthetic media.

The New Threat to Informed Citizenship

When we encouraged students to vote, we assumed they could access reliable information about candidates and issues. That assumption is now fragile. AI-generated deepfakes, synthetic audio, and fabricated "news" articles can spread faster than corrections can reach the people who saw them.

The Challenge

  • Deepfake videos of candidates "saying" things they never said
  • AI-generated "news articles" from fake outlets
  • Synthetic audio clips spread via social media
  • AI-manipulated images presented as real events

The Response

  • Teaching source verification skills
  • Training students to spot AI-generated content
  • Building healthy skepticism without cynicism
  • Modeling fact-checking habits

What Teachers Are Doing

Across the country, social studies teachers, media specialists, and English teachers are adapting their curricula to address AI-generated misinformation. Here's what's working:

1. The SIFT Method

Many teachers have adopted Mike Caulfield's SIFT method (Stop, Investigate the source, Find better coverage, Trace claims), updated for the AI age. Students learn to pause before sharing, verify sources, and trace claims back to original reporting.

2. Deepfake Detection Exercises

Some teachers run exercises where students analyze videos and try to identify which are real and which are AI-generated. These exercises build pattern recognition for telltale signs: unnatural blinking, audio-visual sync issues, and contextual inconsistencies.

3. "News Autopsy" Assignments

Students take a viral claim and trace its journey: Where did it originate? How did it spread? What corrections were issued? What can we learn about how misinformation moves through networks?

Classroom Spotlight

At a Philadelphia high school, a social studies teacher runs "Fact-Check Fridays" where students bring in claims they've seen online during the week. Together, the class investigates — looking for original sources, checking multiple outlets, and using reverse image search to verify photos.

"The goal isn't to make them paranoid," the teacher explains. "It's to make verification a habit, like looking both ways before crossing the street."

Skills Students Need

Based on our conversations with teachers and media literacy experts, here are the core competencies students need to navigate AI-generated content:

Source Verification

Can I verify this comes from a real, credible source? Is this outlet known for accurate reporting?

Lateral Reading

What do other sources say about this claim? Do credible outlets corroborate it?

Visual Scrutiny

Does this image or video show signs of AI manipulation? Can I find the original source?

Emotional Awareness

Is this content designed to provoke strong emotions? Misinformation often exploits outrage and fear.

The Stakes for Democracy

This isn't just an academic exercise. When students can't distinguish real from fake, democracy suffers. Informed voting requires access to accurate information. If AI-generated misinformation erodes trust in all media, citizens disengage — and democratic participation declines with them.

Working Educators believes media literacy is civic education. We're committed to supporting teachers who take on this vital work.

Resources for Teachers