Digital Safety · 9 min read · April 2026

AI and Deepfakes: What Every Teenager Needs to Know About Synthetic Media

Artificial intelligence can now create convincing fake videos, voices, and images of real people. This guide explains what deepfakes are, how to spot them, why they matter for teenagers specifically, and what to do if you become a target.

What Are Deepfakes?

Deepfakes are synthetic media, most commonly videos, images, or audio recordings, created using artificial intelligence to make it appear that someone said or did something they did not. The term combines deep learning (the AI technique used) and fake. The technology has advanced rapidly: what required expensive specialist expertise a few years ago can now be produced using free or low-cost apps on a smartphone.

Deepfakes exist on a spectrum. At one end are clearly labelled creative works, parody content, or entertainment applications. At the other are malicious creations designed to deceive: non-consensual intimate images, fake statements attributed to public figures, manipulated evidence, and targeted harassment content. It is the malicious end of this spectrum that concerns families and young people most directly.

How Deepfakes Are Made

Understanding how deepfakes are created helps in understanding the risks. Modern deepfake tools use machine learning models trained on images or videos of a person to generate new synthetic content featuring their likeness. The more images or video of a person available, the more convincing the result.

For teenagers who post regularly on social media, this is a significant concern: every photograph or video posted publicly is potential training data for someone wishing to create synthetic content of them. This does not mean teenagers should never post online, but it does mean that awareness of this risk is part of informed decision-making about what to share publicly.

Voice cloning technology has advanced in parallel with visual deepfakes. Using only a short sample of a person's voice (sometimes only seconds), AI tools can now synthesise new speech in that voice saying things the person never said. This has been used in voice scams targeting families, where a cloned voice of a family member claims to be in trouble and requests money.

Specific Risks for Teenagers

Non-consensual intimate deepfakes: The creation of fake intimate or sexual images of real people using AI is among the most serious risks for teenagers. These images can be created from ordinary photographs available on social media. They have been used for harassment, blackmail, and humiliation. Several countries have enacted or are enacting specific legislation making the creation and distribution of non-consensual intimate deepfakes a criminal offence.

Deepfake-based sextortion: Scammers create fake intimate images of a target (sometimes from nothing more than a profile photo) and then contact the target threatening to share the images unless payment is made. This is extortion, the images are fake, and the appropriate response is never to pay.

Reputation attacks: Deepfakes can be used to create false evidence of a teenager saying or doing something damaging. Fake videos or audio attributed to a teenager within school or peer group contexts can cause serious social harm.

Manipulation and fraud: Voice cloning technology has been used to create fake calls from children to their parents claiming to be in trouble and urgently needing money. Families should establish a simple code word or verification protocol for exactly this situation.

How to Spot a Deepfake

As the technology improves, deepfakes become harder to detect, but several telltale signs still appear in much synthetic content:

Visual deepfake indicators:

  • Unnatural blinking patterns: either too infrequent or irregular
  • Inconsistent lighting or shadows across the face versus the background
  • Blurring or distortion around the edges of the face, particularly the hairline and ears
  • Inconsistent skin texture: the face appearing smoother or different in quality from the neck and body
  • Unnatural eye movement or fixed gaze
  • Mouth movements that do not perfectly synchronise with the audio
  • Inconsistent or flickering background elements

Audio deepfake indicators:

  • Slightly robotic or unnatural cadence and intonation
  • Absence of the natural fillers, hesitations, and breathing patterns of real speech
  • Inconsistency with how the person actually speaks

Importantly, these visual tells are becoming less reliable as technology improves. High-quality deepfakes produced with sufficient resources can now be very difficult to detect by eye. Approaching unexpected or surprising content with scepticism, regardless of how convincing it appears, is more reliable than attempting visual detection alone.

Deepfake Detection Tools

Several organisations have developed AI-based deepfake detection tools:

  • Microsoft's Video Authenticator analyses media for signs of AI manipulation
  • Deepware Scanner is a free tool specifically for video deepfake detection
  • Intel's FakeCatcher claims high accuracy in detecting deepfakes in real time

These tools are useful but not infallible. They are most effective for lower-quality deepfakes and less reliable against the most sophisticated current models. Treat detection tool results as one data point rather than definitive evidence.

Critical Thinking About Media

The most robust protection against deepfakes is developing the habit of thinking critically about any surprising, outrageous, or emotionally charged media content:

  • Is this from a credible, verifiable source?
  • Does this seem consistent with what I know about this person?
  • What would be the motive for creating this?
  • Can I find corroboration from other independent sources?
  • Is my emotional reaction being used to bypass careful thinking?

These questions apply not only to potential deepfakes but to all media, and developing them as habits is one of the most valuable digital literacy skills a young person can have.

What to Do If You Are Targeted

If deepfake content has been created and shared of you without consent:

  • Document the content (screenshots, links) before it is removed
  • Report to the platform immediately using the most specific available category
  • In the case of intimate deepfakes, contact organisations like StopNCII.org and the Revenge Porn Helpline (UK) or equivalent national services
  • Tell a trusted adult and consider reporting to police, as the creation and distribution of intimate deepfakes are increasingly criminalised
  • Do not pay any blackmail demand

Conclusion

Synthetic media and deepfakes represent one of the most significant emerging challenges in digital safety for young people. The technology is advancing faster than regulation and detection tools can keep pace. The best protection combines media literacy (scepticism, verification habits, understanding of how deepfakes are made) with practical knowledge of how to respond if targeted. Every teenager who understands this landscape is better equipped for the digital world they actually inhabit.
