Digital Safety · 9 min read · April 2026

Deepfakes and Synthetic Media: What Young People Need to Understand

Deepfake technology is now accessible to anyone with a smartphone, and teenagers are increasingly both creators and targets. This guide explains what deepfakes are, the serious harms they can cause, and how young people can protect themselves.

What Are Deepfakes?

Deepfakes are synthetic media created using artificial intelligence: images, audio, or video that realistically depict a person saying or doing something they never actually said or did. The term combines "deep learning", the AI technique used to create them, with "fake". The technology has developed rapidly over recent years, moving from high-budget productions requiring specialist knowledge to tools that can be operated by anyone with a smartphone and a few publicly available images of their subject.

Not all synthetic or AI-generated media is harmful. AI-generated images and videos have legitimate creative, educational, and commercial uses. The concern for young people relates to the specific use of this technology to create non-consensual intimate images, harassment content, defamatory material, and fraud targeting individuals. These harms are growing in scale and seriousness as the technology becomes more accessible.

How Deepfakes Are Used to Harm Young People

Non-Consensual Intimate Deepfakes

The most serious form of deepfake harm affecting teenagers involves the creation of sexualised or intimate images that appear to depict a real person who never consented to, or participated in, the creation of such content. Using publicly available images from social media, someone's face can now be superimposed onto intimate imagery using freely available software, creating realistic-appearing images that can be used for harassment, blackmail, or simple cruelty.

The Internet Watch Foundation and equivalent organisations have documented significant and growing volumes of AI-generated child sexual abuse material, and law enforcement agencies worldwide are grappling with the challenge of addressing this category of content. Beyond child sexual abuse material, non-consensual intimate deepfakes are increasingly used to target teenagers and young adults in ways that cause profound psychological harm regardless of their legal status in a given jurisdiction.

Victims of non-consensual intimate deepfakes experience harms similar to those of genuine intimate image abuse: intense distress, anxiety, shame, withdrawal from social life, and in some cases lasting psychological trauma. The fact that the image is synthetic rather than genuine does not diminish the harm. For a young person, knowing that images exist that purport to show them in intimate contexts, and that those images may be circulating among their peers, is an acute and serious experience regardless of the images' technical origin.

Harassment and Defamation

Deepfakes can be used to put words in a person's mouth, to fabricate evidence of behaviour that never occurred, or to create scenarios designed to humiliate or harm. Teenagers have been targeted by deepfakes designed to damage their reputation at school, to create false impressions in the minds of teachers, employers, or family members, or simply to cause maximum distress. The viral potential of realistic-appearing video makes this a particularly powerful tool for harassment.

Fraud and Impersonation

Voice cloning and video deepfake technology are used in financial fraud, with reported cases including teenagers being impersonated to convince family members to transfer money, and fraudulent video calls using a young person's likeness to deceive others. As the technology improves, detecting these impersonations in real time becomes increasingly difficult.

The Legal Landscape

Legal protections against non-consensual intimate deepfakes are developing rapidly in many countries, though they remain inconsistent globally. In the UK, the Online Safety Act and subsequent amendments have created or strengthened criminal offences related to non-consensual intimate image sharing, including AI-generated content. In the United States, multiple states have enacted legislation specifically addressing deepfake intimate imagery, and federal legislation is under active development. Australia, Canada, and several European countries have enacted or are developing equivalent protections.


Even where specific legislation exists, enforcement is challenging because perpetrators are often anonymous, content spreads quickly across platforms, and the technical and legal complexity of these cases requires specialist expertise. However, the legal framework is developing in the right direction, and reporting incidents to law enforcement and to platforms is appropriate and can result in action.

Protecting Yourself from Deepfake Misuse

Complete protection against being the subject of a deepfake is not achievable for anyone with a public online presence. However, several measures reduce risk and limit the material available to potential creators of harmful content.

Being thoughtful about what images and videos you make publicly available is the most significant factor. Deepfake technology is most effective when it has many source images to work from. Extensive public photo galleries, particularly those showing a range of angles and expressions, provide better source material than limited, private image collections. This does not mean never posting images online, but it is a reason to consider carefully what is posted publicly versus shared only with close contacts.

Privacy settings on social media that restrict who can see and download your images limit the pool of people who can use them as source material. Settings that prevent non-contacts from screenshotting or downloading images, where these are available, provide an additional layer of protection.

Watermarking images before posting publicly can deter some forms of misuse, though it is not effective against determined or technically sophisticated perpetrators.
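For those comfortable with a little scripting, watermarks can also be applied in bulk before uploading. The sketch below is one minimal approach using the Python Pillow library; the filenames and handle are illustrative placeholders, and tiling semi-transparent text across the frame is simply harder to crop out than a single corner mark, not impossible to remove.

```python
# A minimal watermarking sketch using the Pillow library (pip install Pillow).
# The filenames and handle below are illustrative placeholders.
from PIL import Image, ImageDraw, ImageFont

def watermark(src_path: str, dst_path: str, text: str) -> None:
    """Tile semi-transparent text across an image before posting it publicly."""
    base = Image.open(src_path).convert("RGBA")
    overlay = Image.new("RGBA", base.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    font = ImageFont.load_default()  # swap in a TrueType font for larger text
    # Repeat the text across the whole image so it cannot simply be cropped out.
    for y in range(0, base.height, 120):
        for x in range(0, base.width, 240):
            draw.text((x, y), text, font=font, fill=(255, 255, 255, 96))
    Image.alpha_composite(base, overlay).convert("RGB").save(dst_path, "JPEG")

watermark("photo.jpg", "photo_watermarked.jpg", "@my_handle")
```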

What to Do If You Are Targeted

If you discover that deepfake content has been created using your likeness, the response is similar to that for other forms of non-consensual image abuse. Document everything by screenshotting the content, the account names, and any communications related to the content. Report the content to the platform using its specific reporting mechanisms, which on major platforms include options for non-consensual intimate imagery.
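A short script can supplement that manual documentation for anyone who wants a tidier record. The sketch below uses only the Python standard library and assumes screenshots have been saved into a folder named evidence (an illustrative choice); it logs each file's SHA-256 fingerprint alongside a timestamp, which makes it easier to show later that the saved files have not been altered.

```python
# A small evidence-log sketch using only the Python standard library. It
# records a SHA-256 hash and a timestamp for every file in a folder of saved
# screenshots. The folder and log-file names are illustrative assumptions.
import hashlib
from datetime import datetime, timezone
from pathlib import Path

EVIDENCE_DIR = Path("evidence")      # assumed folder of saved screenshots
LOG_FILE = Path("evidence_log.txt")  # plain-text, append-only record

with LOG_FILE.open("a", encoding="utf-8") as log:
    for item in sorted(EVIDENCE_DIR.iterdir()):
        if item.is_file():
            digest = hashlib.sha256(item.read_bytes()).hexdigest()
            stamp = datetime.now(timezone.utc).isoformat()
            log.write(f"{stamp}\t{item.name}\t{digest}\n")
```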

Tell a trusted adult. The shame and distress associated with discovering this type of content can make it feel impossible to disclose, but having adult support, and potentially professional help, makes a significant difference to how the situation is managed and how you recover from it.

Contact a specialist organisation. Groups including the Cyber Civil Rights Initiative and the Internet Watch Foundation provide support and can assist with content removal. In some cases, specialist legal advice may be appropriate, particularly if the content is being used for blackmail or if the perpetrator is identifiable.

Report to law enforcement. In countries where non-consensual intimate deepfakes are illegal, this is a crime and should be reported as such. Even where the specific legal framework is less developed, reporting creates a record and contributes to the broader law enforcement picture.

Media Literacy in the Age of Synthetic Content

Beyond personal protection, the proliferation of synthetic media has broader implications for how young people understand and engage with digital content. The assumption that seeing is believing, which underpinned the evidential status of photography and video, is no longer reliable. Teaching young people to approach visual and audio content with appropriate scepticism, to check sources before drawing conclusions from compelling media, and to use emerging detection tools as part of their information evaluation toolkit is an important media literacy objective.

Several tools and organisations are working on deepfake detection technology. While no detection method is currently fully reliable, these tools are improving, and the habit of verifying surprising or significant media before treating it as authentic is increasingly important. Just as lateral reading and fact-checking became standard parts of evaluating written information, critical evaluation of visual media is becoming an essential skill for navigating a world in which what you see cannot always be trusted as genuine.
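As one concrete illustration of the kind of signal forensic tools examine, the sketch below performs error level analysis, a long-standing image-forensics heuristic, using the Python Pillow library. It is emphatically not a reliable deepfake test, and the filename is an illustrative assumption: bright regions in the output merely suggest parts of a JPEG that have been recompressed differently and may deserve a closer look.

```python
# An error level analysis (ELA) sketch using the Pillow library. Re-saving a
# JPEG at a known quality and amplifying the difference can make regions that
# were edited or pasted in stand out, because they often recompress
# differently. This is a weak heuristic, not a reliable deepfake detector,
# and the filename is an illustrative assumption.
import io
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)  # controlled re-compression
    buffer.seek(0)
    resaved = Image.open(buffer)
    diff = ImageChops.difference(original, resaved)
    # The raw difference is usually too faint to see, so scale it up.
    max_diff = max(hi for _, hi in diff.getextrema()) or 1
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)

error_level_analysis("suspect.jpg").save("suspect_ela.png")
```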
