Digital Safety · 8 min read · April 2026

Deepfakes and AI-Generated Content: What Every Young Adult Needs to Know

Artificial intelligence can now create convincing images, videos, and audio of real people saying and doing things they never did. Understanding how deepfakes work and how to protect yourself is essential knowledge for young adults navigating digital life.

Why This Matters Now

Until recently, creating a convincing fake video or image of a real person required professional skills, significant resources, and considerable time. That is no longer true. Freely available artificial intelligence tools can now produce realistic-looking images, videos, and audio of real people within minutes, using publicly available photographs or recordings as source material. The technology is advancing faster than either public awareness or the legal frameworks designed to govern it.

Young adults are particularly affected by this development for several reasons. They typically have a larger digital footprint than older generations: more photographs publicly available online, more video content, more audio recordings. They navigate relationships and social situations where digital content about them is created and shared constantly. And the specific harms that deepfake technology enables, including non-consensual intimate imagery and reputational damage, disproportionately affect people in the 16 to 25 age range.

This guide is not designed to create paranoia. Most digital interactions are harmless. But understanding what is possible, what your rights are, and how to protect yourself gives you agency in a rapidly changing digital landscape.

What Deepfakes Actually Are

The term deepfake comes from deep learning, the type of artificial intelligence used to create them. It covers a range of AI-generated content in which real people appear to do or say things they never actually did or said. This includes face-swapped videos in which one person's face is placed onto another's body, voice-cloned audio in which someone's voice is used to deliver words they never spoke, and AI-generated still images that realistically depict a person in a scenario that never occurred.

The quality of this content varies significantly. Some deepfakes are detectable to a careful observer. Others are not, particularly as the technology continues to improve. The concerning category is the one in the middle: content that is good enough to deceive people who are not looking critically, including people who know the subject.

The Risks to Young Adults

Non-consensual intimate imagery (NCII): This is the most serious and most common harmful use of deepfake technology against young adults. AI tools can create sexually explicit images or videos of a real person without their consent, using only publicly available photographs of their face. This is sometimes called deepfake pornography. In England and Wales, sharing this content became a criminal offence under the Online Safety Act 2023, and creating it without consent has since also been criminalised. The emotional and psychological impact on victims is severe and often long-lasting. If you become aware that such content exists featuring you, you can report it to the Stop NCII tool (stopncii.org), which works with major platforms to detect and remove it without you needing to see the content yourself.

Reputational damage: Deepfake videos that appear to show someone saying or doing something offensive or embarrassing can cause serious reputational harm even after they are identified as fake. The retraction rarely travels as far as the original content. Being aware that such content can be created is not paranoia; it is context for managing your digital presence thoughtfully.


Romance scams and manipulation: Deepfake video call technology is increasingly used in online relationship fraud, allowing scammers to appear on video as a fictional or stolen identity. If you are building a relationship with someone you have never met in person, video calls provide some verification, but they are no longer fully reliable. Meeting in person, or at minimum video calling on platforms with built-in security features, provides stronger assurance.

How to Protect Your Digital Presence

You cannot fully prevent someone from creating a deepfake using your publicly available images or recordings. What you can do is reduce the available material and make your genuine content harder to use as source material without your knowledge.

Setting social media accounts to private limits who can access your photographs and videos. It is worth being thoughtful about which images you share publicly, including whether high-resolution, full-face photographs are necessary for the purpose at hand, without becoming anxious about it. Periodically searching for your name in image search engines can reveal whether images of you are being used in ways you were not aware of.

Watermarking your own photographs, even subtly, makes them harder to use without detection. Using platforms that have explicit policies against NCII and active enforcement of those policies provides some additional protection, though no platform is foolproof.

What to Do If You Are a Victim

If you discover that deepfake content featuring you has been created or shared without your consent, the first response is not to share or engage with the content yourself, as this can expand its reach. Screenshot and document the evidence (the URL, the platform, the account that posted it) before anything is removed. Report it to the platform using their content removal process, citing non-consensual intimate imagery if relevant, which triggers different and faster removal processes on most major platforms.

Report to the police if the content is sexual in nature; this is now a criminal offence in the UK and the police have teams trained to handle it. Contact the Revenge Porn Helpline (revengepornhelpline.org.uk or 0345 6000 459) for specialist support and advice. Speak to someone you trust, because the psychological impact of this experience is significant and you should not try to deal with it alone.

Critical Media Literacy in the Age of AI

The broader implication of deepfake technology is that the relationship between digital content and reality is no longer reliable in the way it once appeared to be. Photographs and videos that appear to show a real event may not. Audio that sounds like a real person may not be. This does not mean that all digital content is suspect; it means that content making significant claims (showing a public figure doing something newsworthy, audio of someone saying something damaging) is worth treating with the same critical approach you would bring to any other claim: where did this come from, who created it, and what is the motivation for sharing it?

Developing this critical instinct, while avoiding the paralysis of believing nothing, is one of the most valuable digital skills of the current moment. It is a skill you can practise, share with others, and apply whenever something appears in your feeds that produces a strong emotional reaction, since that reaction is often exactly what misleading content is designed to produce.
