Deepfakes and Teenagers: What Every Family Needs to Know About AI-Generated Fake Images
Deepfake technology is now accessible to anyone with a smartphone and a Wi-Fi connection. Teenagers are being targeted with fake intimate images created without their consent. This guide explains what deepfakes are, the harm they cause, and how families can respond.
What Are Deepfakes?
Deepfakes are synthetic media, most commonly images and videos, created using artificial intelligence to realistically depict people saying or doing things they never actually said or did. The term combines "deep learning" (the AI technique used) with "fake". While the technology has legitimate applications in film, advertising, and education, it has also become a serious tool for harm, particularly when directed at young people.
What makes deepfakes distinctively dangerous is their realism. Earlier forms of image manipulation were often detectable with close scrutiny. Modern AI tools can produce synthetic images indistinguishable from real photographs in seconds, using nothing more than a few existing photos of a person's face. This accessibility has transformed what was once a highly technical capability into something available to any teenager with a smartphone and internet access.
How Deepfakes Are Being Used to Harm Teenagers
The most serious and prevalent misuse of deepfake technology among teenagers involves the creation of non-consensual intimate images. When the subjects are under 18, this material is referred to as AI-generated child sexual abuse material (CSAM); when they are adults, it is known as AI-generated non-consensual intimate imagery (NCII).
In documented cases across multiple countries, including the United States, Spain, the United Kingdom, Australia, and South Korea, young people, predominantly but not exclusively girls, have had their faces superimposed onto explicit imagery without their knowledge or consent. The images are then shared within peer groups, posted to public platforms, or used as leverage for extortion.
The harm caused to victims is profound. Research consistently shows that being targeted with deepfake intimate images causes severe anxiety, depression, social withdrawal, and in some cases suicidal ideation. The sense of violation is heightened by the knowledge that such images can spread rapidly and are extremely difficult to fully remove from the internet once distributed.
The Scale of the Problem
The proliferation of easy-to-use deepfake applications has made this problem significantly worse in recent years. In 2023 and 2024, multiple cases emerged in schools across the United States, UK, Spain, and elsewhere where students had created deepfake intimate images of classmates using widely available apps. In one notable case in Almendralejo, Spain, more than 20 teenage girls discovered deepfake intimate images of themselves had been created and shared by male classmates using a single app.
Research by organisations including the Stanford Internet Observatory and the UK's Internet Watch Foundation has found deepfake child sexual abuse material on multiple mainstream and dark-web platforms. The volume of this material is growing at a rate that outpaces removal efforts.
The Legal Situation Worldwide
The legal response to deepfakes, particularly non-consensual intimate deepfakes, is evolving rapidly but remains inconsistent across jurisdictions.
Images of Under-18s
In most countries, AI-generated sexual images of people under 18, regardless of whether any real intimate images were used in their creation, constitute child sexual abuse material under existing law. Creating, possessing, or distributing such images is a criminal offence in the United Kingdom, United States, Australia, Canada, and most European countries. This includes cases where no real intimate images of the child exist and the image was created entirely from innocent photographs of their face.
Non-Consensual Intimate Images of Adults
Legislation specifically addressing non-consensual intimate images, including AI-generated versions, is being passed in an increasing number of jurisdictions. The UK's Online Safety Act 2023 criminalised the sharing of intimate images without consent, including AI-generated ones. Several US states have passed laws specifically targeting deepfake intimate images. The EU's Digital Services Act includes provisions relevant to platform liability for such content.
However, gaps remain, and in many countries the legal framework has not yet fully addressed the specific challenges of AI-generated imagery. Victims and families should seek advice from police or legal professionals about the options available in their specific jurisdiction.
Warning Signs and Detection
Unlike many online harms, deepfake images of someone can be created entirely without their knowledge. A young person may become a victim without having done anything risky themselves. However, some warning signs that this may have occurred include:
- A young person expressing distress about something that happened online that they are reluctant to describe
- A classmate or peer group behaving unusually towards them, including mockery, avoidance, or hostile comments that seem connected to online content
- The young person receiving messages that reference images or appear to be attempts at blackmail
- Reports from friends that images of the young person are circulating in group chats
If a young person discovers that deepfake intimate images of them exist and are circulating, they should be supported to document evidence, report to the platform, and contact authorities, with full parental support and without any implication that they are responsible.
What to Do If Deepfake Images Are Created of Your Child
Immediate Steps
- Do not view or download the images beyond what is necessary to document their existence. In most jurisdictions, images of under-18s in sexual situations, including AI-generated ones, are illegal to possess.
- Screenshot or document evidence of where the images exist, what platforms they appear on, and any information about who created or shared them.
- Report to the platform immediately using the most urgent category available. Most major platforms have specific reporting routes for non-consensual intimate images and will prioritise removal.
- Contact the police. This is a serious criminal matter. Most police services now have specialist units for online crimes against children.
Support Your Child
Being the victim of deepfake intimate imagery is a profound violation. Your child needs to know unequivocally that this is not their fault in any way, that you believe them and are fully on their side, and that you will handle the practical aspects of reporting and removal so that they can focus on their wellbeing.
Professional counselling or therapy for your child is strongly recommended. The psychological impact of this kind of violation can be significant, and early therapeutic support makes a meaningful difference.
How Schools Should Respond
If deepfake images involve students at the same school, the school has a responsibility to respond. An effective school response includes:
- Taking the disclosure seriously and acting promptly
- Involving parents of all students concerned
- Contacting the police rather than attempting to manage a criminal matter internally
- Providing support to the victim, including counselling and protection from further peer harassment
- Addressing the wider school community with education about consent, digital harm, and the serious legal and moral consequences of creating or sharing such images
Prevention: What Families and Schools Can Do
Complete prevention of deepfake targeting is not possible, because creating a deepfake requires only publicly available images of someone's face. However, reducing the number of images available on public profiles is sensible, and education is essential.
Young people should understand that:
- Creating deepfake intimate images of anyone, regardless of whether real intimate images are involved or the image is entirely AI-generated, is potentially criminal and certainly a serious moral violation
- Sharing or requesting such images carries criminal liability
- The harm caused to victims is real and severe
- Being targeted is never the victim's fault
These conversations need to happen in schools and at home. The technology is here and its misuse is already widespread. Education is the most powerful tool we have.
The Broader Digital Literacy Context
Deepfakes also affect young people in a different but related way: as consumers of media, they need to develop critical skills for evaluating whether images and videos are authentic. In an environment where highly realistic synthetic content can be created instantly, the instinct to accept visual media as truthful is increasingly dangerous.
Building media literacy, the ability to critically evaluate the source, authenticity, and intent of content encountered online, is one of the most important educational investments families and schools can make in the current technological landscape. This is not a technical skill alone but a critical thinking practice that applies across all areas of digital life.