AI, Deepfakes, and Image Manipulation: How to Protect Yourself in the Age of Synthetic Media
Synthetic media is becoming increasingly difficult to detect. Here is what young adults need to know about deepfakes, AI-generated images, and how to protect themselves online.
What Is Synthetic Media and Why Does It Matter?
Synthetic media refers to any audio, video, image, or text that has been generated or significantly altered using artificial intelligence. The term encompasses everything from AI-generated portraits to fully fabricated videos of real people saying things they never said. Deepfakes, a portmanteau of "deep learning" and "fake", sit at the more alarming end of this spectrum.
For young adults navigating social media, dating apps, and digital workplaces, synthetic media is no longer a distant theoretical threat. It is a present reality. AI tools capable of producing convincing fake images or video are widely accessible, often free, and require very little technical skill to use. Understanding the landscape is the first step toward protecting yourself.
How Deepfakes Are Created
Many modern deepfakes rely on a machine learning architecture called a generative adversarial network, or GAN. Two neural networks are trained in competition: a generator produces synthetic content, while a discriminator tries to tell real content from fake. Over many thousands of iterations, the generator becomes extraordinarily good at producing output the discriminator cannot distinguish from the real thing.
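The adversarial loop can be sketched in miniature. The toy below is an illustration of the training dynamic, not a real deepfake system: a two-parameter "generator" learns to mimic one-dimensional data (numbers drawn around 4.0) by competing against a logistic-regression "discriminator". All learning rates and sizes are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# "Real" data: scalars drawn from N(4, 1) - a toy stand-in for real images.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

a, b = 1.0, 0.0   # generator g(z) = a*z + b, initially far from the real data
w, c = 0.0, 0.0   # discriminator d(x) = sigmoid(w*x + c)

lr, steps, batch = 0.05, 2000, 64
for _ in range(steps):
    # Discriminator step: push d(real) towards 1 and d(fake) towards 0.
    xr = real_batch(batch)
    z = rng.normal(0.0, 1.0, batch)
    xf = a * z + b
    sr, sf = sigmoid(w * xr + c), sigmoid(w * xf + c)
    w -= lr * np.mean(-(1 - sr) * xr + sf * xf)
    c -= lr * np.mean(-(1 - sr) + sf)

    # Generator step: push d(fake) towards 1, i.e. learn to fool the critic.
    z = rng.normal(0.0, 1.0, batch)
    xf = a * z + b
    sf = sigmoid(w * xf + c)
    a -= lr * np.mean(-(1 - sf) * w * z)
    b -= lr * np.mean(-(1 - sf) * w)

# After training, the generator's output mean has drifted towards 4.
fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10_000) + b))
```

The same tug-of-war, scaled up to deep networks and millions of images, is what makes GAN output so convincing: the generator is only ever rewarded for producing content the discriminator cannot flag.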
More recently, diffusion models have taken over as the dominant method for generating still images. Tools built on these models can produce photorealistic portraits, manipulate existing photographs, or swap faces with a few clicks. Some applications can generate a convincing likeness of a real person using only a handful of source images scraped from social media.
Video deepfakes are more computationally demanding but increasingly accessible. Several mobile applications now offer face-swapping video features. Audio deepfakes, sometimes called voice cloning, can replicate a person's speech patterns after analysing only a short sample of their voice. The convergence of these technologies means that fabricating a convincing multimedia impersonation of someone is no longer the exclusive domain of well-funded actors.
The Real-World Harms of Image Manipulation
The harms caused by deepfakes and AI image manipulation are wide-ranging and affect people across different countries and contexts.
Non-consensual intimate imagery, sometimes referred to as NCII, represents one of the most serious misuses. AI tools have been used to generate explicit images of real, identifiable people without their consent, and to distribute these images online. Research from organisations tracking this abuse consistently shows that women and girls are disproportionately targeted, though people of all genders are affected. Several countries, including the United Kingdom, Australia, and South Korea, have introduced or strengthened legislation specifically targeting this form of abuse.
Political and reputational manipulation is another major concern. Fabricated videos of politicians, journalists, and public figures have circulated widely in countries including India, Nigeria, and the United States, influencing public opinion and spreading misinformation ahead of elections. Even when a deepfake is eventually debunked, the initial damage to a person's reputation or the spread of a false narrative can be difficult to reverse.
Financial fraud involving synthetic media is also rising. Voice clones have been used to impersonate family members or authority figures in phone scams, convincing victims to transfer money urgently. In one widely reported case in Hong Kong in 2024, a finance worker was tricked into transferring millions of dollars after attending a video call with what appeared to be company executives, all of whom were deepfakes.
How to Spot AI-Generated or Manipulated Media
Detecting synthetic media is becoming harder as the technology improves, but there are still observable signs that can raise suspicion.
For images, look for inconsistencies in fine details. AI-generated portraits often struggle with hands, producing fingers that merge, multiply, or distort in unnatural ways. Backgrounds may contain repeating patterns or blurred areas that do not match the apparent depth of field. Text within an image is frequently garbled or nonsensical. Jewellery, glasses, and hair at the edges of faces sometimes appear smeared or asymmetrical. Lighting and shadow directions may be inconsistent across different parts of the image.
For video deepfakes, watch for unnatural blinking or the absence of blinking, facial expressions that do not quite match the emotional tone of the speech, and slight misalignment between lip movements and audio. Skin texture may look artificially smooth or waxy. The edges of the face, particularly around the hairline and ears, are common failure points for face-swap technology.
For audio, listen for a flat or slightly metallic vocal quality, unusual pacing, or a lack of natural breathing sounds and verbal fillers. Voice clones often struggle to replicate the full emotional range of a real person's speech.
Several tools exist to assist with detection. Microsoft's Video Authenticator, Google's SynthID (applied to AI-generated content from Google products), and various academic research tools can flag probable synthetic content. However, no automated tool is perfectly reliable. Treating detection as one layer of a broader critical media literacy practice is more useful than relying on any single solution.
Protecting Your Own Images and Likeness
Reducing the number of images of yourself that are publicly accessible is the most straightforward preventive measure. Keeping social media accounts private limits the pool of source material available to anyone seeking to create a synthetic likeness of you, and periodically reviewing what is publicly visible on your profiles is good practice.
Be thoughtful about where you share high-resolution images of your face, particularly ones that show your face from multiple angles or in different lighting conditions. Photographs shared in large semi-public spaces, such as a university group chat or a community forum, can end up in places you did not anticipate.
Watermarking images you share publicly can provide a degree of deterrence, though it does not prevent misuse. Some photographers and content creators use invisible digital watermarks or metadata tools that allow them to trace where an image originated if it is misused. For most people, visible watermarks on personal photographs are impractical, but they are worth considering if you have a public profile.
Consider using reverse image search tools periodically to check whether images of you are appearing in unexpected contexts. Google Images, TinEye, and Yandex Images all offer this functionality. Some services also offer monitoring features that will alert you if a new match for an image appears online.
What to Do If You Are Targeted
If you discover that someone has created or distributed synthetic media of you without your consent, there are several steps you can take.
Document everything before attempting to have content removed. Take screenshots or save copies of the offending material, including URLs, usernames, timestamps, and any messages associated with the content. Removal requests and legal complaints are strengthened by thorough documentation.
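As a simple illustration of the kind of record worth keeping, the sketch below appends one evidence entry per piece of content to a log file, with a UTC timestamp and a SHA-256 checksum of your saved copy (the checksum helps show a file was not altered after capture). The field names, paths, and values here are illustrative, not any legal standard.

```python
import hashlib
import json
import tempfile
from datetime import datetime, timezone
from pathlib import Path

def record_evidence(log_path, url, platform, username, saved_copy):
    """Append one evidence entry (JSON Lines) with a checksum of the saved copy."""
    saved_copy = Path(saved_copy)
    entry = {
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "url": url,
        "platform": platform,
        "username": username,
        "local_copy": str(saved_copy),
        "sha256": hashlib.sha256(saved_copy.read_bytes()).hexdigest(),
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Demo with throwaway files; every value below is made up for illustration.
workdir = Path(tempfile.mkdtemp())
screenshot = workdir / "screenshot.png"
screenshot.write_bytes(b"fake screenshot bytes")
log_file = workdir / "evidence.jsonl"
entry = record_evidence(log_file, "https://example.com/post/123",
                        "ExamplePlatform", "offending_account", screenshot)
```

Even a plain spreadsheet or notes file serves the same purpose; what matters is capturing URLs, usernames, and timestamps at the time of discovery, before content is taken down or deleted.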
Report the content to the platform where it has been posted. Most major social media platforms have specific policies against non-consensual intimate imagery and impersonation, and dedicated reporting channels for these violations. Meta, Google, TikTok, and others have pledged to remove NCII promptly. The StopNCII.org tool, developed in the United Kingdom and now operating internationally, allows victims to create a digital fingerprint of an image without uploading the image itself, which platforms can then use to identify and remove matching content automatically.
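The "digital fingerprint" idea can be illustrated with a simplified perceptual hash. The sketch below implements a basic difference hash (dHash); this is not the algorithm StopNCII.org actually uses (real systems use more robust schemes), but the principle is the same: visually similar images produce similar fingerprints, so a platform can match re-uploads by comparing fingerprints without the victim ever sharing the image itself.

```python
# Pure-stdlib difference hash (dHash): shrink the image to a 9x8 grid, then
# record whether each pixel is brighter than its right-hand neighbour (64 bits).
def dhash(pixels, width, height, hash_w=9, hash_h=8):
    # Nearest-neighbour shrink to hash_w columns by hash_h rows.
    grid = [
        [pixels[y * height // hash_h][x * width // hash_w] for x in range(hash_w)]
        for y in range(hash_h)
    ]
    bits = 0
    for row in grid:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(h1, h2):
    """Number of differing bits between two fingerprints (0 = identical)."""
    return bin(h1 ^ h2).count("1")

# Synthetic test "images" as 64x64 greyscale grids.
SIZE = 64
original   = [[x + y for x in range(SIZE)] for y in range(SIZE)]       # gradient
brightened = [[x + y + 5 for x in range(SIZE)] for y in range(SIZE)]   # same, brighter
flipped    = [[(SIZE - 1 - x) + y for x in range(SIZE)] for y in range(SIZE)]

h_original = dhash(original, SIZE, SIZE)
h_brightened = dhash(brightened, SIZE, SIZE)
h_flipped = dhash(flipped, SIZE, SIZE)
```

Because dHash records only brightness comparisons, uniformly brightening the image leaves the fingerprint unchanged, while a genuinely different image produces a large Hamming distance. That robustness to small edits is what lets matching survive recompression and minor tweaks.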
Contact local authorities if the content constitutes a criminal offence in your jurisdiction. Legal frameworks vary significantly between countries. The United Kingdom's Online Safety Act, Australia's Online Safety Act, and similar legislation in the European Union under the Digital Services Act create obligations on platforms and, in some cases, criminal liability for perpetrators. If you are unsure of your legal options, organisations such as the Revenge Porn Helpline in the UK or the Cyber Civil Rights Initiative in the United States offer advice and support.
Seek emotional support. Being targeted by image-based abuse or impersonation is a serious violation, and the psychological impact is well documented. Talking to a trusted friend, family member, or mental health professional is important alongside taking practical action.
Critical Media Literacy in a Synthetic Age
Being a thoughtful consumer of media has always required some scepticism, but the proliferation of synthetic media demands a more active approach. Pausing before sharing content that provokes a strong emotional reaction is valuable. Checking the original source of a striking video or image, and looking for independent corroboration from established news organisations, helps prevent the spread of misinformation.
Lateral reading, a technique promoted by organisations including the News Literacy Project and studied by researchers at the Stanford History Education Group, involves opening new browser tabs to search for information about a source rather than reading deeply into a single piece of content. This approach helps identify whether a claim or piece of media has been verified by others or whether concerns about its authenticity have already been raised.
Being aware of your own cognitive biases matters too. Research consistently shows that people are more likely to believe synthetic or misleading content that confirms views they already hold. Recognising this tendency in yourself is not easy, but it is a meaningful guard against manipulation.
The Regulatory and Platform Response
Governments and technology platforms are beginning to respond to the challenges posed by synthetic media, though the pace of regulation has not kept up with the pace of technological development.
The European Union's AI Act, which came into force in 2024, requires that certain AI-generated content be labelled as such, and places obligations on providers of high-risk AI systems. In China, regulations introduced in 2022 and expanded since require that deepfake content be clearly labelled and that platforms verify the identity of users creating such content. The United States has taken a more fragmented approach, with individual states passing laws on deepfakes in electoral contexts or intimate imagery while federal legislation has moved slowly.
Several major AI image generation platforms have introduced their own safeguards, including filters designed to prevent the generation of images depicting real, identifiable people, and Content Credentials systems that embed provenance information into generated images. These are meaningful steps, but they are not universal and can be circumvented by those determined to misuse the technology.
Looking After Your Digital Wellbeing
The anxiety that can accompany awareness of deepfakes and synthetic media is real and understandable. For young adults who have grown up sharing their lives online, the idea that those shared images could be weaponised is unsettling. Maintaining a proportionate perspective is important: the vast majority of people will never be targeted by deepfake abuse, and awareness of the risks is itself a form of protection.
Staying informed about developments in this area without becoming consumed by anxiety about it is a balance worth cultivating. Following reputable sources on digital rights and online safety, reviewing your privacy settings periodically, and having a clear sense of what you would do if you were ever targeted will leave you better prepared than most.
The technology will continue to evolve, and the social and legal responses to it will evolve alongside it. Engaging with these issues thoughtfully, supporting stronger protections for those who are targeted, and treating the people around you with care and respect online are all part of building a digital culture that is safer for everyone.