How to Spot a Deepfake Video: A Practical Guide to Protecting Yourself and Your Family
Deepfake videos are becoming alarmingly convincing, but they still leave telltale clues. Learn the practical, expert-backed techniques anyone can use to identify manipulated video content before it causes real harm.
Why Learning How to Spot a Deepfake Video Matters More Than Ever
In January 2024, digitally fabricated explicit images of Taylor Swift spread across X (formerly Twitter), reaching tens of millions of views before the platform could intervene. Around the same time, a finance worker in Hong Kong transferred roughly £20 million to criminals after a video call with what appeared to be his company's chief financial officer. It was a deepfake. These are not edge cases anymore. They are warnings.
According to research published by Sumsub, the number of detected deepfakes globally increased tenfold between 2022 and 2023, with the UK ranking as the third most targeted country worldwide. By early 2026, the World Economic Forum estimates that AI-generated synthetic media accounts for roughly 8% of all video content shared on major social platforms. The technology that once required Hollywood budgets now sits in free smartphone apps.
Knowing how to spot a deepfake video is no longer a niche technical skill. It is a basic digital safety competency, as essential as recognising a phishing email or checking a website's padlock icon. This guide will give you the practical knowledge to protect yourself, your children, and your older relatives from manipulated video content.
What Exactly Is a Deepfake?
A deepfake is a piece of synthetic media, typically video or audio, created using artificial intelligence to make it appear that someone said or did something they never actually did. The term combines "deep learning" (a branch of machine learning) with "fake."
How deepfakes are made
Most deepfakes rely on a type of AI architecture called a generative adversarial network, or GAN. In simple terms, two neural networks compete against each other. One generates fake content; the other tries to detect it. Through thousands of iterations, the generator becomes remarkably skilled at producing convincing results.
The three main types
Face-swap deepfakes place one person's face onto another person's body. These are the most common and make up the majority of malicious deepfakes online. Face re-enactment deepfakes manipulate an existing video so the person appears to say different words or show different expressions. Fully synthetic deepfakes generate an entirely fictional person who has never existed, often used in scam advertisements and fake social media profiles.
The Visual Clues: How to Spot a Deepfake Video With Your Own Eyes
Despite rapid improvements in AI generation, deepfake videos still produce consistent visual artefacts. Training yourself to notice these details is your strongest first line of defence. Let us work through them systematically.
Watch the eyes carefully
Human eyes are extraordinarily complex, and AI still struggles to replicate them convincingly. Look for irregular blinking patterns; a 2023 study from the University at Albany found that deepfake subjects blink 40% less frequently than real people in comparable video settings. Check whether light reflections in both eyes match. In authentic footage, the small white reflections (called catchlights) should appear in the same position in both eyes because both eyes are receiving light from the same source. Deepfakes frequently get this wrong.
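For readers who want to see the blink-rate heuristic as a concrete calculation, here is a minimal Python sketch. The timestamps, the 10-blinks-per-minute lower bound, and the function names are illustrative assumptions; real analysis would need an automated blink detector to produce the timestamps in the first place.

```python
# Rough blink-rate heuristic: adults typically blink around 15-20
# times per minute at rest, so a markedly lower rate in a talking-head
# video is one (weak) signal of possible manipulation.
# The timestamps below are hypothetical, not from a real detector.

TYPICAL_MIN_BLINKS_PER_MINUTE = 10  # generous lower bound (assumption)

def blink_rate_per_minute(blink_times_s, duration_s):
    """Blinks per minute, given blink timestamps in seconds."""
    if duration_s <= 0:
        raise ValueError("duration must be positive")
    return len(blink_times_s) * 60.0 / duration_s

def looks_suspicious(blink_times_s, duration_s):
    """True when the blink rate falls well below the human baseline."""
    rate = blink_rate_per_minute(blink_times_s, duration_s)
    return rate < TYPICAL_MIN_BLINKS_PER_MINUTE

# Example: only 3 blinks in a 60-second clip is 3 blinks/min.
print(looks_suspicious([12.0, 31.5, 55.2], 60.0))  # True
```

On its own this proves nothing, of course; it simply formalises the rule of thumb so the threshold is explicit rather than a vague impression.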
Examine the skin and face boundaries
Pay close attention to the boundary where the face meets the hairline, ears, and neck. Deepfakes often show a subtle but visible "seam" where the generated face blends into the original footage. The skin texture may appear unnaturally smooth in some areas and overly detailed in others. Pores, fine lines, and moles may appear and disappear between frames. If you pause the video and step through it frame by frame, these inconsistencies become far more obvious.
Check the teeth and mouth
Teeth remain one of deepfake technology's persistent weak spots. Look for teeth that appear to merge together, lack individual definition, or change shape as the person speaks. The interior of the mouth, including the tongue, is often rendered as a vague dark area rather than showing realistic anatomical detail. When the person speaks, watch whether lip movements match the audio precisely. Even a slight mismatch of a fraction of a second is a strong indicator of manipulation.
Look at the ears and jewellery
Ears are uniquely shaped and asymmetrical in real people. Deepfakes frequently produce ears that look slightly different from frame to frame or appear unusually symmetrical. Earrings, glasses, and other accessories near the face boundary are particularly difficult for AI to render consistently. Watch for jewellery that flickers, changes shape, or partially disappears during movement.
Observe head and body movement
Many deepfakes are generated from relatively still source material. When the subject turns their head significantly, particularly beyond a 30-degree angle, you may notice warping, blurring, or momentary distortion. The neck and shoulders may not move naturally in coordination with the head. In lower-quality deepfakes, the body may appear to belong to someone of a slightly different build than the face suggests.
The Audio Clues You Should Not Ignore
Video deepfakes are increasingly paired with cloned audio, making detection a multi-sensory task. Do not rely on visual inspection alone.
Listen for unnatural speech patterns
AI-generated voice cloning has improved dramatically, but it still struggles with certain elements of natural speech. Listen for unusually consistent pacing, as real people vary their speaking rhythm naturally. Breathing sounds may be absent or oddly placed. Emotional inflection often sounds subtly flat or mismatched to the content being spoken. The pronunciation of unusual words, names, or technical terms may sound slightly off.
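The "unusually consistent pacing" clue above can be made concrete with a short sketch: measure how much the gaps between word onsets vary. Human speech shows noticeable variation; near-uniform gaps can hint at synthetic audio. The word-onset times and the thresholds are illustrative assumptions, and in practice you would obtain the onsets from a transcription tool.

```python
# Sketch of the pacing-consistency check: compute the coefficient of
# variation (std dev / mean) of the gaps between word onsets. A value
# near zero means robotically even pacing, which natural speech avoids.
import statistics

def pacing_variation(word_onsets_s):
    """Coefficient of variation of inter-word gaps, in seconds."""
    gaps = [b - a for a, b in zip(word_onsets_s, word_onsets_s[1:])]
    if len(gaps) < 2:
        raise ValueError("need at least three word onsets")
    return statistics.stdev(gaps) / statistics.mean(gaps)

# Perfectly even onsets every 0.4 seconds: variation is essentially 0.
robotic = [i * 0.4 for i in range(10)]
print(round(pacing_variation(robotic), 3))  # 0.0
```

A single number like this cannot label a clip fake by itself, but it illustrates why "too regular" rhythm is something your ear can legitimately pick up on.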
Background audio inconsistencies
Authentic video typically contains ambient sound that matches the environment. A person apparently speaking outdoors should have wind noise, traffic, or birdsong. Deepfake audio is often generated in isolation and then layered onto the video, resulting in a background that sounds too clean, too consistent, or mismatched with the visual environment.
Context Clues: The Non-Technical Red Flags
You do not always need to analyse pixels and audio waveforms. Some of the most reliable deepfake indicators are contextual, and anyone can learn to spot them.
Ask the source question
Where did this video first appear? If a supposedly significant video of a public figure surfaces on an anonymous social media account rather than through established news organisations, treat it with immediate suspicion. Legitimate news outlets have verification processes. A shocking video that appears exclusively on TikTok, Telegram, or X without any corroborating coverage from the BBC, Reuters, or the Press Association deserves serious scrutiny before you believe it, let alone share it.
Apply the motivation test
Who benefits from this video existing? Deepfakes are created with intent. That intent is typically financial fraud, political manipulation, personal revenge, or harassment. If a video conveniently supports a particular narrative, damages a specific person's reputation, or pressures you to act quickly, particularly with money, pause and investigate further.
Check the timing
Deepfakes are frequently deployed during moments of high emotion or urgency: elections, financial crises, breaking news events, or personal emergencies. The criminals behind deepfake scams rely on your emotional reaction overriding your critical thinking. If a video makes you feel an overwhelming urge to act immediately, that urgency itself is a red flag.
Free Tools and Techniques for Deeper Verification
When your eyes and instincts raise an alarm, several accessible tools can help you investigate further.
Reverse image and video search
Take a screenshot from the suspicious video and run it through Google Reverse Image Search or TinEye. This can reveal whether the face belongs to a real public figure, whether the original unmanipulated footage exists elsewhere, or whether the same fake has been flagged by other users. For video specifically, the InVID WeVerify plugin for Chrome and Firefox allows you to fragment a video into keyframes and search each one individually. It is free and was developed specifically for journalists and fact-checkers.
AI detection tools
Several free and low-cost deepfake detection tools are now available to the public. Microsoft's Video Authenticator analyses individual frames and provides a confidence score indicating the likelihood of manipulation. Sensity AI operates a detection platform that has been used by governments and newsrooms across Europe. Deepware Scanner allows you to submit video URLs for automated analysis. It is worth noting that no detection tool is 100% accurate; they are best used as one component of a broader verification process rather than as a definitive answer.
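Because no single detector is reliable on its own, one sensible pattern is to treat each tool's confidence score as a vote and only flag a video when several agree. The sketch below shows that idea in Python; the detector names, scores, and thresholds are hypothetical placeholders, not real API output from any of the tools named above.

```python
# Combine several detectors' manipulation scores (0.0-1.0) into one
# verdict: flag the video only when at least `min_flags` detectors
# score it above `flag_threshold`. All values here are assumptions.

def combined_verdict(scores, flag_threshold=0.7, min_flags=2):
    """True when enough independent detectors agree the video
    is likely manipulated."""
    flags = sum(1 for s in scores.values() if s >= flag_threshold)
    return flags >= min_flags

# Hypothetical scores from three different detection services.
scores = {"tool_a": 0.91, "tool_b": 0.78, "tool_c": 0.35}
print(combined_verdict(scores))  # True: two of three detectors agree
```

Requiring agreement between independent tools mirrors the broader advice in this guide: one signal is a hint, several converging signals are a warning.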
Frame-by-frame analysis
If you have a downloaded copy of the video, open it in VLC Media Player (free on all platforms) and use the "E" key to advance one frame at a time. Many deepfake artefacts that are invisible at normal playback speed become glaringly obvious when you step through individual frames. Pay particular attention to moments of rapid movement, transitions between expressions, and the edges of the face.
Real-World Deepfake Scenarios: What to Watch For
Understanding common attack patterns helps you stay alert in the situations where deepfakes are most likely to target you.
Video call scams
The Hong Kong case mentioned earlier was not an isolated incident. By 2025, UK Action Fraud reported a 300% year-on-year increase in fraud cases involving real-time deepfake video calls. Criminals clone the face and voice of a colleague, family member, or authority figure to authorise payments or extract sensitive information. If you receive an unexpected video call requesting money or confidential data, hang up and call the person back on a verified number. This single habit can prevent significant financial loss.
Political misinformation
During the 2024 elections in the UK, US, and India, researchers from the Alan Turing Institute documented over 500 politically motivated deepfake videos across major platforms. Many were crude, but some were sophisticated enough to fool experienced journalists temporarily. Before the next UK general election, Ofcom has urged all citizens to verify political video content through at least two independent sources before sharing it.
Intimate image abuse
This is perhaps the most personally devastating use of deepfake technology. The Internet Watch Foundation reported in 2024 that AI-generated intimate imagery had increased by over 400% in a single year. Victims include children. In the UK, the Online Safety Act 2023, strengthened by subsequent amendments, now makes the creation and sharing of deepfake intimate images a specific criminal offence carrying up to two years in prison. If you or someone you know is a victim, report it to the police and the Revenge Porn Helpline (0345 6000 459) immediately.
Teaching Different Age Groups to Spot Deepfakes
Digital safety education works best when it is tailored to the learner's age and experience.
Children aged 7 to 12
Focus on the simple concept that "not everything you see online is real, even if it looks real." Use age-appropriate examples, such as AI-generated images of fictional animals, to demonstrate that computers can create convincing fakes. Establish the habit of asking a trusted adult before believing or sharing any surprising video content. The PSHE Association now includes media literacy modules for Key Stage 2 that cover basic synthetic media awareness.
Teenagers aged 13 to 17
Teens are heavy consumers of short-form video on TikTok, Instagram, and YouTube Shorts, making them highly exposed to deepfake content. Teach them the specific visual and audio detection techniques described in this guide. Discuss the legal consequences of creating or sharing deepfake content, particularly intimate imagery. Encourage critical evaluation of any video that provokes a strong emotional reaction before sharing. According to Ofcom's 2025 Children's Media Literacy report, only 28% of 13 to 17-year-olds felt confident in their ability to identify AI-manipulated video.
Older adults
Older adults are disproportionately targeted by deepfake video call scams. Age UK data from 2025 shows adults over 65 lost an average of £12,400 per deepfake fraud incident compared to £3,200 for younger demographics. Focus on the video call verification habit: if someone calls requesting money or personal information, always hang up and call back on a known number. Help them bookmark the InVID WeVerify tool and practise using reverse image search together.
What to Do If You Encounter a Suspected Deepfake
Having a clear action plan prevents panic and ensures you respond effectively.
Do not share it
This is the single most important rule. Every share amplifies the harm, regardless of whether you share it with a "can you believe this?" caption. Even sharing a deepfake to debunk it increases its visibility and reach. Research from MIT found that false content spreads six times faster than accurate content on social media. Do not contribute to that velocity.
Report it to the platform
All major social media platforms now have specific reporting categories for AI-generated or manipulated media. Use them. Platforms prioritise removal when content receives multiple reports, so your individual report genuinely matters. On YouTube, use the three-dot menu and select "Report" then "Misinformation" then "Manipulated media." On X, TikTok, Facebook, and Instagram, similar pathways exist under their reporting functions.
Preserve evidence
If the deepfake targets you or someone you know, take screenshots and screen recordings before reporting it. Save the URL, the username of the account that posted it, and the date and time you first encountered it. This evidence may be critical if law enforcement becomes involved. Store it securely and share it only with the police or legal advisers.
Report to authorities if appropriate
In the UK, you can report deepfake fraud to Action Fraud on 0300 123 2040 or online at actionfraud.police.uk. For deepfake intimate imagery, contact both the police and the Revenge Porn Helpline. For deepfakes targeting children, report to the Internet Watch Foundation at iwf.org.uk and the police immediately.
Staying Ahead: Building Long-Term Deepfake Resilience
Detection techniques that work today may be less effective in six months as the technology improves. Building genuine resilience means developing habits and mindsets rather than relying solely on specific technical tricks.
Cultivate healthy scepticism
Adopt the principle that extraordinary video claims require extraordinary verification. This does not mean distrusting everything you see online. It means developing a proportionate response: the more surprising, emotional, or consequential a video appears, the more verification it deserves before you accept it as genuine.
Keep your knowledge current
Deepfake technology evolves rapidly, and so do detection methods. Follow reputable sources such as the Alan Turing Institute, Full Fact (the UK's independent fact-checking organisation), and the BBC's Verify team. These organisations regularly publish updated guidance on identifying the latest generation of synthetic media.
Verify before you amplify
Make this your personal rule for all video content, not just suspected deepfakes. Before you share any video that could influence someone's opinion, reputation, or decisions, take thirty seconds to check: has a credible news source reported this? Can I find the original source? Does this pass the basic visual and contextual checks outlined in this guide? Those thirty seconds could prevent real harm to real people.
Learning how to spot a deepfake video is not about becoming paranoid or distrusting all digital media. It is about equipping yourself with the knowledge and habits to navigate an information environment that is genuinely more complex than it was five years ago. The technology will continue to advance. Your awareness, your critical thinking, and your willingness to pause before reacting will remain your most powerful defences.