Digital Safety · 8 min read · April 2026

Recognising and Reporting Hate Speech Online: A Guide for Teenagers and Parents

Online hate speech targets teenagers based on race, religion, sexuality, disability, and other characteristics. This guide helps young people recognise what hate speech is, understand its harm, know how to report it effectively, and protect their mental health when they encounter it.

What Is Hate Speech?

Hate speech is communication that attacks people based on characteristics such as race, ethnicity, religion, gender, sexual orientation, disability, or national origin. Online hate speech takes many forms: direct slurs and insults, dehumanising comparisons, content that promotes discrimination, and harassment campaigns targeting people because of who they are.

Online hate speech affects teenagers in two main ways. First, many teenagers encounter hate speech that targets groups they belong to, which can cause significant psychological harm. Second, teenagers are also sometimes producers or sharers of hate speech, often without fully understanding its impact or the potential legal consequences.

Why Online Hate Speech Causes Real Harm

Research has documented consistent links between exposure to online hate speech and negative mental health outcomes in targeted groups. Teenagers who regularly encounter content that dehumanises or degrades them based on their identity report higher rates of anxiety, depression, and reduced sense of belonging and safety online.

The psychological impact is amplified by the scale and persistence of online hate. Unlike a hurtful comment in a corridor that a student might hear once, online content can be shared and viewed thousands of times, can resurface years later, and can appear in environments the target cannot easily exit.

For minority ethnic teenagers, LGBTQ+ teenagers, teenagers with disabilities, and others who are frequent targets of online hate, the cumulative effect of regular exposure represents a genuine public health concern.

What Hate Speech Looks Like Online

Online hate speech ranges from obviously extreme to subtler forms that are more difficult to identify:

Explicit hate speech: Direct use of slurs, explicit statements of contempt or hatred for a group, calls for discrimination or violence against a group.

Dehumanising content: Content that compares groups of people to animals or disease, denies their humanity, or presents them as inherently inferior.

Conspiracy theories targeting groups: Content that attributes global malign influence to specific racial, ethnic, or religious groups.

Coded language: Some hate speech uses coded terms or references that are understood within specific communities as slurs or dehumanising references while maintaining plausible deniability. These can be harder to identify and report.

Harassment campaigns: Organised targeting of individuals based on their identity, flooding their mentions, profiles, or comment sections with hostile content.

How to Report Hate Speech on Major Platforms

Instagram and Facebook: Use the three-dot menu on any post, comment, or profile to access the reporting tool. Select the option that most accurately describes the content; Hate Speech is usually one of the specific categories available.

TikTok: Press and hold a video or comment to access reporting options. TikTok has a specific Hate and Harassment category for content that targets people based on protected characteristics.

YouTube: Click the three-dot menu on any video or comment and select Report. Choose Hateful or Abusive Content and the most accurate subcategory.

Twitter/X: Use the three-dot menu on any tweet to report. Select Hate or the equivalent category for content targeting protected characteristics.

Snapchat, Discord, WhatsApp: All have in-app reporting for hate speech and harassment accessible from individual messages, conversations, or profiles.

When reporting, be as specific as possible about why the content violates the platform's policies. Take screenshots before reporting, in case the content is deleted before a decision is made.

When Platforms Do Not Act

Platform reporting does not always result in content removal. If repeated reports have not resulted in action on content that clearly violates platform policies:

  • Escalate by contacting the platform's Trust and Safety team directly through their website rather than using in-app reporting alone
  • Report to relevant national authorities. In the UK, the Online Safety Act has introduced new obligations for platforms, and Ofcom is responsible for enforcement. Other countries have equivalent regulatory bodies.
  • Document your reporting attempts, as this creates a record that is useful for escalation

If You Are Targeted

If you are being targeted with hate speech because of your identity:

  • You do not have to engage with it. Responding to hate speech often amplifies it further through the algorithm. Blocking and reporting without engaging is usually more effective.
  • Document before blocking. Screenshots provide evidence for reports and, if the situation is severe enough, for police reports.
  • Tell a trusted adult. If online hate speech is causing significant distress or amounts to a sustained targeted campaign, adult support is important.
  • In cases of serious threats or sustained targeted harassment, this may constitute a criminal offence. Report to police and document everything.
  • Seek mental health support if you need it. The impact of sustained exposure to hate speech is real and deserves professional attention.

Producing Hate Speech: Understanding the Consequences

Some teenagers produce or share content that constitutes hate speech without fully understanding the impact or consequences. Teenagers who use slurs, share dehumanising content, or participate in pile-on harassment directed at someone's identity should understand:

  • The psychological harm this causes to real people is genuine and serious
  • Platform bans can result in permanent loss of accounts and communities
  • In many countries, producing or sharing hate speech can be a criminal offence
  • Content shared online creates a permanent record that can resurface in future education or employment contexts

Conclusion

Online hate speech is a significant harm that affects many teenagers, particularly those from minority groups. Knowing how to recognise it, report it effectively, and protect your own mental health when you encounter it are important parts of navigating the internet safely. No young person should have to tolerate content that attacks them for who they are.
