Digital Safety · 9 min read · April 2026

Online Hate Speech: How It Affects Young People and What Families Can Do

Young people encounter hate speech online daily, whether directed at them, their communities, or others. This guide explains how online hatred affects teenagers, the different forms it takes, and how families can respond.

What Is Online Hate Speech?

Online hate speech refers to content that attacks or demeans individuals or groups based on characteristics including race, ethnicity, religion, gender, sexual orientation, disability, or national origin. It ranges from slurs and insults to content that dehumanises entire communities, promotes violent ideologies, or calls for discrimination. It appears in social media posts and comments, gaming lobbies, messaging apps, video content, and online forums.

Young people encounter hate speech online with striking regularity. A 2023 report by UNICEF found that approximately one in three young people in 30 countries reported experiencing online bullying and hate, with many encountering content targeting their race, religion, or identity. The internet has not created hatred, but it has provided it with unprecedented reach, anonymity, and platforms that in some cases actively amplify divisive content because it drives engagement.

The Different Forms Online Hate Takes

Understanding the range of forms online hate speech takes helps families and young people recognise it across different contexts.

Direct targeting involves hate speech directed at a specific individual because of their identity. A teenager who posts about their religion, ethnicity, sexuality, or disability may receive comments, messages, or replies that attack them on the basis of that identity. This can be an isolated incident or part of a coordinated campaign involving multiple accounts.

Community-targeted content involves hate directed at groups rather than specific individuals, but which is nonetheless experienced personally by members of those groups. A Muslim teenager who encounters anti-Islamic content, a Black teenager who sees racist memes, or a disabled young person who encounters content mocking disability all experience a direct impact even when they are not individually named.

Coded and ironic hate speech uses humour, irony, plausible deniability, or coded language to convey hateful content while avoiding easy moderation. The phrase "just joking" is frequently used to deflect accountability. This form is particularly difficult for young people to challenge because doing so risks being dismissed as oversensitive.

Algorithmic amplification of hate content occurs when platforms recommend increasingly extreme or hateful content because it generates engagement. A teenager who watches a moderately contentious video may be recommended progressively more extreme content, potentially including genuinely hateful material, because the algorithm prioritises watch time over content quality.

Normalised hate in online communities refers to the way certain online spaces, including some gaming communities, forums, and social media groups, have developed cultures in which derogatory language about particular groups is treated as normal banter. Young people who spend time in these spaces may absorb attitudes and language patterns that go unchallenged within the community.

How Online Hate Affects Young People

The psychological impact of encountering or being targeted by online hate speech is substantial and well-documented. Teenagers who experience hate speech directed at their identity report higher rates of anxiety, depression, and reduced sense of safety and belonging. For young people whose identity is already stigmatised in offline contexts, online hate can reinforce and amplify existing experiences of marginalisation.

Research from the Cyberbullying Research Center and similar organisations has found that identity-based online harassment causes more significant psychological harm than general online harassment, because it attacks the young person at the level of who they are rather than what they have done. Young people who are targeted for their ethnicity, religion, sexuality, or disability often describe a particular sense of powerlessness and dehumanisation.

Young people from minority groups who encounter hate speech directed at their communities may develop what researchers describe as hypervigilance: an ongoing state of alertness for potential threats that is emotionally exhausting and that can affect their willingness to engage online at all. Some disengage from social media, gaming, or online communities to avoid exposure, losing access to digital spaces that have genuine value for their social and educational lives.

For young people who are questioning their identity, including LGBTQ+ teenagers exploring their sexuality or gender identity, online spaces can be both a source of affirming community and a source of intense hostility. The same internet that connects a gay teenager in a rural community with others like them can also expose them to organised religious hatred or political rhetoric designed to deny their legitimacy.


Hate Speech and Radicalisation

For a smaller number of young people, repeated exposure to online hate speech is part of a pathway toward radicalisation. Online ecosystems that normalise hatred of particular groups, that frame violence as justified, or that provide a sense of community and belonging around shared hostility can be particularly appealing to young people who feel alienated, marginalised, or in search of clear explanations for complex problems.

The relationship between consuming hate speech and adopting hateful beliefs is not straightforward or inevitable. Many young people encounter hateful content online without internalising it. However, for those who are already vulnerable, the combination of algorithmic recommendation, community reinforcement, and the sense of belonging that some hate-oriented online communities offer can constitute a genuine radicalisation risk.

Early warning signs include increasingly extreme statements about particular groups, dismissal of the humanity of those groups, references to extreme ideological content, and social withdrawal from those who do not share these views. These signs warrant serious attention and sensitive engagement from trusted adults rather than punishment or dismissal.

What Young People Can Do

Young people who encounter online hate speech have several practical options. The most important first step is to recognise that the hate is about the perpetrator, not about them or their community. Hate speech reflects the perpetrator's ignorance, fear, or deliberate cruelty, not any truth about the person or group being targeted.

Documenting hate speech through screenshots, including the account name and the date, creates a record that can be used for reporting. Most platforms provide specific reporting mechanisms for hate speech, and reporting content, while not always resulting in immediate removal, contributes to patterns of behaviour that platforms use in enforcement decisions.

Blocking and muting accounts that produce hateful content is a straightforward protective action. Young people do not owe anyone access to their online presence, and there is no obligation to engage with or respond to hate speech. Engaging with hate often escalates rather than resolves the situation.

Talking to a trusted adult about hate speech experienced online is important, particularly when it is persistent, targeted, or causing significant distress. Young people from communities that are frequently targeted by online hate may feel that adults do not understand the experience or will dismiss its significance. Creating environments where these conversations can happen without minimisation is a key responsibility for parents and carers.

What Families Can Do

Parents and carers of young people who encounter online hate speech can provide significant support through validation, practical guidance, and advocacy. Validating that the experience is real and harmful, and avoiding minimisation through comments like "just ignore it" or "it does not mean anything", is the essential starting point.

Understanding the specific platforms and contexts where your child spends time online allows more relevant conversations about safety. Knowing that a particular gaming community has a reputation for toxic behaviour, or that a specific type of content tends to generate hateful responses, allows targeted conversations rather than generic advice.

Engaging with the school if hate speech involves other students, or if content is affecting your child's ability to engage in school-related online activities, ensures that institutional responses are available as well as family support.

For parents whose children are from communities frequently targeted by online hate, it is also worth acknowledging directly that this form of online experience is specifically linked to identity and that it is unjust. Young people who feel that their experience is understood and taken seriously are better equipped to cope with it than those who are advised simply to toughen up or not let it bother them.

Platform Accountability and What Is Being Done

Major platforms have policies against hate speech and dedicated trust and safety teams responsible for enforcement. In practice, enforcement is inconsistent, and the sheer volume of content means that much hate speech goes unmoderated unless reported. Regulatory pressure in multiple countries is driving stronger enforcement obligations, with legislation in the European Union, UK, Australia, and elsewhere introducing specific requirements for platforms to address illegal hate speech.

These systemic responses matter and are improving, but they operate at a different timescale from the immediate experiences of young people encountering hate today. The practical skills of recognition, reporting, protection, and support remain essential even as platform policies and regulatory frameworks continue to develop.
