Keeping Your Children Safe Online in the Age of AI

The internet your children are using today is fundamentally different from the one that existed even three years ago. Artificial intelligence has changed what threats look like, how quickly they spread, and how convincingly they disguise themselves.

In January 2026, a coalition of thirteen United Nations bodies - including UNESCO, UNICEF, ITU, and the UN Committee on the Rights of the Child - issued a Joint Statement on Artificial Intelligence and the Rights of the Child, describing the risks as urgent and society's current ability to cope with them as dangerously inadequate.

This article condenses that global guidance into practical advice for families in South Africa.

A father and daughter explore the internet together - the kind of ongoing, open engagement that research consistently identifies as the most effective protection against online harm.

 

Summary

AI has made the threats facing children online fundamentally more dangerous. Predators now use AI to groom children with automated, personalised contact. Ordinary photos posted to social media can be manipulated into explicit deepfakes. Scams targeting young people are more convincing than ever because AI has eliminated the spelling mistakes and awkward phrasing that used to give them away. In January 2026, thirteen United Nations bodies - including UNESCO, UNICEF, and the ITU - issued a joint statement warning that society's ability to protect children is not keeping pace with the technology.

For South African families specifically: POPIA gives you legal rights over your child's data online, but enforcement is limited and most platforms are not designed with children's wellbeing in mind. South African children rank among the most at-risk globally for cyberbullying and harmful online contact, with low levels of parental guidance cited as a key factor.

The most important things you can do right now:

  • Talk to your children openly about what they do and see online - ongoing conversation is more protective than any filter.
  • Explain that photos posted online, however innocent, can be weaponised by AI tools.
  • Make sure your child knows that sextortion and deepfake abuse are crimes, are not their fault, and must be reported.
  • Enable two-factor authentication and use a password manager on all family accounts.
  • Know where to report: the Film and Publication Board (www.fpb.org.za) and SAPS Cybercrime handle child exploitation cases. SADAG (0800 456 789) provides 24-hour emotional support.

The landscape has changed: what AI actually means for your child

Most cybersafety advice written before 2023 is now incomplete. The familiar warnings - don't talk to strangers, don't share your password, think before you post - remain valid, but they no longer cover the full picture.

AI has introduced a new class of threat. Generative AI tools can now produce realistic fake images, audio, and video from minimal source material. They can be used to impersonate people your child knows, to fabricate compromising content, and to power grooming strategies that are more sophisticated and more personalised than anything human predators could manage alone. The UN's January 2026 joint statement describes predators using AI to analyse a child's online behaviour, emotional state, and interests in order to tailor their approach precisely to that child's vulnerabilities.

The scale of the problem is difficult to overstate. A 2025 report from the Childlight Global Child Safety Institute found that technology-facilitated child abuse cases in the United States alone grew from 4,700 in 2023 to over 67,000 in 2024. That trajectory reflects a global reality, not an American one.


Young people engage with online environments enthusiastically, and often uncritically. Structure and conversation matter more than any filter. Photo: Anil Sharma / Pexels

What South African parents specifically need to know

Research compiled in the Children's Online Safety Index found that South African children are among those at greatest risk of cyberbullying, risky online contact, and reputational harm - and that this risk is compounded by relatively low levels of parental guidance and online safety education.

The TransUnion Consumer Pulse report for the second quarter of 2025 found that nearly six in ten South Africans reported being targeted by digital fraud in the preceding three months. Children and teens are disproportionately vulnerable because scammers routinely disguise themselves as influencers, gaming sites, competitions, and online friends.

South Africa's Protection of Personal Information Act (POPIA) defines anyone under the age of 18 as a child, and requires that a parent or guardian provide consent before a child's personal information may be processed. This means that when your child signs up for a service, that service has legal obligations to you. However, enforcement remains limited, and most of the platforms your children actually use are not designed with their wellbeing as a primary consideration - a point the UN joint statement makes explicitly about AI tools and the companies that build them.


Understanding the specific AI threats

AI-generated scams and deepfakes can reach any child on any device. The threats described are not hypothetical - they are happening now. Photo: Liza Summer / Pexels

Deepfakes and image abuse

AI tools can take ordinary photos - the kind your child posts on social media - and manipulate them into sexually explicit content. UNICEF reported in early 2026 that at least 1.2 million children disclosed having had their images manipulated this way in the preceding year. The European Parliament's research service projects that approximately eight million deepfakes will be shared in 2025, up from 500,000 in 2023, with pornographic content accounting for the overwhelming majority. Children are more vulnerable than adults to this threat because their cognitive development makes it harder to detect fake imagery, and because they tend to share images more freely.

The practical implication is that photos do not need to be compromising at the time of posting to become weaponised later. A school photo, a holiday picture, a selfie - any image can be source material.

AI-powered grooming

Traditional online grooming requires a predator to invest significant time building trust. AI dramatically lowers that barrier by enabling automated, personalised, sustained contact that adapts to a child's responses in real time. The UN joint statement specifically warns that predators can use AI to monitor a child's online behaviour and emotional state, constructing an approach tailored to that individual child.

Scams and phishing targeting young people

AI is used to generate highly convincing phishing messages, fake competitions, and fraudulent gaming content at a volume and quality that would previously have required considerable human effort. Your child may receive what appears to be a personalised message from a favourite creator, a school administrator, or even you. The quality of AI-generated text has now largely eliminated the spelling errors and awkward phrasing that used to be reliable warning signs.

Misinformation

AI-generated content can also include convincing fake news, fabricated events, and manipulated video of public figures. Young people who encounter this content without a framework for evaluating sources are more likely to accept and share it.


Practical guidance by age group

Children under 10

At this age, the most effective protection is structure, not software alone. Keep devices in shared family spaces. Watch content together whenever possible. Make conversations about the internet as normal as conversations about road safety - matter-of-fact, ongoing, and free of shame. Establish the principle that your child will always come to you if something online makes them feel uncomfortable, confused, or scared, without fear of punishment for having stumbled into something.

Parental controls are useful at this age but should be understood as training wheels rather than a permanent solution. The goal is to build judgment.

Children aged 10 to 13

This is the age at which peer influence becomes intense and the desire for privacy increases. It is also the age at which social media platforms become relevant, despite most platforms setting a minimum age of 13. Research consistently shows that age verification on these platforms is largely ineffective.

Have explicit, honest conversations about what social media platforms actually do with your child's data and attention. Explain that their profile, their posts, and their browsing behaviour are commercial products being sold to advertisers and, in many cases, to data brokers. This is not a lecture - it is genuinely useful information that most adults also lack.

At this age, introduce the concept of digital permanence: anything shared online can be screenshotted, forwarded, and preserved indefinitely, regardless of what privacy settings suggest. This applies especially to images.

Introduce strong password habits now. A password manager is the most practical tool available. Two-factor authentication should be enabled on any account your child uses.

Teenagers aged 14 to 17

Teenagers need increasing autonomy, and attempts to surveil rather than educate typically backfire. The UN joint statement is clear that AI literacy - the ability to understand and critically evaluate AI-generated content - is currently dangerously low among young people, their teachers, and their caregivers alike. Building that literacy is now a core parenting responsibility.

Talk specifically about deepfakes. Explain what they are, how easily they can be created, and what to do if your teenager encounters one - whether as a victim or a bystander. The answer in all cases is to come to a trusted adult, not to try to manage it alone, and not to share the content further.

Discuss the reality that AI chat tools and social media algorithms are designed to be engaging, not beneficial. This is not a reason to avoid them, but it is a reason to be deliberate about how much time and trust is extended to them.

Make sure your teenager knows that sextortion - where fabricated or real intimate images are used to extort money or compliance - is a crime, that it is not their fault, and that there are places to report it. In South Africa, the South African Police Service Cybercrime unit and the Film and Publication Board both handle these complaints.


Having the conversation

Children who regularly share their online experiences with a parent are significantly better equipped to handle threats when they arise. The relationship is the protection. Photo: August de Richelieu / Pexels

Research consistently shows that children who are already talking openly with their parents about online experiences are significantly better equipped to handle threats when they encounter them. The conversation does not need to be comprehensive, technical, or formal. It needs to be ongoing.

Ask your children to show you what they find interesting online, without immediately evaluating or criticising it. This builds the habit of sharing. When something concerning appears in the news - a cyberbullying case, a scam, a deepfake story - use it as a low-pressure opportunity to discuss the issue in the abstract before it becomes personal.

Be honest about your own uncertainty. Most parents know less about these technologies than their teenagers. Saying so, and learning alongside your child, is more effective than projecting authority you do not have.

Establish clear agreements rather than rules wherever possible. An agreement about screen time at meals is easier to maintain than a unilateral ban. Agreements about what to do if something goes wrong online are more valuable than any content filter.



If something goes wrong

Ensure your child knows in advance what to do, so that they are not making decisions in a moment of panic or shame.

Do not delete evidence. Screenshots of threatening messages, fake accounts, or manipulated images are necessary for reporting. Report to the platform first - most platforms have reporting mechanisms for child safety violations, and legal obligations apply to them under South African law.

Contact the South African Police Service's Cybercrime unit, or report online child exploitation to the Film and Publication Board at www.fpb.org.za.

Seek emotional support. The harm caused by cyberbullying, image abuse, and predatory contact is real and can be serious. The South African Depression and Anxiety Group (SADAG) operates a 24-hour support line at 0800 456 789.


The broader picture

The thirteen UN bodies behind the January 2026 joint statement are not alarmist organisations. Their collective message is that the current moment requires all parts of society - governments, technology companies, schools, and families - to take responsibility, and that the gap between the pace of AI development and society's ability to manage its consequences is growing. AI literacy - the ability to understand what AI can and cannot do, where it comes from, and what interests it serves - is now a basic digital competency alongside reading and numeracy.

As the ITU's Director of Telecommunication Development put it when the statement was released: children are getting online at a younger age, and they should be protected. That protection begins at home, in ordinary conversation, before anything goes wrong.


Key resources and external links

The guidance in this article draws on the following authoritative sources. South African parents and educators are encouraged to consult them directly.

Global authorities

  • UN Joint Statement on AI and the Rights of the Child (January 2026): the most current global framework, co-signed by UNESCO, UNICEF, ITU, ILO, and nine other UN bodies.
    Download the statement (ITU.int, PDF)
  • UNICEF Policy Guidance on AI and Children, Version 3.0 (2025): practical recommendations for governments and industry, updated to address generative AI, deepfakes, and AI-generated child sexual abuse material.
    Read the guidance (UNICEF Innocenti)
  • UNICEF: Generative AI - Risks and Opportunities for Children: accessible overview of how generative AI creates new threats and what families should understand.
    Read the brief (UNICEF Innocenti)
  • ITU Child Online Protection Guidelines for Parents and Educators: foundational ITU guidance covering online risk categories, practical parenting strategies, and educator tools.
    Access the guidelines (ITU.int)
  • UN News: From deepfakes to grooming - AI threats to children (January 2026): accessible summary of the Joint Statement and the current threat landscape.
    Read the article (UN News)

South African contacts and reporting

  • Film and Publication Board (FPB): report child sexual exploitation material and other illegal online content.
    www.fpb.org.za
  • South African Police Service - Cybercrime Unit: report cybercrime including sextortion, online fraud targeting children, and AI-generated abuse material.
    SAPS contact and complaints (SAPS.gov.za)
  • South African Depression and Anxiety Group (SADAG): 24-hour emotional support for young people and caregivers dealing with the aftermath of online harm.
    www.sadag.org - Helpline: 0800 456 789
  • MySociaLife: South African digital wellness and online safety education organisation working with schools and parents.
    www.mysocialife.com

The Internet Service Providers' Association was established in 1996 and is the officially recognised internet industry representative body for South Africa. This article draws on the UN Joint Statement on Artificial Intelligence and the Rights of the Child (January 2026), UNICEF's Policy Guidance on AI and Children Version 3.0 (2025), the Childlight Global Child Safety Institute report (2025), the TransUnion Consumer Pulse Q2 2025 report, and the Microsoft/UNICEF Children's Online Safety Index.