Generative AI Escalates Risks to Child Safety, Experts Warn

Experts raise alarm over generative AI's role in enhancing threats to child safety via CSAM.

Key Points

  • Generative AI facilitates the creation and distribution of child sexual abuse material (CSAM).
  • Experts liken the situation to a global pandemic, with predators using AI to locate and manipulate children's images.
  • The Internet Watch Foundation reports a rise in AI-generated deepfake CSAM on both the dark web and the clear web.
  • Legislative measures like the Kids Online Safety Act are being proposed to combat these escalating threats.

On August 1, 2025, experts highlighted the alarming threats posed by generative AI to child safety, particularly concerning the creation and distribution of child sexual abuse material (CSAM). Greg Schiller, CEO of the Child Rescue Coalition, described the proliferation of CSAM facilitated by generative AI as a potential global pandemic. The ability of predators to manipulate and create disturbing images using advanced AI tools poses new challenges for child protection efforts.

Schiller emphasized that generative AI can be used to locate and alter images of children, exploiting material taken from public sources. This capability not only allows for the creation of violent sexual scenarios but also aids in the establishment of fake identities online. The U.K.'s Internet Watch Foundation (IWF) has observed a significant uptick in AI-generated CSAM, including deepfake content that inserts children's faces into adult pornography. Such developments highlight the increasing sophistication of the imagery being generated, which is spreading not only on the dark web but also on the clear web, where its availability can help offenders rationalize their actions.

In addition, predators increasingly exploit these technologies to create deceptive social media profiles intended to lure minors, effectively normalizing abusive behaviors. The National Center for Missing and Exploited Children (NCMEC) reports a worrying trend of "sextortion," in which explicit AI-generated images are used to blackmail vulnerable children. This evolution also complicates law enforcement's work, as investigators may spend resources identifying fabricated child victims instead of addressing real threats.

The urgency of the situation has prompted discussions about legal protections, including proposed legislation like the Kids Online Safety Act, which seeks to enhance online safety measures. Experts warn that legal frameworks are struggling to keep pace with the rapid adoption of new technologies by offenders. Schiller calls for greater parental awareness and engagement to help mitigate these emerging risks and protect children in an increasingly dangerous digital landscape.