UK Technology Firms and Child Safety Officials to Examine AI's Ability to Create Exploitation Content

Technology companies and child protection organizations will be granted permission to evaluate whether AI systems can generate child exploitation images under recently introduced UK laws.

Substantial Rise in AI-Generated Illegal Content

The announcement coincided with findings from a protection monitoring body showing that cases of AI-generated child sexual abuse material have more than doubled in the past year, growing from 199 in 2024 to 426 in 2025.

Updated Regulatory Framework

Under the changes, the authorities will allow designated AI developers and child safety organizations to inspect AI models – the underlying systems for chatbots and visual AI tools – and ensure they have sufficient safeguards to stop them from creating depictions of child exploitation.

"Ultimately about stopping exploitation before it happens," stated Kanishka Narayan, noting: "Experts, under strict protocols, can now identify the danger in AI systems promptly."

Addressing Regulatory Challenges

The amendments have been implemented because it is illegal to produce and possess CSAM, meaning that AI creators and others cannot generate such content as part of a testing regime. Until now, authorities had to delay action until AI-generated CSAM was published online before dealing with it.

This legislation is aimed at preventing that issue by enabling experts to halt the production of those images at their origin.

Legal Structure

The changes are being added by the government as modifications to the Crime and Policing Bill, which is also establishing a prohibition on possessing, creating or distributing AI models designed to create child sexual abuse material.

Practical Impact

The minister recently visited the London headquarters of Childline and heard a mock-up call to counsellors featuring a report of AI-based exploitation. The call portrayed an adolescent seeking help after facing extortion using an explicit deepfake of themselves, constructed using AI.

"When I hear about young people facing blackmail online, it is a cause of intense frustration in me and justified concern amongst parents," he stated.

Alarming Data

A leading online safety foundation stated that instances of AI-generated exploitation content – such as online pages that may include multiple images – had significantly increased so far this year.

Instances of the most severe content – the gravest form of abuse – rose from 2,621 visual files to 3,086.

  • Girls were overwhelmingly victimized, accounting for 94% of illegal AI images in 2025
  • Depictions of infants to two-year-olds rose from five in 2024 to 92 in 2025

Sector Response

The legislative amendment could "constitute a crucial step to guarantee AI products are secure before they are released," stated the chief executive of the online safety foundation.

"Artificial intelligence systems have made it so victims can be targeted all over again with just a few clicks, giving offenders the capability to make potentially limitless quantities of advanced, photorealistic child sexual abuse material," she continued. "Material which further commodifies survivors' suffering, and makes children, particularly female children, less safe on and off line."

Counseling Interaction Data

The children's helpline also released information of support sessions where AI has been referenced. AI-related harms discussed in the conversations include:

  • Using AI to rate body size, shape and appearance
  • Chatbots dissuading children from consulting trusted adults about harm
  • Being bullied online with AI-generated material
  • Digital blackmail using AI-manipulated pictures

Between April and September this year, Childline conducted 367 support sessions where AI, chatbots and related terms were discussed, significantly more than in the equivalent timeframe last year.

Half of the references to AI in the 2025 sessions related to mental health and wellbeing, encompassing the use of chatbots for support and AI therapy applications.

Jessica Anderson

A passionate gamer and tech reviewer with over a decade of experience in analyzing games and sharing insights to help others level up.