British Tech Companies and Child Safety Agencies to Test AI's Ability to Create Exploitation Content
Tech firms and child safety agencies will receive permission to assess whether AI tools can generate child abuse images under recently introduced UK legislation.
Significant Rise in AI-Generated Illegal Material
The declaration coincided with revelations from a safety monitoring body showing that cases of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
New Legal Structure
Under the amendments, the government will permit approved AI developers and child safety organizations to inspect AI models – the foundational technology for chatbots and image generators – and ensure they have adequate protective measures to prevent them from creating images of child sexual abuse.
"This is fundamentally about stopping abuse before it happens," stated the minister for AI and online safety, adding: "Under rigorous conditions, experts can now identify risks in AI models early."
Tackling Legal Obstacles
The changes have been introduced because creating and possessing CSAM is illegal, meaning AI developers and other parties cannot generate such images even as part of a testing process. Until now, authorities could only act after AI-generated CSAM had been published online.
This law aims to avert that problem by enabling authorised testers to stop the production of such material at its source.
Legal Structure
The amendments are being introduced by the authorities as modifications to the criminal justice legislation, which also implements a ban on possessing, creating or sharing AI models designed to create exploitative content.
Practical Consequences
Recently, the official toured the London base of a children's helpline and listened to a simulated call to counsellors featuring an account of AI-based exploitation. The call depicted an adolescent seeking help after being blackmailed with an explicit deepfake of themselves created using AI.
"When I hear about children facing blackmail online, it causes me extreme frustration and justified concern amongst parents," he said.
Alarming Statistics
A prominent online safety foundation stated that instances of AI-generated abuse content – such as online pages that may include numerous images – had more than doubled so far this year.
Instances of category A material – the gravest form of exploitation – increased from 2,621 visual files to 3,086.
- Girls were overwhelmingly targeted, accounting for 94% of prohibited AI depictions in 2025
- Depictions of infants to two-year-olds increased from five in 2024 to 92 in 2025
Sector Reaction
The law change could "constitute a crucial step to guarantee AI products are secure before they are released," stated the chief executive of the online safety foundation.
"AI tools have made it possible for victims to be victimised all over again with just a few simple actions, giving criminals the ability to produce potentially limitless quantities of advanced, photorealistic exploitative content," she added. "Material which further commodifies survivors' trauma and makes young people, particularly girls, less safe both online and offline."
Support Interaction Information
Childline also released details of counselling sessions in which AI was mentioned. AI-related risks raised in those conversations include:
- Using AI to rate weight, body and appearance
- AI assistants dissuading children from consulting trusted guardians about harm
- Being bullied online with AI-generated material
- Online extortion using AI-manipulated pictures
Between April and September this year, Childline conducted 367 counselling sessions in which AI, chatbots and related topics were discussed, four times as many as in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.