UK Technology Firms and Child Protection Agencies to Test AI's Ability to Generate Abuse Images
Technology companies and child safety agencies will receive authority to assess whether artificial intelligence systems can produce child abuse images under new British laws.
Significant Increase in AI-Generated Illegal Content
The announcement coincided with figures from a safety watchdog showing that reports of AI-generated child sexual abuse material (CSAM) have risen sharply in the past year, from 199 in 2024 to 426 in 2025.
Updated Regulatory Framework
Under the changes, designated AI companies and child protection groups will be authorised to examine AI models – the technology underlying chatbots and image generators – and verify that they have adequate safeguards in place to prevent them from creating images of child exploitation.
"Fundamentally about preventing abuse before it occurs," stated the minister for AI and online safety, noting: "Experts, under rigorous conditions, can now identify the risk in AI models early."
Addressing Regulatory Obstacles
The changes have been introduced because it is illegal to create and possess CSAM, meaning that AI developers and other parties cannot generate such content even as part of an evaluation process. Until now, authorities have had to wait until AI-generated CSAM appeared online before dealing with it.
The law is designed to prevent that problem by making it possible to stop the creation of such images at source.
Legal Structure
The government is introducing the amendments to criminal justice legislation that also prohibits possessing, creating or distributing AI systems designed to produce child sexual abuse material.
Practical Consequences
Recently, the minister visited the London headquarters of Childline and heard a simulated call to counsellors featuring an account of AI-enabled exploitation. The call depicted a teenager seeking help after being blackmailed with a sexually explicit deepfake of himself, created using AI.
"When I learn about young people facing blackmail online, it is a source of extreme frustration in me and justified anger amongst parents," he stated.
Alarming Statistics
A prominent online safety organization reported that instances of AI-generated exploitation content – each of which may be a webpage containing multiple files – had more than doubled so far this year.
Cases of category A content – the most serious form of abuse – rose from 2,621 images and videos to 3,086 over the same period.
- Female children were predominantly targeted, making up 94% of illegal AI images in 2025
- Portrayals of newborns to toddlers rose from five in 2024 to 92 in 2025
Industry Reaction
The law change could "represent a crucial step to guarantee AI tools are safe before they are launched," commented the chief executive of the online safety foundation.
"AI tools have enabled so survivors can be targeted repeatedly with just a few clicks, giving criminals the ability to create possibly endless quantities of sophisticated, lifelike child sexual abuse material," she added. "Content which additionally exploits survivors' trauma, and makes young people, particularly girls, less safe on and off line."
Support Session Data
The children's helpline also published data from counselling sessions in which AI was mentioned. AI-related risks raised in those sessions include:
- Using AI to assess body size and appearance
- AI assistants dissuading children from talking to safe adults about harm
- Being bullied online with AI-generated material
- Online blackmail using AI-faked images
Between April and September this year, the helpline delivered 367 counselling sessions in which AI, conversational AI and related terms were mentioned – four times as many as in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and turning to AI therapy apps.