British Tech Firms and Child Safety Officials to Examine AI's Capability to Create Abuse Images
Tech firms and child protection organizations will be granted permission to assess whether artificial intelligence systems can produce child abuse images under new UK legislation.
Significant Increase in AI-Generated Illegal Content
The declaration coincided with findings from a safety monitoring body showing that reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
Updated Legal Structure
Under the changes, the authorities will allow approved AI companies and child safety groups to examine AI models – the foundational technology behind chatbots and image generators – and verify they have adequate safeguards to stop them producing depictions of child exploitation.
The changes are "ultimately about preventing abuse before it happens," declared Kanishka Narayan, who added: "Experts, under strict protocols, can now identify the danger in AI models promptly."
Tackling Legal Challenges
The amendments have been introduced because creating and possessing CSAM is against the law, meaning that AI developers and other parties cannot generate such content as part of a testing regime. Until now, authorities had to wait until AI-generated CSAM appeared online before acting against it.
This legislation aims to avert that problem by enabling approved testers to halt the creation of those images at source.
Legislative Structure
The authorities are introducing the changes as revisions to the criminal justice legislation, which also establishes a prohibition on possessing, producing or distributing AI models developed to generate child sexual abuse material.
Practical Consequences
Recently, the minister visited the London headquarters of a children's helpline and listened to a mock-up of a call to counsellors featuring a report of AI-based exploitation. The interaction depicted an adolescent seeking help after being blackmailed with a sexualised deepfake of himself, created using AI.
"When I learn about children experiencing blackmail online, it causes intense frustration in me and rightful anger amongst families," he stated.
Concerning Statistics
A leading internet monitoring foundation stated that instances of AI-generated abuse content – such as webpages that may include multiple images – had significantly increased so far this year.
- Cases of category A content – the gravest form of exploitation – increased from 2,621 images or videos to 3,086
- Girls were overwhelmingly victimised, accounting for 94% of prohibited AI images in 2025
- Depictions of infants and toddlers increased from five in 2024 to 92 in 2025
Sector Reaction
The legislative amendment could "represent a crucial step to guarantee AI products are secure before they are launched," stated the chief executive of the online safety foundation.
"Artificial intelligence systems have made it so victims can be targeted repeatedly with just a few clicks, providing offenders the capability to create potentially endless quantities of sophisticated, lifelike exploitative content," she added. "Material which additionally exploits survivors' suffering, and makes children, especially girls, more vulnerable both online and offline."
Support Interaction Data
The children's helpline also published details of counselling sessions in which AI was referenced. AI-related harms raised in the sessions include:
- Using AI to evaluate weight, body and looks
- Chatbots dissuading young people from consulting safe guardians about abuse
- Facing harassment online with AI-generated material
- Online extortion using AI-faked images
Between April and September this year, the helpline conducted 367 support sessions in which AI, chatbots and associated terms were mentioned, significantly more than in the same period last year.
Half of the AI references in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy applications.