Rise in AI-generated child sexual imagery reports 20-Feb 19:41

What happened this week

Spain stepped up action against the spread of AI-generated child sexual imagery, ordering prosecutors to investigate X, Meta and TikTok after a technical report warned that the platforms’ algorithms may be amplifying illegal material. The move came as Ireland’s Data Protection Commission opened a separate probe into X’s AI chatbot Grok over its potential to generate sexualised images, including of minors.

The orders follow the release of new data from the Internet Watch Foundation (IWF) last month, which warned that fast-improving AI tools are enabling offenders to create highly realistic abuse imagery at scale.

What the numbers show

Actionable reports of AI-generated child sexual abuse imagery (global cases confirmed as illegal and serious enough to prompt a takedown or referral to authorities) more than doubled over the past two years, according to the IWF.

The increase suggests not only that more synthetic imagery is surfacing online, but also that more cases now require intervention, increasing pressure on platforms and regulators.

The IWF reported 3,440 AI-generated child sexual abuse videos in 2025, up from just 13 in 2024. Analysts also flagged a rise in overall confirmed abuse content, to 312,030 reports in 2024.

UNICEF, meanwhile, highlighted in a recent report that at least 1.2 million children across 11 countries have had their images manipulated into explicit deepfakes in the past year.

What's next

Regulation is moving quickly. The UK plans to outlaw AI tools used to create such content, while international agencies such as UNICEF and Childlight continue to press for tighter laws, stronger detection systems and improved safety-by-design standards to address what they describe as an escalating threat.