The number of AI-generated child sexual abuse videos online has surged as predators exploit rapid advances in artificial intelligence.
According to the Internet Watch Foundation (IWF), such videos have reached a level of realism that makes them nearly indistinguishable from genuine footage, and cases have risen sharply this year.
In the first half of 2025 alone, the UK-based internet safety group identified 1,286 illegal AI-generated videos containing child sexual abuse material (CSAM), a staggering increase from just two during the same period in 2024.
More than 1,000 of these videos involved Category A abuse – the most extreme and graphic form of CSAM.
The IWF attributed this disturbing trend to the booming AI industry, which has seen billions invested in developing accessible video-generation tools now being repurposed by offenders. “There’s intense competition and massive investment in AI,” one IWF analyst noted, “so sadly, perpetrators have plenty of options.”
This spike in AI-made abuse material was part of a broader 400% increase in web addresses hosting such content: between January and June 2025, the IWF received reports of 210 URLs featuring AI-generated CSAM, up from 42 in the same period of 2024. Many of these pages contained hundreds of images, including a growing number of videos.
On dark web forums, predators openly discussed the rapid evolution of AI, with one user boasting about mastering one tool only to find “something newer and better” available shortly after.
IWF experts explained that many of the videos were produced by fine-tuning basic, open-source AI models on existing CSAM – in some cases, using only a handful of sample videos.
Worryingly, the most realistic AI abuse videos this year were created using images of real victims, the watchdog revealed.
Derek Ray-Hill, interim CEO of the IWF, warned that the increasing sophistication and availability of AI, combined with its ease of misuse, could spark an overwhelming surge in AI-generated CSAM. “There is a serious risk of an explosion of this content flooding the open internet,” he said, adding that this could fuel wider criminal networks involved in trafficking, abuse, and modern slavery.
By reusing the likenesses of existing victims, offenders can dramatically increase the volume of CSAM in circulation without abusing new children, he added.
In response, the UK government has moved to ban the creation, possession, and distribution of AI tools used to generate abuse content. Under new legislation, offenders could face up to five years in prison.
Possessing guides or manuals that teach people how to use AI to create such material, or to facilitate child abuse, will also be criminalised, carrying a sentence of up to three years in jail.
Home Secretary Yvette Cooper, announcing the new laws in February, stressed the importance of tackling child abuse in both the online and offline world.
AI-generated CSAM is already illegal in the UK under the Protection of Children Act 1978, which bans the creation and distribution of “indecent photographs or pseudo-photographs” of children.