
For most, AI image generation has been nothing more than a way to experiment with and marvel at a powerful new tool, creating images that are often pretty or amusing. Beneath it, however, is something sinister. The intersection of AI generation and sexual abuse is being exploited by offenders, who are using it to create and distribute malicious content. According to the UK National Police Lead for AI, the largest criminal use of AI is by offenders creating child sexual abuse (CSA) imagery. The Preventing Child Abuse Conference took place on the 24th of June, and one of the impressive speakers in attendance was Hannah Swirsky from the Internet Watch Foundation (IWF), who detailed how the organisation is devoted to stopping this harmful practice.
Hannah Swirsky is the Head of Policy and Public Affairs at the Internet Watch Foundation (IWF). Her keynote at The Preventing Child Abuse Conference was eye-opening. Amid the AI boom we have seen over the last few years, the generation of CSA imagery is something that has unfortunately flown under the radar.
Established in 1996, the IWF is a not-for-profit organisation whose mission is to eliminate child sexual abuse imagery online. It works in close partnership with the internet industry, law enforcement, and governments globally to detect, remove, and block illegal child sexual abuse material (CSAM) from the internet.
The Role of AI
Hannah started by acknowledging that, overall, AI has played a positive role in our lives. It has helped us to communicate and work more effectively, and the explosion of machine learning in the last few years has led to new and improved tools being released constantly. Yet over the same period AI has been utilised by bad actors to spread malicious content, which has been a real challenge for the IWF.
“We’ve seen a rapid development in how offenders are using AI to generate CSA. And according to the UK National Police Lead for AI, the largest criminal use of AI is by offenders who are creating CSA” – Hannah Swirsky
The IWF has stated that the worst of this is happening through text-to-image generators, where people input text prompts to create new imagery. It is worth noting that these criminals are less likely to be operating through mainstream AI tools, which have strict safety systems in place to block attempts to generate sexual or violent content; for the most part, these restrictions work well. It is through open-source models, which can be downloaded and run locally with far less regulation, that these images are being created. On dark web forums, criminals share ‘jailbroken’ models and even pre-trained generators that are ready and waiting to create harmful content. It can be as easy as finding the generator, and then, with a few words, criminals can begin creating CSA imagery.
In her keynote, Swirsky highlighted how photorealistic AI images have become by displaying three images in her presentation. Two were candid shots of children that could have been taken on a phone; one of them was AI-generated and, to the untrained eye, indistinguishable from the real photograph. Through this exercise, the audience was able to grasp just how real this threat is.

The IWF has seen this shift first-hand. In 2023, the organisation noticed a few AI images, but they were easy to spot because the technology was new and made a lot of mistakes. Now, only two years later, these images are photorealistic and in some instances almost impossible to distinguish from real ones. The IWF has also seen the emergence of AI videos as the technology has come on in leaps and bounds. Video footage will lead to even more harmful content being generated, something that Swirsky and the IWF are deeply concerned about.
A large portion of the material that IWF analysts review features known victims or famous children. Imagery of known victims is being used to train these models, which can then be adapted to depict other people or to create new content of the same victim. The IWF has also noted that this is a highly gendered operation, with about 98% of AI CSA content depicting girls.
Swirsky highlighted how important this fact is, especially when you take into account the government’s pledge to tackle violence against women and girls. The IWF believes that efforts to tackle this image generation need to be included in that pledge, as it is extremely upsetting, damaging and degrading. Just as we hosted this event in June, we are hosting the Tackling Violence Against Women and Girls Conference on the 25th of November. Committed change needs to happen, and by bringing together industry leaders and frontline professionals, we can make a difference.
The Use of AI in Sexual Extortion
A related issue is the use of AI images in sexual extortion. The IWF is aware of cases where offenders have obtained images of children and used AI to generate nude images of them; they then contact the child and threaten to leak the photo to classmates unless they are sent a sum of money. One of the hardest aspects of this is how difficult these offenders are to monitor and stop. This type of extortion is traumatising for children and can be extremely damaging to school communities, and many children will be afraid to approach their parents about it.
The IWF is actively campaigning to make it illegal to possess the digital models that generate CSA – meaning models that have had their safety protocols removed or that have been trained on CSAM. At the moment, possessing them is not an offence. Alongside this, Swirsky detailed how the Internet Watch Foundation wants to ban the use of ‘nudify’ apps in the UK. These apps are easy to find if you look: they can make it appear as though the clothing has been removed from anyone in a photo, and they can also deepfake people’s faces onto explicit images. While it is illegal to create AI-generated child sexual abuse material, there is currently no legislation in the UK that specifically addresses these harmful apps.
AI Chatbots/Personas
Another avenue being exploited is AI chatbots, where abusers are using AI to simulate sexual communications with a child: the chatbot imitates a child, mimicking the way a child would speak or act. Chatbots can even be trained to encourage, or provide instructions to, the offenders who use them. This activity gives offenders a way to rehearse grooming strategies without consequence. What’s more, it can reinforce and normalise predatory behaviour, making people more likely to attempt it with real children.
There is real evidence of AI chatbots negatively affecting people. In 2021, a man attempted to attack Queen Elizabeth II with a crossbow after being encouraged by a female “AI friend”. In another example, a teenager in California died by suicide after months of discussing suicide with ChatGPT; the chatbot allegedly helped him assess whether different methods would work. This makes it clear that chatbots need to be built with better guardrails around sensitive content and risky behaviour, whether they are being used by abusers or by vulnerable people.
Some AI companies have become members of the IWF and are working closely alongside it to ensure that their AI software cannot be used by bad actors. The IWF runs a keywords service, listing terms that offenders are known to use when creating this harmful content. This, alongside the IWF’s hash lists of known abuse imagery, shows that tagging harmful content and building stricter filters into these tools is a method that can work, but it needs more development and wider adoption. However, Swirsky acknowledged that the range of AI tools is so wide, and so much of it so unregulated, that there is still a lot of work to do.
Throughout this illuminating keynote, you could see just how passionate Hannah Swirsky and the Internet Watch Foundation are about protecting children and stopping online child sexual abuse. With AI, you never know what is just around the corner; the technology is highly experimental and evolving at an alarming rate. Authorities need to be able to keep up with the constant changes, otherwise it leaves room for people to be hurt and for abusers to benefit.
We’d like to thank Hannah Swirsky for her powerful session. We are committed to highlighting inspirational people and groups who are fighting to protect people from harm. Our upcoming Tackling Violence Against Women and Girls Conference in November will be another opportunity to hear from industry leaders and experts through keynote sessions and case studies.