
Technology changes rapidly, making it essential that we stay on top of online safety to keep the internet a safe and positive place for children. We recently hosted the Online Safety Conference 2026, and it was a massive success, with over 100 attendees and a line-up of brilliant speakers. One of these was Will Gardner OBE, CEO of Childnet International, where he has 26 years of experience, and co-director of the UK Safer Internet Centre. Will delivered an insightful keynote about protecting children and young people from online harm, with a specific focus on children and Artificial Intelligence (AI). We’ve recapped his session below, as we believe there’s a lot of value in it.
Childnet is a children’s charity that has just turned 30 years old. Their mission is to work with others to help make the internet a great and safe place for children. They work on the ground in schools and other settings with children, as well as with parents, carers and professionals who work with children, giving young people the information and skills they need to use technology safely and responsibly, look after themselves, and look after others. Will started his session by quoting a 14-year-old who gave a speech at one of Childnet’s previous Safer Internet Day events.
“Every generation’s experience of youth is different, and ours is framed by social media”
Will believes that it is crucial to include youth voice when talking about the internet, whether thinking about education or policy, as it is the young people who are growing up with these technologies, and their experience is not the same as ours. Their voice can help us do our job better.
Childnet’s Safer Internet Day
Childnet organises Safer Internet Day every year on the second Tuesday of February. Will told attendees to mark their calendars for the 9th of February 2027 and to get involved next year, as the event brings together organisations from all sectors to amplify its messages. The day works closely with schools, and the BBC is directly involved, contributing a wealth of resources.
“As well as schools, we’ve got tech companies, government departments, police services, charities, teaching unions and football clubs getting involved. You know, online safety is relevant to everybody, and we try to mobilise as many organisations as we can to help support that reinforcement of messaging and those conversations in schools, with children, and at home with children.” – Will Gardner
The theme of Safer Internet Day this year was “Smart Tech, Safe Choices: Exploring the Safe and Responsible Use of AI”. Will’s pride in the event was apparent, especially when he highlighted that 60% of the schools involved said the day led to disclosures of potential online safeguarding issues from their students. The day clearly works, giving young people the confidence and understanding to raise potential issues. Creating environments where young people feel comfortable seeking support is really important, and it’s clear that Safer Internet Day does this. He also shared that Childnet will soon be releasing data on the reach and impact of the day amongst children, parents and carers – keep an eye on their pages.
For this year’s event, Childnet conducted a research report to explore key questions: How are young people interacting with AI? How often? What tools are they using? What do they like about it? What do they use it for that’s positive? What are they concerned about?
How Children are Using AI
The research surveyed young people aged 8-17 years old. It covered AI in the broadest sense, including generative AI like ChatGPT or Google Gemini, but also voice assistants like Siri on Apple devices or Alexa on Amazon speakers. Here are some of the key statistics:

Overall, the report found that 97% of young people aged 8-17 have used AI in some form. This perhaps reflects just how deeply AI is now integrated into online services and platforms: you can see Gemini results when doing a Google search, and it is embedded into social media, with My AI on Snapchat for example, as well as messaging apps like WhatsApp utilising Meta AI.
“From some of our focus group work with children we heard some voices saying it’s harder to go online without using it… We need to make sure that we are ready to talk to young people about this as a topic.” – Will Gardner
The research showed that 80% of children have seen AI being used positively, and 74% have learnt something new from AI. More than half said it makes their life better, and 48% said it is an important part of their everyday life. It has numerous uses, and the focus group work shows that AI also sometimes features in young people’s social interactions.
“People use it when they need advice on stuff – like if they don’t know how to write a message back to their friends if they have had a conflict in a friend group. Lots of people use it to help them word things or approach difficult situations where people are arguing” – Girl aged 18
AI as a Tool for Learning
The main use of AI by children is for studying. 73% of the young people surveyed believed that AI is a helpful tool for studying, whether that involves further research into areas covered in school, satisfying their own curiosity, or asking AI to go over topics they might not feel confident in yet. Conversing with an AI chatbot can be far more engaging than conventional research, as you can ask specific questions and have a back-and-forth conversation. Another great element is how accessible AI is: you can use most of these tools for free. One thing to bear in mind when using AI to learn or study is that it is not always accurate and can make mistakes – something ChatGPT itself declares front and centre beside its prompt box. It is important that children and young people understand this and verify any new information they get from AI tools with their own knowledge or research.
One rising risk regarding AI and education is that young people have begun to use AI to do schoolwork for them, mostly homework tasks. This is obviously a big challenge, and schools need to adapt, as homework is very important in helping students absorb information. Also, 53% of children are worried that their school may think they have used AI for their work when they haven’t, with young people concerned that their work might appear similar to an AI response. Young people also had concerns about the impact of AI on their own capabilities. If you are going to write a report or essay, it is so easy to ask AI for ideas on what to discuss that you end up not using the part of your brain that would explore the topic, come up with your own points, and think creatively. This is known as cognitive offloading, and the research Will described found that many young people were themselves worried about the impact of AI on their ability to research and think creatively.
Companionship
One area that Childnet’s research covered was how AI is being used for companionship. 64% of young people said that AI is the best way to get advice quickly, and 14% of those surveyed were using AI to talk about things they didn’t feel able to talk to anyone else about. You can imagine how AI can feel like a source of advice that is easy to approach and non-judgmental. But there are risks when the technology has not been designed for this, and there have been instances where advice from AI has caused harm. Although 41% of 8-17 year-olds think that AI can be a great source of emotional support, Will also shared that the research found that:
“45% of young people are worried that people their age are getting really close to AI like it’s a friend. And there’s been a lot of discussion about companion apps and apps giving medical, mental health advice, and fulfilling a function that it’s not designed for… The responses it gives are more like what it thinks you want to hear rather than what you need to hear in particular situations.” – Will Gardner
AI Image Manipulation
A major concern for children, and also for parents, was image manipulation using AI. 60% of children are worried about someone using AI to make an inappropriate picture of them, and just over 12% of teens and 14% of 8-12 year-olds have seen someone their age using AI in this way. This is something we need to be aware of. Later in his keynote, Will mentioned guidance currently being developed that will advise education settings not to include children’s faces in photos on their school websites or social media.
How Children are Learning about AI
Will shared with the audience some key resources for having these important conversations about AI and its safe and responsible use. The tools he highlighted are free, were developed for Safer Internet Day, are appropriate for use at any time of year, and are aimed at educators working with children of all ages, from 3 to 18. For 3-7 year-olds, many of the resources focus on helping young children understand that technology is distinctly different from the real world, and on building healthy relationships with computers and AI.
For ages 7-11, the learning becomes more focused on what is and isn’t okay with technology. Will shared examples: it is okay to use AI to tell you a fact about your favourite animal, but not okay to use it to tell a mean joke about your friend. He also shared an excerpt from a true-or-false game that could be played in class, where the question was whether an AI chatbot has emotions.
For 11-14 year-olds, AI adoption is explored more deeply, so young people can identify harmful uses of AI. Using real-world examples, this age group begins to understand how AI, and technology in general, can harm themselves and the people around them if used incorrectly. In the classroom, this enables discussion of these topics, widening understanding amongst children.
For 14-18 year-olds, the issues become more theoretical, with discussions surrounding the ethics of AI, exploring biases, dependency, creativity and inspiration. With AI being such a contested topic, this can lead to some really valuable and interesting conversations between young people, letting them navigate the issues themselves with the guidance of a teacher and learn the pros and cons of tools that are becoming so commonplace.
These resources are aimed at both classrooms and the home, with Will encouraging parents to take these discussions home with them, so that both parent and child can learn about AI tools and create an environment at home where children feel comfortable discussing technology and how it is affecting them, both positively and negatively.
“Ensuring that you are aware of the ways in which the young people you know are using this software can go a long way in helping make it easier for them to come to you if they have a concern or a problem.” – Will Gardner
He further highlighted that there must be no blame culture here. This technology is so new that people of all generations are still coming to terms with it, and there is an opportunity for all of us to share and support each other in staying safe, learning how to use this technology for good, and making it safe for all to use.
We’d like to thank Will Gardner for his incredibly interesting and moving session. AI is becoming ingrained in our daily lives, and it’s great to see more awareness of both the good and the bad aspects of the technology. It would be easy to look at the more extreme negatives of AI and decide that it is wholly bad, but that is not the case. AI can be extremely useful in daily life, automating tasks and helping with learning, and in more advanced settings it is making strides in fields like healthcare, transport, and science. Moving forward, the key takeaway is the importance of staying aware, as adults, of the ways in which young people are using AI.