
We recently hosted the Cyber and AI Preparedness Free Webinar, which brought together three incredible speakers from different industries. The webinar saw a lively discussion between Isabel Scavetta, a multi-award-winning cyber and diversity advocate; Matthew Forshaw, a senior advisor for skills at The Alan Turing Institute and AI researcher at Newcastle University; and James Tumbridge, an intellectual property and data regulation lawyer at Keystone Law who also chairs digital services for the City of London Corporation.
As AI rapidly reshapes every industry, the UK’s public sector faces a critical moment. How can public bodies embrace innovation without exposing themselves to new risks? AI offers incredible potential, but how do we ensure that the workforce is prepared for this revolution?
Together, they explored these questions, and the insights that follow are the core topics discussed during the event.
AI Integration: Balancing Opportunity and Risk
Matthew began by highlighting how AI is already embedded in many public systems, often invisibly. Automation has become part of daily operations, from document processing to citizen-facing chatbots. This embedding creates a new layer of complexity, and Matthew believes it is important that people become aware of the risks and be proactive in ensuring their own security and that the AI tools they use are also secure.
Indeed, it raises issues of data protection, bias, and accountability. The public sector, unlike private enterprise, carries unique legal and ethical duties. For example, if an AI model trained on data from Brighton is reused by another city, such as Manchester, it risks breaching equality law if it produces demographic bias.
James added that the City of London has responded by publishing a Generative AI Policy: a question-based guide that helps teams make responsible choices without banning innovation outright. He framed it as keeping “humans in the loop” – ensuring AI assists decision-makers rather than replacing them.
The pair agreed that understanding is the first step towards cybersecurity, and with new tools constantly being released and updated, that understanding can unfortunately be lost.
Employees don’t need to become data scientists, but they do need to know how and why their tools make decisions. Without that literacy, efficiency gains can turn into governance disasters.
The Changing Face of Cyber Risk
The Changing Face of Cyber Risk
James warned that “encryption and general security settings need to evolve to keep pace with AI, and standard settings will not be enough in future to have faith your system is secure.”
Attackers can now automate and scale intrusion attempts, exploiting systems faster than traditional defences can respond. James urged companies to ensure their defences evolve alongside the threats.
Isabel added that phishing has evolved dramatically: what used to be crude, typo-riddled emails are now polished, AI-crafted messages indistinguishable from legitimate communication.
And while deepfakes grab headlines, Matthew emphasised the subtler threat: AI’s role in scaling everyday fraud. “Those phishing emails can now be far more subtle,” he noted, because attackers can automate a lot of the processes. They can create realistic, personalised emails quickly that look professional and can even use information like when you started working at that company and the names of your immediate managers. They can get this information from platforms like LinkedIn.
Staying vigilant is therefore essential to protect yourself and your company from these new risks: be confident in the emails you respond to, look out for any inconsistencies, and when in doubt, double-check.
Both speakers linked these trends to a growing skills gap in public service cybersecurity teams, many of which are overstretched and under-resourced. As technology advances faster than awareness training, old “red flag” advice about spotting phishing or scams is quickly becoming obsolete.
Building a Digitally Confident Workforce
Perhaps the most urgent challenge discussed was the digital literacy gap across the public workforce.
Matthew cited research from the Alan Turing Institute and Lloyds Bank showing that half of UK adults lack essential digital skills, such as managing passwords or safely accessing online payslips. This gap in literacy and confidence can be exploited, leaving those affected more vulnerable.
“I think some of it is about the comfort level mentally with embracing technology and some of it does flow from how they started life and what their home environment was like.” – James Tumbridge
To close the gap, Matthew advocated for companies to look at training staff in these essential digital skills. He also stressed empathy: employees exist on a spectrum between overconfidence and scepticism. Training should help both groups find a balanced middle ground – understanding enough to embrace AI, but cautious enough to spot its limitations.
Understanding will lead to people being more confident and efficient, and it will also open them up to the potential risk factors in the modern digital world.
Inclusion and Opportunity: Who Gets to Shape the Future?
Isabel steered the conversation towards diversity and inclusion, an area in which she has extensive advocacy experience.
She highlighted how AI tools have made learning materials far easier to access: anyone who is curious can now explore and discuss complex topics. This is worth noting – it is easy to fixate on the dangers of AI, but it is equally important to celebrate the ways it can enrich our lives and simplify everyday practices.
Matthew reinforced that inclusivity must also extend to how systems are designed. From his own experience, a company that installed an AI note-taker ran into issues:
“It was only post-deployment that it was realised there were significant issues with minority groups and regional accents where individuals from those groups were incorrectly transcribed within meeting notes.” – Matthew Forshaw
Because the developers had not tested widely enough, the tool introduced bias – an avoidable failure of design diversity and quality assurance. Errors like this muddy communication between teams while alienating certain groups. AI tools need to be inclusive and accessible to all; otherwise they create divides within teams and leave individuals feeling excluded.
James noted from his experience working with schools and charities that digital illiteracy and a lack of confidence can stem from economic disadvantage as much as from diversity. Just as some children naturally find maths easier than English, some people take to computers more readily – and much of that difference may come down to how much exposure to technology and the digital world they have had throughout their lives. Addressing financial inequalities in education is therefore an excellent way to bridge this confidence gap.
This prompted a key takeaway: AI must be tested by, and for, the full spectrum of its users. Otherwise, it risks amplifying the inequalities it promises to solve.
Governance and the Art of “Future-Proofing”
When asked how to future-proof AI strategies, James said that “the pedant lawyer in me has to start by defining ‘strategy’”, but in his opinion what really matters are the policies and procedures underneath. These must be flexible enough to adapt as technology evolves.
His advice: create short, accessible guidance built around critical thinking rather than rigid rules. The City of London’s six-page AI policy, for example, poses reflective questions rather than prescribing hard limits.
That adaptability is what makes it resilient. Innovation through AI tools has the potential to rewrite business practice and transform workflows. Being too strict with policy and procedures would undermine this. Ensuring there is a healthy balance between signposted guidance and room to experiment is key in creating meaningful adaptations and innovations.
This free webinar was held as an introduction to what to expect from The Public Sector Cyber Security Conference 2026. Held at the prestigious Queen Elizabeth II Centre in Westminster, London, this free-to-attend conference is set to be our biggest event yet. Find out more about the impressive line-up of speakers and sponsors here.
Responsible Use Starts with Understanding
Many AI systems will tell you what you want to hear in order to impress you and keep you on their platform, and because machine-learning systems are constantly being refined, this behaviour is only becoming more convincing. It also means that even when an AI doesn’t know the answer to your question, it will still give you the most plausible answer it can produce.
For Isabel, this highlights the need for product and data literacy across all levels of government. Employees must understand not just what a tool does, but why it exists. Free AI tools aren’t free; they’re often funded by harvesting user data. When staff paste sensitive information into unregulated platforms, they may be unknowingly training external models with government data.
Matthew proposed using competency frameworks to map where AI upskilling will have the greatest impact. This allows departments to invest strategically – developing both technical and ethical expertise. It also creates a shared language between universities, employers, and policymakers.
Many employees may not grasp the danger of inputting sensitive data into AI tools: that data could become accessible to cybercriminals, who can exploit it in ransomware attacks, and it could also lead to breaches of GDPR or the leaking of people’s personal information.
The Double-Edged Sword: AI as Protector and Attacker
Toward the end of the session, James flipped the narrative: while AI can make business more secure through processes like automated system monitoring and threat detection, it can also empower attackers.
The cost of cybercrime is dropping because AI tools make complex attacks easier to launch. Indeed, in a world where AI is constantly learning and improving, it almost becomes an arms race, with each side growing more responsive and constantly finding new exploits.
He urged leaders not to draw an artificial line between digital and physical security.
“People put QR codes on public payment meters for parking, and then all of a sudden the local council’s not receiving money, people are paying, but the bad guys are collecting your parking payments. It requires a physical element.” – James Tumbridge
A physical element in that example would be someone from the council routinely checking that the QR codes haven’t been replaced. In a business setting, it might be ensuring that whilst AI is routinely monitoring your systems, you incorporate a human check every now and again.
Education, Data, and Healthcare: The Next Frontiers
During the audience Q&A, three domains dominated: education, data usage, and healthcare.
In education, Matthew discussed universities’ struggle to adapt assessments in the age of generative AI. Should they ban its use or embrace it as a tool? Either way, fairness issues arise – particularly if wealthier students can afford premium tiers of AI tools like ChatGPT.
Healthcare remains the most promising but complex application area. The NHS holds one of the world’s most comprehensive sets of health records, covering the UK population from birth to death. James noted that this cradle-to-grave data offers “gold dust” for medical research, but anonymising and managing it safely remains difficult. He warned that misunderstanding the distinction between medical research consent and data protection consent can create legal risk.
Despite these challenges, the speakers agreed that AI-driven healthcare could deliver immense public value – if governed with transparency and human oversight.
Key Takeaways: Human in the Loop
By the end, the panel’s central message was clear: AI is neither saviour nor villain – it’s a tool that must be used with understanding.
Across cybersecurity, workforce planning, inclusion, and governance, the “human in the loop” remains essential. The public sector’s success won’t depend on how much AI it adopts, but on how wisely it integrates it into human systems of trust, accountability, and service.
As Isabel concluded, the journey from awareness to implementation continues at the Public Sector Cyber Security Conference in February 2026. The sector has much to gain – but only if it leads with a strong mixture of curiosity and caution.
We’d like to thank Isabel Scavetta, Matthew Forshaw and James Tumbridge for their incredibly engaging session.
The Public Sector Cyber Security Conference returns next February; register your interest to hear from industry leaders across healthcare, local government and more. Find out more here.