Articulate the Vision, Grow the Team, Deliver the Product

Reflections from the CW Agentic AI and Security Event

Recently (11th Nov 2025), I had the opportunity to attend the CW Agentic AI and Security event, kindly hosted by CGI in the iconic “Walkie Talkie” building in the City of London, UK. The event’s theme—securing our increasingly AI-driven digital infrastructure—could not be more relevant. As our societies become ever more reliant on digital systems, the challenge of protecting them from cyber threats grows in both complexity and urgency. The integration of AI into these systems introduces a new and rapidly evolving set of risks.

The Expanding Attack Surface

One of the central messages from the event was that the "attack surface"—the range of potential vulnerabilities—continues to expand. The addition of AI technologies only accelerates this trend, creating new categories of threat that we must find ways to mitigate. The event's speakers, expertly curated by the Security, Identity, Privacy and Trust SIG Champions, each brought a unique perspective to this critical topic.

Expert Insights on AI Security

Dr Madeline Cheah (谢涵馨) from Cambridge Consultants provided a comprehensive overview of threats both to and from AI systems. She highlighted AI-specific attack vectors such as data poisoning and the introduction of backdoors during training. She also discussed autonomy and agency in AI systems, examining the risks that emerge as these systems gain greater agency—including hallucinations and deceptive behaviours—as well as the new risks that arise when AI is embodied in physical products.

Simon Thompson offered a contrasting viewpoint, likening AI agents to "Stuart Little": capable, but, for now, requiring significant guidance and oversight to deliver meaningful outcomes.

Colin Selfridge from CGI approached the topic from a trust and risk management angle, advocating for the application of established best practices such as adversarial testing, zero trust frameworks, and the integration of Governance, Risk, and Compliance (GRC) into AI security architectures.

Finally, Jonathon Wright from Eggplant (part of Keysight Technologies) provided perspective on the latest strategic technology trends in AI security, referencing Gartner research on innovations that can drive resilience and trust in an AI-powered, hyperconnected world. His live demonstration of multi-agent systems conducting security tests on the Cambridge Wireless website, using a Vibe Engineering Lifecycle, was a particular highlight.

Looking Ahead

The event provided valuable insights into how the nature of security threats is evolving alongside AI adoption. While the risks are significant—and the prospect of an unmanageable attack surface is a real concern—the discussions also made clear that, as we continue to integrate AI into our digital infrastructure, there is both a pressing need and enormous value in further and ongoing innovation in this space.
