At our recent VIP roundtable event in Munich, hosted by Florian Felsenreich (Executive Search Director Europe at Annapurna) and James Culff (Cyber-Security Practice Lead EMEA at Annapurna), we brought together 15 senior security leaders to discuss one pressing topic: ‘End-to-End Security of AI Integration’.
The discussion was practical and built on real experiences, rather than theoretical thinking. The room was focused on what is actually happening inside organisations today and where security leaders need to be paying attention as AI adoption becomes more common.
The conversation led to three key takeaways:
AI Attacks: More Evolution Than Revolution
As AI becomes more advanced and accessible, there’s a growing narrative that it will dramatically reshape the cyber threat landscape. However, the leaders in the room did not agree with this way of thinking as they’re yet to see the technology create a wave of entirely new AI-driven attacks.
Instead, the leaders in the room highlighted that the only real-time effect they’ve seen AI have on cyber threats so far is the speed at which they’re performed.
Yes, attackers may now operate faster. Yes, phishing messages are becoming more convincing. But these are improvements in speed and scale, not fundamentally new types of threats.
The more immediate and realistic concern discussed was prompt-based manipulation of large-language models (LLMs).
Unlike traditional hacking attempts, these attacks do not always involve breaking into systems. Instead, they involve carefully crafted text inputs to manipulate an AI system into revealing information or performing unintended actions.
The group agreed that this is where organisations need to focus their attention today, rather than fearing artificial general intelligence (AGI). Deep dive into how you’re managing your AI systems response to inputs right now.
The overarching theme was that the threat is not that AI becomes autonomous and hostile, but instead that it gets to a point where it behaves unpredictably when poorly governed.
AI Needs To Be Treated Like Any Other Software
Another strong theme throughout the discussion was that AI systems must be treated like any other enterprise software product.
They still require ownership. security reviews, testing, monitoring, and clear accountability with remediation plans in place. However, AI introduces additional complexities from the ‘everyday’ software platforms currently well-used in organisations, and that is that AI does not always act in predictable ways.
Due to this, the group stressed the importance of sandbox environments. AI tools should be tested thoroughly in controlled spaces before being deployed into live workflows or connected to sensitive data.
Early experimentation without guardrails is the biggest risk to our cyber networks.
The conversation also moved beyond the technical layer to governance at the highest level. Several attendees made it clear that AI implementation is not simply an IT or security decision. It is a strategic business decision with reputational, operational, and regulatory implications. That means visibility at the board level is essential to achieve success in implementing and securing AI.
Works Council’s Must be Involved Early
With all of our attendees operating in European environments, an interesting insight was uncovered that perhaps is less well-known in the UK.
Our attendees emphasised the need for work councils to be involved in the integration of AI immediately, not as a last step.
When works councils were involved early in AI discussions, implementation moved smoothly, whereas those who left it till last hit hurdles.
This is because there’s an awful lot of legalities involved in the implementation of AI, and with regulations being stricter in the EU, having the work councils check over the compliance of said implementation immediately prevents any wrong steps being taken, resulting in a lot of unwinding later on.
At the same time, however, there is tension with adding works councils in early as organisations do not want security processes to move slowly. Competitiveness matters and moving too slowly can be risky.
That said, our attendees acknowledged the frustration of having legal involved early on and slowing processes down, but when governance is built in properly from the beginning, speed takes care of itself as it means less wrongdoing and therefore naturally quicker than being added as a corrective matter.
Trusted Responsible Search
If you’re looking to hire for your cybersecurity team, but are unsure where or how to find the best talent available, reach out to James Culff for a no-obligation chat.
Equally, if you’re an experienced cybersecurity professional and wondering if you’re ready for a new challenge, reach out to James to collaboratively explore your options.