How Swiss companies can leverage AI to improve cyber resilience

Today’s cyber threat landscape is more complex and urgent than ever. As emerging technologies like artificial intelligence (AI) create positive impact for businesses, cyber criminals and bad actors are employing those same technologies to launch more sophisticated attacks. AI’s role on both sides of the cyber security struggle is evolving rapidly.

Companies that fail to explore AI’s potential to bolster their cyber security measures may find themselves more vulnerable, and less resilient when incidents do occur. But as new regulatory measures emerge and AI adoption brings new and greater risks, how can those who embrace it do so responsibly?

In this article, we will explore the cyber risks AI can create, the cyber security advantages it can offer, and how Swiss companies in particular can leverage AI to improve resilience.

How AI complicates cyber security

AI is not merely a buzzword; it has become a tool for both cyber criminals and defenders. On one hand, malicious actors leverage AI to enhance their attacks, employing it in sophisticated phishing campaigns and targeted assaults that are increasingly difficult to detect and combat. These AI-driven attacks can dynamically adapt to countermeasures, making traditional security protocols less effective. They are also often far more targeted, leveraging information about specific individuals that can be used to entice, trick, and even imitate them.

Moreover, as companies adopt AI technologies internally, they inadvertently expand their attack surface and, with it, their cyber threat landscape. The AI tools themselves can also become vulnerable through flawed algorithms and models. This added complexity makes it difficult for traditional cyber security measures to address AI-powered technology.

Regulatory challenges around AI and cyber security

The regulatory environment surrounding AI is also evolving rapidly, compelling organisations to develop internal expertise and operational processes to ensure compliance. As it relates to cyber security, AI usage must be evaluated through the lens of regulations such as DORA in the EU or FISMA in the US, especially where resilience measures are concerned.

AI-specific legislation is also on the rise. The EU AI Act and the AI Liability Directive are two European examples of legislators attempting to address the growing risks AI presents, emphasising the importance of transparency, accountability and data protection in AI-powered applications.

But it’s not just AI-specific regulation that affects the use of AI; data protection and data security rules apply as well. The GDPR, the Swiss FADP, and most other data protection regulations predate the wide adoption of AI, meaning they must be interpreted and applied to how data is used in AI systems until newer, more AI-relevant regulations take their place.

Whilst much of this regulation is specific to the EU or other markets, Swiss companies should closely monitor its implications. EU regulation in particular often sets the regulatory standard globally, and the measures introduced in the acts above generally do improve resilience and operational excellence in companies that adopt them proactively.

AI as a cyber security ally

Despite external threats and regulatory challenges, AI presents significant opportunities for enhancing cyber security measures. One of its primary advantages is its ability to support Security Operations Centres (SOCs) with threat analysis, monitoring and incident response. AI-powered technology can often handle monitoring and log management more efficiently and thoroughly than cyber security and IT teams alone: AI systems can analyse vast amounts of data in real time, identifying anomalies and potential threats that may go unnoticed by human operators.
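To make the idea of automated anomaly detection concrete, the sketch below shows the simplest possible version of what such systems do: learn what “normal” looks like from historical log data, then flag deviations. Real SOC tooling uses far richer models; the metric (failed logins per minute), the numbers, and the threshold here are all hypothetical illustrations.

```python
# Illustrative sketch only: a statistical baseline for log anomaly
# detection. Learns "normal" behaviour from history, flags outliers.
from statistics import mean, stdev

def build_baseline(history):
    """Learn the mean and standard deviation of a metric from past logs."""
    return mean(history), stdev(history)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

# Hypothetical history: failed-login counts per minute on a quiet system
history = [2, 3, 1, 2, 4, 3, 2, 1, 3, 2]
baseline = build_baseline(history)

print(is_anomalous(3, baseline))    # typical minute -> not flagged
print(is_anomalous(250, baseline))  # sudden burst -> flagged
```

Production systems replace the simple threshold with machine learning models and many correlated signals, but the principle – baseline, then deviation – is the same.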

Additionally, AI can help streamline third-party risk management (TPRM) and procurement processes. By automating evaluations, companies can reduce the time and resources needed to ensure compliance with both internal and regulatory standards. AI can also help bridge the gap in internal expertise regarding new regulations and overall compliance operations, facilitating smoother transitions to meet evolving legal requirements.

Finally, just as AI is often leveraged by cyber criminals to generate more sophisticated and targeted attacks, AI and machine learning (ML) can also be used to enhance red team exercises and cyber incident simulations in order to identify vulnerabilities and improve resilience.


AI-driven cyber resilience empowers businesses to stay ahead of threats, ensuring robust protection and swift response.

Dieter Künzli, Executive Director

Practical advice for implementing AI in cyber security

For Swiss companies looking to adopt AI for cyber security, here are three pieces of practical advice:

1. Focus on critical areas

Deploy AI solutions in areas where scaling human resources is particularly challenging, such as monitoring, log management, and procurement evaluation. This allows for more efficient resource allocation and enhanced security measures where they are most needed.

This also means implementing AI solutions only where they are strictly necessary and will demonstrably improve outcomes. Use a risk-based approach to determine where the value outweighs the risk AI introduces, and where it does not.

2. Educate the workforce about AI

As the workforce adopts AI for various purposes, ensure AI literacy is promoted at all levels. This means making sure employees can critically engage with AI tools and products in order to identify, evaluate and improve their output. The ability to sense-check AI outputs and strategies ensures that decisions are based on accurate interpretations and sound methodologies.

This includes the board: remember that the board will need to make decisions on both cyber security and AI, so it should be well and continuously informed on both.

3. Prioritise retention of relevant expertise

In order to effectively evaluate, regulate, and manage AI and cyber security technology, businesses will need access to relevant expertise. This includes knowledge of emerging technologies and methodologies, comprehensive understanding of new and existing regulations, and a practical understanding of how to implement and manage these technologies within the organisation.

If retaining that expertise internally is not practical (and for most businesses it won’t be), businesses can work hand in hand with advisory partners who do retain specialists in these areas. A strong advisory partnership can help fill any knowledge gaps, providing a comprehensive approach to integrating AI into cyber security frameworks.

Embracing AI is an inevitability, but it must be done intelligently

As Swiss companies navigate the complexities of the cyber threat landscape, the integration of AI into their cyber security strategies will be paramount. Whilst AI introduces new challenges and regulatory pressures, it can also help improve resilience against ever-evolving threats. By understanding the dual role AI plays in cyber security – both as a potential risk and a valuable ally – companies can better prepare themselves for the future. Embracing AI is not just a matter of keeping pace with technological advancements; it is essential for ensuring resilience in an increasingly perilous digital environment.
