AI is changing the risk landscape – why banks need to rethink their security models now

ByYannick Hänggi,Dr. David Bauder,Dr. Oliver Laitenberger
Time to read: 7 minutesTechnology Transformation, Article
Overview

In many companies, artificial intelligence (AI) is primarily viewed as a productivity driver: faster analyses, better process support, higher quality in knowledge work, and greater scalability in development, service, and sales. However, this perspective alone falls short. AI does more than just boost efficiency. It fundamentally changes how data flows, how decisions are made, and how control can be exercised. As a result, AI not only opens up new opportunities but also places new demands on control mechanisms and security architectures.

A recent incident involving an internal AI system brings this dynamic into focus: In March 2026, McKinsey confirmed a vulnerability in its internal AI tool, Lilli. Although the vulnerability was quickly addressed, not all incidents receive as much public attention as, for example, the source code leak at Anthropic. What is problematic in both cases, however, is the pattern arising from the unfortunate interplay of sensitive knowledge assets and a technically vulnerable AI environment, in which data access, system logic, and interactions are closely intertwined.

The good news: Traditional cybersecurity principles remain relevant — but they must be further developed in a more targeted and consistent manner.

AI, however, cannot be reduced to a mere issue of technology or efficiency. It represents a fundamental shift in the way organizations process information, prepare decisions, and exercise control. As a result, the focus is shifting from purely securing systems to ensuring transparency, controllability, and the ability to make decisions at high speed.

This presents a clear opportunity, particularly for banks but also for all other companies: to design security architectures that are not only more robust but also more intelligent, thereby keeping pace with the speed and complexity of AI.

AI alters risks on two levels

The impact of AI on a company’s risk profile can be clearly described in terms of two levels – and this is precisely where the starting point for effective control lies:

Firstly, AI fundamentally changes the nature of attacks. AI increases the speed of attacks and enables them to be scaled systematically. The Open Worldwide Application Security Project (OWASP) describes this dynamic specifically for LLM applications: Manipulated inputs, inadequately verified outputs and new interaction patterns create attack surfaces that are difficult to detect using traditional attack pattern recognition methods. An attacker can use generative AI to systematically identify vulnerabilities in open API documentation and to automatically vary test attacks. What used to be sporadic and manual is now parallelized and continuously optimized. 

The threat is therefore no longer evolving linearly, but exponentially. The current debate surrounding Anthropic’s Mythos model underscores the acceleration of this development: Mythos was classified as too dangerous for the general public because it can autonomously discover and exploit software vulnerabilities.

Secondly, AI influences risks from an internal perspective. By using AI systems, organizations create new points of vulnerability that, in the event of a breach, could allow access to extensive and sensitive datasets. RAG architectures, copilots and AI-powered knowledge systems, in particular, consolidate internal documents, context of conversations, metadata, authorization logic and model interactions within a single environment. If access segregation is inadequately implemented, a single prompt can merge information that was previously organizationally separate. 

This creates a new integrity risk: anyone who influences system prompts, retrieval logic or model interactions can not only extract information but also distort the quality and reliability of responses. It is not only access to data that is critical, but also the manipulation of system logic and decision-making criteria. The European Union Agency for Cybersecurity (ENISA) and the National Institute of Standards and Technology (NIST) specifically highlight this combination of traditional cyber risks and AI-specific vulnerabilities.

The real challenge: ensuring controllability

Many organizations are currently investing heavily in AI – often at a faster pace than transparency and controllability can be established. This is despite clear regulatory requirements set out in the AI Act and DORA. In practice, it often turns out that, in the business unit, a SaaS-based assistant is being used, an external transcription service is connected in parallel, and model APIs are also being utilized – without there being a central view of which customer data is actually flowing where.

Traditional governance and security approaches are implicitly based on an assumption of stability: systems are clearly delineated, data flows are traceable, and controls can be designed ex ante. AI undermines these assumptions. Inputs, contexts, retrieval components, model layers and external APIs generate dynamic interrelationships. Control can therefore no longer be achieved solely through static policies or downstream checks but must be understood as a continuous steering capability.

A resilient approach to AI governance and security

A robust approach is achieved through the consistent combination of existing standards. NIST, ENISA and OWASP provide complementary building blocks for this – ranging from risk structuring and security architecture to specific attack scenarios. NIST structures AI risk management around the functions of Govern, Map, Measure and Manage. ENISA complements this with a multi-layered view of cybersecurity for AI through a scalable framework covering all AI stakeholders. OWASP identifies typical attack patterns in LLM applications. For banks and financial service providers, it is also relevant that the European Banking Authority (EBA) and the Bank for International Settlements (BIS) rightly emphasize the connection between AI, third-party dependencies, operational resilience and governance.

From this, a more practical target vision with clear implications can be derived: AI governance must be judged by whether it creates transparency, enables control and allows for sound management. Consequently, the aim should be to structure AI-related governance not primarily along organizational dimensions, but across three levels of control that combine business priorities, data transparency, risk comprehension and technical control. At the same time, it must fit into existing governance structures without “having to reinvent the wheel”.

The first level concerns transparency regarding the actual AI footprint. Organizations must be able to systematically track where AI is being used, which data flows into which systems, and which models, tools and third-party providers are involved. Without this transparency, effective control is virtually impossible.

The second level encompasses control in architecture and operations. Safeguards must be integrated directly into the structure of the systems – for example, through a clear separation of user input, system prompts and retrieval, as well as through controlled interfaces and clear requirements for the use of external models.

The third level focuses on continuous assurance rather than one-off authorization. AI systems are constantly evolving due to new data sources, adapted models or changes in usage. For example, a use case may be initially approved, but the underlying model, prompt logic or connected data sources may change later without a new risk assessment being conducted. Organizations therefore need an effective model to enable continuous measurement and testing: monitoring of outputs, traceability of critical responses, regular red-team and penetration tests, reviews of prompt and retrieval logic, and an understanding of incident response that takes AI-specific manipulations into account.

This approach requires answering three key questions: Where and for what purpose is AI actually being used? What data, models and external dependencies are involved? And how can we ensure that usage, control and risk remain in balance over the long term?

What banks in particular need to do differently now

This shift is particularly relevant for banks and other financial service providers. In this sector, AI intersects with highly regulated environments, sensitive data, complex process landscapes and a business model that relies heavily on trust. The BIS and the EBA rightly emphasize that AI places special demands on governance, third-party risk management and operational resilience.

This gives rise to four implications. Firstly, the traditional perimeter logic is shifting: the relevant boundary extends not only around systems, but also into the interactions between humans and systems, and between systems themselves. Secondly, data control is becoming dynamic: in addition to access rights, what matters is how data is contextualized, combined and further processed. Thirdly, vendor risk is increasingly structural, as AI is frequently delivered via APIs, foundation models and SaaS components. Fourthly, decision-making capability itself becomes a bottleneck: those who are unaware of the consequences of AI usage cannot take effective action.

This is precisely where the real “so what” lies for management: organizations must be empowered to systematically understand and manage AI-driven data flows, dependencies and vulnerabilities.

Why it is crucial to act now

AI is increasingly being integrated into processes, knowledge work, customer interaction and control logic within banks and financial services providers. As a result, its use is often growing faster than the ability to control it effectively. This is precisely where the risk lies – and, at the same time, the opportunity. However, risks already arise at the moment when organizations deploy AI in production without having established transparency regarding data flows, clear management of external dependencies and a robust control architecture.

Horn & Company supports banks precisely at this juncture: we first establish transparency regarding AI use cases, data flows, tools in use and dependencies on third-party providers, derive a robust risk profile from this, and translate it into an actionable governance and control architecture. In doing so, we combine business acumen, data governance, regulatory requirements and IT security into an integrated control framework – from assessing the current situation through defining objectives and prioritization to the concrete embedding of controls within processes, architecture and the operating model.

Our goal: not only to make AI manageable, but to leverage it specifically as a strategic advantage.

Ready to take the next step?

Whether you have initial ideas or concrete plans, we listen, ask questions, and develop them further together. In a non-binding initial consultation, we clarify where you stand and how we can support you.

//About the authors

//You might also be interested in

25. February 2026
Horn & Company and Otera Enter Strategic Partnership to Advance Agentic AI
Horn & Company and Otera form partnership for Agentic AI. The goal is scalable end-to-end automation of processes with measurable AI impact.
Read more
12. June 2025
A fundamental change in strategy as a way out of the crisis for the German automotive industry
Electrification and digital strategy urgently needed to keep the German automotive industry globally competitive.
Read more
19. May 2025
Digitizing individual processes, maintaining standards -squaring the circle 
Mit der Power Platform Prozesse flexibel digitalisieren – ohne bestehenden Standard-Kernsysteme zu verbiegen.
Read more