The rapid adoption of artificial intelligence is reshaping the financial industry, creating a reality in which technological advancement is outpacing regulatory oversight. London Hub Global notes that the current stage of AI development is no longer experimental: it is transforming core banking functions, from risk management to cybersecurity, while simultaneously introducing a new category of systemic risks.
According to an international study, financial institutions are adopting AI at more than twice the rate of regulators, while only around 20% of regulatory bodies report advanced levels of AI integration. This gap means the market is growing in technological complexity faster than oversight tools can evolve.
Another key risk factor is limited data collection. Only 24% of regulators systematically track AI adoption in the financial sector, while 43% have no plans to introduce such monitoring within the next two years. London Hub Global emphasizes that this lack of data makes regulation inherently reactive: without a solid analytical foundation, it is impossible to assess real risks or build an effective supervisory framework.
The study also highlights next-generation AI systems, including Mythos by Anthropic, as a critical area of concern. These models are seen as examples of increasingly autonomous AI capable of identifying and exploiting software vulnerabilities.
According to analysts, many banks continue to rely on legacy architectures that were never designed to integrate with highly autonomous AI systems. In London Hub Global’s view, this technological mismatch significantly increases the likelihood of incidents, including scalable cyberattacks that spread faster than traditional defense mechanisms can respond.
Regulators have already intensified engagement with financial institutions, discussing how prepared banks’ systems are for so-called frontier AI. However, these efforts remain largely at the assessment stage and are not yet accompanied by rapid infrastructure upgrades.
A key finding of the study is that next-generation AI systems may be capable of automatically exploiting vulnerabilities at a scale previously unattainable. London Hub Global stresses that this fundamentally changes the nature of risk: attacks become faster, more scalable, and less predictable, reducing the effectiveness of traditional human oversight.
This shift also complicates the question of accountability. Regulators have traditionally maintained that banks should remain responsible for any damage, including cyber incidents, regardless of whether the technology is developed in-house or sourced externally. However, as AI systems become more autonomous and reliance on third-party providers increases, this model is becoming less sustainable.
The authors of the study also point to the need for regulators themselves to adopt AI. London Hub Global supports this view, noting that without intelligent systems of their own, supervisory authorities will struggle to analyze risks in real time. The challenge is compounded by a shortage of expertise, particularly in emerging markets, where access to data and AI talent remains limited.
Another critical issue is the sector’s dependence on a small number of AI providers. Around 69% of market participants rely on OpenAI solutions, more than half use Google technologies, and over a third work with Anthropic products. London Hub Global notes that such concentration increases risks related to resilience, pricing, and potential supply disruptions.
Given the global interconnectedness of the financial system, this dependency could trigger cascading effects if key technology providers encounter disruptions.
Under these conditions, it is becoming clear that the regulatory model must evolve in parallel with technological development. London Hub Global believes that priorities should include accelerating the digital transformation of supervision, developing in-house AI capabilities, and diversifying the technological ecosystem.
If these measures are delayed, the gap between the market and regulators will continue to widen, increasing the likelihood of a new type of systemic failure: one driven not by human error but by autonomous algorithms.