How the EU AI Act could transform financial services
While artificial intelligence (AI) has enjoyed a huge spike in popularity across all areas of financial services over the last year or so, European regulators have set themselves the ambitious task of keeping on top of a technology that shows no signs of slowing down – a task embodied by the imminent arrival of the EU AI Act.
AI’s impact on the evolution of financial services has extended to everything from the personalisation of the banking experience to streamlined lending assessments, alpha-seeking investment strategies and avant-garde chatbot services. The technology has left no stone unturned in its effort to make its mark on the market.
And while AI has delivered myriad benefits for the industry and the customers it serves during its rise to prominence, it is this very prominence that has spurred regulators to question, at length, the potential risks of a technology innovating at such pace.
How and why AI produces the decisions that it does appears to be among the top considerations for the European Commission (EC), which, since 2021, has sought to deliver a first-of-its-kind regulation that aims to bring the technology within a clearer regulatory scope.
Righting the risk
At its core, the EU AI Act seeks to implement a horizontal regulatory structure capable of ensuring that any system placed on the EU market is trustworthy, safe and lawful. It will apply both to providers operating within the EU and to providers based in third countries whose systems reach the EU market.
The EC has adopted a risk-based approach to achieve this, and will use its latest regulation to determine whether a system – defined in line with the Organisation for Economic Co-operation and Development (OECD) definition of AI – presents an unacceptable, high, limited or low/minimal level of risk to the end user.
A system’s position on this scale will determine the level of regulation it falls subject to. Systems presenting an unacceptable level of risk are prohibited outright, with the blacklist including remote biometric identification systems, systems that encourage dangerous behaviour through cognitive manipulation, and those that can be used for social scoring purposes.
Moving down the scale, high-risk systems – those that could have an “adverse impact on people’s safety or their fundamental rights” – will be permitted, according to the draft regulation, but will need to adhere to a new set of rules.
These rules pertain to risk management, training data governance, transparency, cybersecurity and testing, and will require providers to register their systems in an EU-wide database prior to distribution.
The act divides high-risk systems into two separate subcategories: systems used in products that fall under existing product safety legislation (such as cars, medical devices and toys), and systems used in biometrics, critical infrastructure management and law enforcement, along with five other specified areas.
Further down, systems that present a limited level of risk to the end user, including chatbots and biometric categorisation systems, will have to operate under “a limited set of transparency obligations”, the EC says.
This will see AI-generated audio, image and video content labelled as such, giving users the choice of whether or not to continue their interaction with the technology.
Although low/minimal-risk systems will not, by default, need to conform to any additional regulatory requirements, the act encourages providers of such systems to abide by voluntary codes of conduct modelled on the obligations placed on their high-risk counterparts, chiefly for the purpose of aiding market conformity.
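For readers who think in code, the tiered structure amounts to a simple lookup from risk level to obligations. The Python sketch below is purely illustrative: the RiskTier enum, the obligations_for function and the paraphrased obligation lists are hypothetical labels introduced here to summarise the draft act’s approach, not terminology or requirements taken verbatim from the regulation.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers mirroring the act's risk-based approach."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "low/minimal"

# Hypothetical, paraphrased summary of what each tier entails under the draft act
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
    RiskTier.HIGH: [
        "risk management",
        "training data governance",
        "transparency documentation",
        "cybersecurity and testing",
        "registration in an EU-wide database before distribution",
    ],
    RiskTier.LIMITED: ["transparency obligations, e.g. labelling AI-generated content"],
    RiskTier.MINIMAL: ["no mandatory requirements; voluntary codes of conduct encouraged"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative list of obligations for a given risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value}: {'; '.join(obligations_for(tier))}")
```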
Elsewhere, the act also makes special provisions for generative AI (GenAI) and the content it produces, as well as the designation of national supervisory and market surveillance authorities.
This nutshell view of the regulation’s progress thus far confirms that the EC is actively pursuing a tempered approach to regulating a technology that continues to develop at speed.
When AI meets finance
The EC is casting a wide net through its approach to regulating AI, reflecting its understanding of the technology’s wide and varied application. So how will the act impact the worlds of finance and banking – two already heavily regulated industries?
Recent data collected by FIS reveals that while 30% of consumers “do not trust GenAI at all”, the arrival of regulation, enhanced human oversight and further transparency measures are all viewed as suitable tools for breaking down this barrier.
“Financial institutions have an opportunity to educate their customers on how they are embracing innovation with this new technology,” explains Silvia Mensdorff-Pouilly, senior vice president and general manager of banking and payments EMEA at FIS. “Crucially, they must be transparent about how they are using data if they are to succeed in winning trust.”
It is this crucial element of transparency that firms providing banking and financial services must now deliver, setting the industry on a new course of AI development and promoting even higher standards.
And this necessity to meet higher, cleaner and sharper transparency standards could inspire the next generation of services and innovation across the industry. Ryan Cox, senior director and co-head of AI at Synechron, a provider of AI solutions for fintechs and financial services firms, recognises how, in an industry that is “both strategic yet reactive”, the arrival of the EU AI Act could spur not only the increased prioritisation of transparency, but also the beginning of a new data exercise as a means of compliance.
“Trust is going to be really important, and the act has become more of a data exercise than anything else,” he comments. “It will encourage firms to tie in the source of information all the way, while simultaneously overlapping with different pre-existing regulations. Firms must now seek to balance compliance with innovation, and must really think through the legislation before distributing their services.”
Regulatory rigour
Given the wariness towards AI, and GenAI in particular, revealed by FIS’ research, questions remain around whether the act goes far enough to fully allay the doubts and concerns that surround the technology.
For Michael Charles Borrelli, co-CEO and COO of AI & Partners, a professional services firm specialising in compliance with the EU AI Act, certain provisions highlighted in the EC’s latest regulatory endeavour “may need further clarification”. He also recognises that calls for “more stringent measures”, especially regarding high-risk applications, might soon be on the horizon.
“The regulation is undoubtedly a positive development, but continuous evaluation and potential adjustments may be necessary as the technology evolves,” he explains.
On the other hand, Niamh Kingsley, head of product innovation and AI at Delta Capita, believes the act “does enough to regulate AI for now”. Alongside her role at the firm, Kingsley provides thought leadership across the industry and recently sat on the UK Parliament’s AI Summit panel.
“I wouldn’t want to see any more detail in the initial legislation – I believe we need to move to the ‘let’s get it done’ phase – but I would like to see a well-managed iteration process as we start to work with the guidance,” she explains.
“For example, the act requires transparency through documentation and model submission, but does not provide frameworks or templates for doing that effectively and without significant legal overheads. I’d like to see collaboration and agility in the regulation, and let it evolve as we all gain experience in the market.”
Given the act’s tiered, risk-based approach to regulating AI, differing views across the industry on its efficacy and scope are to be expected, especially considering that the regulation enforces varied levels of compliance for different types of systems performing a wide range of tasks.
Next steps
As expected from the EC, the development of the EU AI Act has, to date, been well-paced. As for the latest developments, the European Parliament – which first called on the Commission to bring such an act to fruition with its recommendations on civil law rules on robotics, published back in 2017 – and the Council of the EU reached a provisional agreement in December, finalising which systems the act applies to and how its rules will be enforced.
A European Parliament vote to bring the act into force is scheduled for April. If it passes, member states will be expected to phase out prohibited systems within six months and apply general-purpose AI governance rules within 12 months.
Further down the line, the act is due to become fully applicable 24 months after entering into force, with the obligations for certain high-risk systems following at 36 months.
The arrival of the EU AI Act will certainly present a welcome learning curve for the industry and its exploration of the technology, and although elements of it may arguably need fine-tuning with the natural passage of time, it nonetheless marks a strong attempt at what many regulators across the world feared could never be achieved.