AI-driven financial services must also champion fraud prevention, FCA warns
The increasing adoption of artificial intelligence (AI) within the financial services industry must be backed by sophisticated and proven fraud prevention measures, the UK’s Financial Conduct Authority (FCA) has warned.
This was the primary narrative of a speech given by Nikhil Rathi, the regulator’s chief executive, at the Economist Impact event in London this week.
Rathi’s speech centred on the growing impact of AI within the financial services industry, including the new opportunities and risks it presents, the role and implications of Big Tech firms as “gatekeepers of data”, and the FCA-led collaborative effort to ensure the market’s integrity amid its growing relationship with the technology.
Rathi largely recognised the benefits AI is generating for the market, such as automation and cost reduction. However, he also emphasised the harms likely to arise if the technology were “unleashed unfettered”.
“This means that as AI is further adopted, the investment in fraud prevention and operational and cyber resilience will have to accelerate simultaneously,” his speech read.
Stamping out cyber risks
Rathi confirmed the regulator’s “robust line” on this matter in its “full support for beneficial innovation alongside proportionate protections”.
“We will remain super vigilant on how firms mitigate cyber-risks and fraud given the likelihood that these will rise,” he said.
Aside from its continued investment in technology horizon scanning and synthetic data capabilities, the FCA’s stance on AI regulation is also backed by its newly established digital sandbox.
This sandbox will leverage real transaction and social media data, alongside synthetic data sources, to support the safe and compliant development of fintech innovation.
Internally, the FCA has also developed its own supervision technology, using AI methods for firm segmentation, portfolio monitoring and the identification of risky behaviours.
Rathi also made clear that while it’s not within the regulator’s remit to regulate AI itself, it is responsible for how the technology interacts with the provision and use of financial services.
This stance is becoming increasingly necessary given the contagion risk AI poses to the industry. Rathi cited amplified intraday trading volatility, compared with levels seen during the 2008 global financial crash, as a clear-cut example of this in action.
“This surge in intraday short-term trading across markets and asset classes suggests investors are increasingly turning to highly automated strategies,” he said.
The watchdog will test how its existing rules on senior managers’ accountability at the firms it regulates, along with the forthcoming consumer duty governing firms’ obligations to their customers, can be used to manage the risks and develop the opportunities arising from AI.
Above all, Rathi’s comments make clear that if the financial industry is to prosper from the advent of AI, it must also encourage the appropriate fraud prevention and security measures to develop at the same pace.