A balanced approach to deploying GenAI in banking
As new technologies emerge, there is a fine balance to strike between rushing headlong into deployment before the technology has sufficiently matured and missing the boat completely. Banks’ adoption of generative artificial intelligence (GenAI) is illustrative of the tension between these extremes.

Bridging the gap between cutting-edge AI advancements and practical, reliable digital banking solutions
Many challenger banks and neobanks embraced GenAI with great enthusiasm and jumped straight to handing customer-facing functions, such as call centre support, over to the technology. They jettisoned customer service personnel only to realise that the technology has not reached the level of artificial general intelligence, where it matches or surpasses human capabilities. As a result, some have had to pull back and rethink their GenAI deployments.
Traditional banks, on the other hand, have approached GenAI with much trepidation. These long-standing institutions are typically risk averse, have legacy technology stacks and are concerned with regulatory restrictions, which often means they are slow to begin experimenting with emerging technology. However, taking such an approach with a fast-evolving technology like GenAI essentially ensures being left behind the disruptors.
Clearly, neither extreme works. Waiting years until a regulator explicitly sanctions an emerging technology will hamper future competitiveness, but looking at the technology as a silver bullet and jumping in without considering potential shortcomings is also not the best way forward, as an organisation can lose trust in new technology prematurely.
Managing this balance requires a measured approach. Both traditional and challenger banks should identify appropriate GenAI use cases, performance test each one, assess potential risks, and then progressively expand deployment.
Debunking myths
While AI and AI-powered systems designed for conversational interfaces, such as chatbots, are not new in banking, ChatGPT and other large language models (LLMs) have ushered in a greater level of AI agent sophistication. In this new paradigm, it is possible to combine the vast knowledge of general-purpose AI models with the specific data of individual customers, including banking data, to create higher quality automated conversations.
Next-generation AI agents go beyond chatbots and are better able to process natural language and respond to complex requests. It’s no longer a simplistic two-way conversation, where a person interacting with conversational AI provides specific inputs and expects quasi-intelligent answers in response. Now, it’s possible to teach certain skills to conversational AI agents, combining unstructured conversations with well-structured, linear algorithms to perform certain actions.
This is a game changer. Previously, banking processes and policies were designed in a universal way; now GenAI makes it possible to standardise based on the action or skill. For example, GenAI could be used to analyse a process, such as blocking or issuing a card, making a payment, or filing a complaint, and determine the applicable skill for each discrete task. This could fundamentally improve the level of customer personalisation and contextualisation.
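In code, this pattern of pairing an unstructured conversation with well-structured, linear skills might look something like the following sketch. All names here (classify_intent, SKILLS, handle) are illustrative assumptions, not any particular vendor's API, and the keyword classifier stands in for what would be an LLM call in a real deployment.

```python
# Hypothetical sketch: route a free-form customer request to a discrete,
# well-structured "skill". A production system would replace classify_intent
# with an LLM call; a keyword stand-in keeps this self-contained.

SKILLS = {
    "block_card": lambda ctx: f"Card ending {ctx['card_last4']} blocked.",
    "file_complaint": lambda ctx: f"Complaint logged for {ctx['customer_id']}.",
}

def classify_intent(message: str) -> str:
    # Stand-in for an LLM intent classifier.
    if "lost my card" in message.lower() or "block" in message.lower():
        return "block_card"
    return "file_complaint"

def handle(message: str, ctx: dict) -> str:
    skill = classify_intent(message)   # unstructured conversation in...
    return SKILLS[skill](ctx)          # ...dispatched to a linear algorithm

print(handle("I lost my card, please block it",
             {"card_last4": "1234", "customer_id": "C-42"}))
```

The key design point is the separation: the model only chooses the skill, while the skill itself remains a deterministic, auditable procedure.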
However, it is important to understand that GenAI is a statistical model that generates responses probabilistically, which means it can make mistakes and may respond more slowly than expected.
Today, GenAI is not mature enough to be deployed without human intervention. Designing fully autonomous tasks introduces significant risks into the process, such as hallucinations. For example, earlier this year Apple had to suspend an AI news summarisation widget after media outlets complained that it repeatedly made mistakes and confused facts.
This happened because Apple rolled out a fully autonomous capability: information was summarised without human supervision or intervention, then presented as if it came from a trustworthy source.
Route to adoption
Today, banks need to be realistic about GenAI’s limitations. Most people – even those in IT and bank leadership teams – don’t have a detailed understanding of how the technology works. This leads to a mismatch between expectation and reality, and to harmful effects when GenAI is applied without regard for its limitations.
As such, individual banks need to explore specific use cases, run tests and advance deployment in small but steady steps to gain trust in adopting GenAI – iterate fast, fail fast and learn fast.
But to really drive forward adoption, the banking industry as a whole should look at training GenAI models, not just augmenting them. We should be creating sector-specific language models that better suit the needs of discrete business segments. By training a shared model on non-proprietary banking practices, processes and terminology, individual financial institutions could then feed it exclusive data, such as customer or bank data, in a private environment. Fundamentally, this approach will produce better quality results.
As we move to a multi-agentic environment, the output procured by one specific AI model could be used as an input to another specialised AI model, effectively creating team-like collaboration where each member of the team comes with their own unique knowledge, skills and perspective.
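A multi-agentic pipeline of this kind can be sketched as a simple chain where one specialised agent's output is the next agent's input. The agent functions below are hypothetical stand-ins for calls to separate, specialised models; the amounts and messages are invented for illustration.

```python
# Hypothetical sketch: chain specialised agents so that one model's output
# becomes another model's input, like colleagues handing off work.

def fraud_agent(transaction: dict) -> dict:
    # Stand-in for a model specialised in risk assessment.
    transaction["risk"] = "high" if transaction["amount"] > 10_000 else "low"
    return transaction

def messaging_agent(transaction: dict) -> str:
    # Stand-in for a model specialised in customer communication.
    if transaction["risk"] == "high":
        return "We paused this payment pending a quick security check."
    return "Payment sent."

def pipeline(transaction: dict) -> str:
    # Each "team member" contributes its own knowledge in sequence.
    return messaging_agent(fraud_agent(transaction))

print(pipeline({"amount": 25_000}))
```

Each agent stays narrow and testable on its own, while the pipeline as a whole delivers the team-like collaboration described above.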
In future, banks could leverage GenAI by combining well-defined atomic skills (the smallest discrete units of a task) with the technology’s ability to contextualise data, needs and behaviour. GenAI will be able to determine the process and intelligently apply these skills in the most efficient way. Moreover, the sequence in which they are applied will be based on the individual customer, delivering hyperpersonalised products and services.
By taking this approach, GenAI could also be a force for good by helping to financially educate customers. For example, a bank’s GenAI agent could provide tailored advice and a call to action that is readable in the customer’s first language, adjusted to their education level and communication style, making it easy for them to comprehend.
While the customer is outsourcing the research, they aren’t outsourcing the decision-making. The outcome may be based on the GenAI agent’s nudge, but it is also educational: the agent can clearly explain why taking this particular action matters to them specifically, rather than to every other bank customer or to a randomised segment of 18- to 25-year-olds.
AI-powered innovation at Plumery
At Plumery, we share this vision, helping banks and other financial institutions seamlessly integrate GenAI-driven intelligence into their digital ecosystems. Whether through AI-powered customer interactions or real-time financial engineering, Plumery enables financial institutions to leverage AI effectively while maintaining human oversight where it matters most.
By bridging the gap between cutting-edge AI advancements and practical, reliable digital banking solutions, we empower institutions to modernise with confidence, delivering transformative experiences without compromising security, compliance, or trust.
Sponsored by Plumery, the winner of Banking Tech Start-up of the Year at Banking Tech Awards 2024.