From castles to banking ecosystems: the continuous evolution of banking software
“I believe that the more you know about the past, the better you are prepared for the future.” – Theodore Roosevelt.
The history of banking software is a fascinating tale of adaptation, driven by the relentless march of technology and ever-changing customer demands.
This journey began with monolithic fortresses built in mainframe languages like COBOL, moved through the component-based era dominated by Java and C++, and finally arrived at the dynamic world of microservices.
This week, I take a deeper dive into this evolution, exploring the strengths and limitations of each stage so that you can appreciate the future of banking software.
The COBOL castle: stability and security in a bygone era
In the very early days of computing, software was written in machine code, or first-generation language – essentially, instructions encoded in zeroes and ones. Next came second-generation (assembly) languages, where mnemonics stood in for machine instructions. It was not until the arrival of mainframe computers – which offered better storage and processing power and supported verbose third-generation languages like COBOL (COmmon Business Oriented Language) – that programming computers became much more accessible.
At the time, it was not possible to run separate programmes simultaneously and have them interact with one another. Hence, banking solutions were developed as monolithic applications: single programmes encompassing all banking functionality – accounts, transactions, loans, and everything in between. Imagine a huge castle housing every department of a bank under one roof.
This approach offered:
- Stability: COBOL was a mature and reliable language, ensuring consistent operation.
- Security: Centralised data storage within a single system minimised vulnerability.
- Simplicity: Development and deployment were straightforward due to the unified codebase.
However, these fortresses had their cracks:
- Scalability: Adding new features or functionalities became akin to adding extensions to a completed building – a complex and potentially disruptive endeavour, not least because people were already using the building.
- Agility: Changes and updates were slow and cumbersome, hindering innovation.
- Vendor lock-in: Banks often depended on specific COBOL vendors for maintenance and upgrades, limiting flexibility.
Remember that these systems were developed with staff as the end users. So, there was little concern for scalability, as concurrent mainframe access for a known number of staff could easily be accommodated.
The Java component revolution: breaking down the walls for agility
As the internet heralded a shift in the financial landscape towards a more digital future, the limitations of monolithic systems became increasingly apparent, not least because of the need to serve millions of customers rather than thousands of staff.
Enter the era of component-based architecture, spearheaded by Java. This approach broke down the monolithic structure into smaller, self-contained components, each with a well-defined function. Imagine transforming the previously monolithic castle into separate buildings – one for accounts, another for transactions, and so on. Although separate, these buildings were bound in one location, like a village with a stone wall surrounding it.
This componentised approach offered:
- Improved agility: Individual components could be developed and deployed independently, allowing for faster innovation.
- Enhanced maintainability: Fixing bugs or updating specific functionalities became more manageable.
- Vendor flexibility: Banks were no longer restricted to a single vendor for the entire system.
However, component-based systems still had limitations:
- Integration complexity: Developing robust communication channels between components added a layer of complexity.
- Scalability bottlenecks: While some components could scale independently, overall system scalability could still be hindered.
- Physical coupling: Components still had to reside on the same computer to interact with each other.
The microservices metropolis: a dynamic ecosystem for innovation
The need for even greater agility and flexibility paved the way for microservices architecture. This approach deconstructs the component-based system further, creating a collection of even smaller, highly focused services.
Each service has a single responsibility, such as account verification or loan approval. Leveraging internet protocols, these services no longer have to reside on the same machine. Imagine transforming the separate buildings of the component era into specialised micro-structures, each handling a specific task within the broader banking ecosystem, free of the physical restriction of residing on one computer.
Microservices offer:
- Unmatched agility: Independent development, deployment, and scaling of individual services enable rapid innovation and experimentation.
- Resilience: If one microservice fails, the entire system isn’t compromised. Other services can continue to function, minimising downtime.
- Openness: Microservices rely on APIs that facilitate communication and integration with external systems, paving the way for open banking initiatives.
- Flexibility: Unlike the previous generation of technology, these services need not be on the same computer or even the same network – they can be accessed with platform-agnostic APIs across the internet.
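To make the single-responsibility idea concrete, here is a minimal sketch of one such service: a hypothetical account-verification microservice exposing a platform-agnostic HTTP/JSON API using only the Python standard library. The service name, endpoint path, and account data are all illustrative assumptions, not taken from any real banking system.

```python
# A hypothetical account-verification microservice: one narrow responsibility,
# reachable over a plain HTTP/JSON API from any machine or platform.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

VALID_ACCOUNTS = {"12345678", "87654321"}  # stand-in for a real data store

class VerifyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expect paths like /verify/12345678
        account = self.path.rsplit("/", 1)[-1]
        body = json.dumps({
            "account": account,
            "valid": account in VALID_ACCOUNTS,
        }).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Keep the sketch quiet; a real service would log properly.
        pass

def make_server(port=0):
    # Port 0 asks the OS for any free port; the caller runs serve_forever().
    return HTTPServer(("127.0.0.1", port), VerifyHandler)
```

Any other service – or an external partner, in an open banking scenario – can call this endpoint with a standard HTTP client, with no knowledge of the language or machine behind it.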
However, the microservices approach also comes with its own challenges:
- Complexity management: Coordinating and managing a multitude of services requires robust orchestration and monitoring tools.
- Testing challenges: Testing the interactions and dependencies between numerous microservices can be intricate.
- Security concerns: The architecture’s distributed nature creates a wider attack surface, necessitating a more comprehensive security strategy.
The road ahead: welcome to banking technology plus
The evolution of banking software isn’t a clear-cut shift from one architecture to another. As technology continues to evolve, every bank, no matter how new, will eventually run a mix of technology from different eras.
Instead of trying to replace an entire system, many banks now realise that building around their core systems is possible. This avoids hugely risky and expensive multi-year projects while leveraging modern technology to address business needs far faster. The process is commonly called “hollowing out the core”: responsibility is gradually removed from the legacy system without ever replacing it wholesale.
As such, these banks are adopting a hybrid approach, leveraging the strengths of each stage. Legacy COBOL systems can be integrated with microservices for specific functionalities, creating a best-of-both-worlds scenario. As technology continues to evolve, the banking software landscape will undoubtedly keep pace, with new advancements and architectures shaping the future of financial services. I call this new landscape ‘banking technology plus’.
Banking technology plus
I define banking technology plus as a core modernisation strategy, which means the bank’s legacy core banking system is left in place and augmented with powerful new capabilities leveraging modern technologies like cloud and AI.
These new components target only specific capabilities based on business drivers. So, for example:
- A new product management capability can be added so that the existing core simply manages the ledger while the new component handles all product-specific rules and behaviour. The ledger would then only be a record of debits and credits.
- A new fraud capability leveraging AI and new data sources can replace a legacy rules-based fraud capability.
- Credit decisioning can be updated by diverting calls via APIs to a new credit risk module rather than using the old capability in existing technology.
Further still, virtual accounts can be created above the existing core, reducing the cost of additional accounts on the core and offloading other work, such as payment processing.
This enables faster time to market to achieve real business goals and reduces the immense risks associated with core banking replacement projects. Over time, using this approach, the legacy banking system will perform less and less of the banking functionality, and new modules will do more and more. Eventually, the old banking system could be replaced if required, but at much lower risk as most of the banking functionality now resides outside.
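The diversion of individual capabilities described above can be sketched as a simple API facade: calls for capabilities that have been “hollowed out” are routed to new modules, while everything else still reaches the legacy core. All function and capability names here are illustrative assumptions, not real systems.

```python
# A minimal sketch of "hollowing out the core": an API facade that diverts
# selected capabilities to new modules while the legacy core handles the rest.

def legacy_core(request):
    # Stand-in for the existing core banking system.
    return {"handled_by": "legacy-core", "capability": request["capability"]}

def new_credit_risk_module(request):
    # Stand-in for a modern, AI-driven credit decisioning service.
    return {"handled_by": "credit-risk-module", "capability": request["capability"]}

def new_fraud_module(request):
    # Stand-in for a new fraud capability replacing legacy rules.
    return {"handled_by": "fraud-module", "capability": request["capability"]}

# Capabilities already diverted away from the core; this map grows over time
# as more responsibility moves out of the legacy system.
DIVERTED = {
    "credit_decision": new_credit_risk_module,
    "fraud_check": new_fraud_module,
}

def route(request):
    """Send diverted capabilities to new modules; everything else to the core."""
    handler = DIVERTED.get(request["capability"], legacy_core)
    return handler(request)
```

As more entries are added to the routing map, the legacy core does less and less, until replacing it – if ever required – carries far less risk.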
As I identified in my previous column, “One core to rule them all”, banking technology plus should be implemented as Software-as-a-Service (SaaS) on a public cloud platform, dramatically reducing the cost and effort of implementation, maintenance, and operation.
This week, I’ve delved into the past to explain how I arrived at my “one core to rule them all” concept, combining those requirements with a core modernisation implementation approach.
Going forward, I see a shift from costly, risky core replacement towards a more targeted, agile core modernisation approach, which I call banking technology plus. I will expand on this in future posts.
About the author
Dharmesh Mistry has been in banking for more than 30 years both in senior positions at Tier 1 banks and as a serial entrepreneur. He has been at the forefront of banking technology and innovation, from the very first internet and mobile banking apps to artificial intelligence (AI) and virtual reality (VR).
He has been on both sides of the fence and he’s not afraid to share his opinions.
He founded proptech start-up AskHomey (sold to a private investor in spring 2023) and is an investor and mentor in proptech and fintech. He also co-hosts the Demystify Podcast.
Follow Dharmesh on X @dharmeshmistry and LinkedIn.
Read all his “I’m just saying” musings here.