Banking on data
With the European Parliament recently issuing the world’s first set of rules on AI, I’m switching my focus a little this week towards banks’ use of data.
Although I have been coding since I was 12, it was not until much later that I realised the true power of data.
I started at Lloyds Bank in 1986 as a trainee (not graduate) programmer and was involved in a huge programme rolling out networked PCs to the bank’s branches. To put this in context, the programme would eventually implement local area networks in over 2,000 branches with over 40,000 connected PCs. Each branch would be connected via the bank’s own wide area network to its mainframes.
I was part of a team developing some of the very first branch applications based on the IBM 4700, which effectively acted as a server. After these were rolled out and the PCs had been installed, the bank went on to develop Microsoft Windows-based applications. At this point, we had also started developing a customer database in DB2. While I was dealing with many different applications and datasets, I still only saw data as input/output for a software program.
It was around 1992, on a new project, that I started to understand the power of data. The bank had analysed its customer database and realised that 80% of its profits were generated by just 5% of its customers. These customers weren’t necessarily wealthy; they simply did most of their banking with Lloyds. We used a rough calculation of customer profitability to identify the customers most valuable to the bank. The project I worked on provided a “local customer database” to branch managers so that they could manage these customers personally. Initially, the goal was just to ensure branch managers kept their most profitable customers. We developed a small Windows app in Visual Basic that allowed branch managers to query this database. At the time, there was no Microsoft Access or local database engine, so I wrote a query engine on top of an indexed file solution.
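To give a flavour of what a “rough calculation of customer profitability” might look like, here is a minimal sketch in Python. The bank’s actual formula was more involved; the field names, margins and figures below are purely my own illustrative assumptions.

```python
# A minimal, hypothetical sketch of a rough customer profitability
# calculation. Margins and costs are illustrative assumptions,
# not the formula the bank actually used.

def customer_profitability(avg_balance, loan_balance, annual_fees,
                           deposit_margin=0.02, loan_margin=0.03,
                           annual_service_cost=50.0):
    """Estimate the annual profit contribution of one customer."""
    deposit_income = avg_balance * deposit_margin   # margin earned on deposits
    loan_income = loan_balance * loan_margin        # margin earned on lending
    return deposit_income + loan_income + annual_fees - annual_service_cost

customers = {
    "A": customer_profitability(2_500, 10_000, 60),
    "B": customer_profitability(400, 0, 0),
}
# Rank customers so a branch manager can focus on the most valuable ones.
for name, profit in sorted(customers.items(), key=lambda kv: -kv[1]):
    print(f"Customer {name}: £{profit:.2f} per year")
```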
My first data epiphany came on this project when I had a meeting with the bank’s head of credit risk (Peter Harvey) and his understudy (Lewis Hyam, who went on to be the bank’s first chief data officer). Peter was a recognised global expert in credit risk, and in our meeting, he asked, “What else can we do with this data we have, beyond credit risk?”
I was confused. Surely anything you want! “What is it you want, Peter? Can you give me an example?” I asked.
“For example, can we determine if a customer is about to leave the bank?” he said.
My first answer was, “Only if they tell us.”
However, it was later, in a workshop with Lewis, that we came up with a possible solution. Essentially, if we saw a trend of reduced standing orders/direct debits and cashpoint transactions, we could deduce that the customer had started to use an account at another bank. Of course, there were more indicators, like regular transfers to an external account or salary payments stopping. But it was here that I learnt the key lesson that data tells a story, and that a combination of different fields can be extremely powerful for understanding customer behaviour.
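Expressed in modern terms, that heuristic might look something like the sketch below. The thresholds, field names and sample data are my own illustrative assumptions, not the rules we actually built:

```python
# Hypothetical sketch of the attrition heuristic described above:
# declining standing orders/direct debits and cashpoint usage, plus
# other indicators, suggest a customer is drifting to another bank.
# Thresholds and field names are illustrative assumptions.

def attrition_signals(months):
    """months: list of monthly summaries, oldest first."""
    first, last = months[0], months[-1]
    signals = []
    if last["direct_debits"] < first["direct_debits"] * 0.5:
        signals.append("standing orders/direct debits halved")
    if last["cashpoint_txns"] < first["cashpoint_txns"] * 0.5:
        signals.append("cashpoint usage halved")
    if last["external_transfers"] > first["external_transfers"]:
        signals.append("regular transfers to an external account")
    if first["salary_paid"] and not last["salary_paid"]:
        signals.append("salary payments stopped")
    return signals

history = [
    {"direct_debits": 8, "cashpoint_txns": 14, "external_transfers": 0, "salary_paid": True},
    {"direct_debits": 5, "cashpoint_txns": 9,  "external_transfers": 1, "salary_paid": True},
    {"direct_debits": 3, "cashpoint_txns": 5,  "external_transfers": 2, "salary_paid": False},
]
flags = attrition_signals(history)
if len(flags) >= 2:  # several weak signals combine into a strong one
    print("Possible attrition:", "; ".join(flags))
```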
Despite not having the luxury of high-end PCs and databases, we still found a way to “play with the data” to answer more questions about customers. I built another app for tracking and reporting trends. At the time, storing data on mainframes was expensive, so not all data had “history”. This additional app tracked a field like “monthly cashpoint transactions” over several months and reported the trend in behaviour.
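A minimal modern equivalent of that trend reporting might fit a simple slope to a few months of data. Again, the function and figures below are my own illustration, not the original code:

```python
# Minimal sketch of trend detection over a monthly series, e.g.
# "monthly cashpoint transactions". A least-squares slope tells us
# whether usage is rising or falling. Illustrative only.

def monthly_trend(values):
    """Return the least-squares slope of a monthly series (units per month)."""
    n = len(values)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(values) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, values))
    den = sum((x - x_mean) ** 2 for x in xs)
    return num / den

cashpoint_txns = [14, 12, 9, 7, 5, 4]   # six months, oldest first
slope = monthly_trend(cashpoint_txns)
print(f"Trend: {slope:+.1f} transactions per month")  # negative = declining usage
```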
The project itself had bigger challenges, bearing in mind we were giving the branch managers some of the very first laptop computers and a modem to download data. On top of this, we expected them to use quite advanced applications and to “analyse” data. As with many start-ups, it is not the idea or intent that makes a project a success or failure. The most important factor is how quickly users can extract value from a solution, and whether that value actually matters to them.
Despite having written a number of applications, this was the first time I truly understood the power of data. I learnt that important new data could be generated, as with the customer profitability calculation. Then I learnt that patterns across different data fields could tell us more about customer behaviour, and that trends were another form of data that could be valuable. As a coder, I had always thought that software was key, but I soon realised the value was really in the data.
This week, I’m just saying that before we hand over all our valuable data to AI, we should all look to improve our understanding of data and its possibilities. This is not something that should be left to just IT staff. This way, we can better understand what can and should be done by AI as well as what can’t or shouldn’t.
About the author
Dharmesh Mistry has been in banking for more than 30 years both in senior positions at Tier 1 banks and as a serial entrepreneur. He has been at the forefront of banking technology and innovation, from the very first internet and mobile banking apps to artificial intelligence (AI) and virtual reality (VR).
He has been on both sides of the fence and he’s not afraid to share his opinions.
He founded proptech start-up AskHomey (sold to a private investor in spring 2023) and is an investor and mentor in proptech and fintech. He also co-hosts the Demystify Podcast.
Follow Dharmesh on Twitter @dharmeshmistry and LinkedIn.
Read all his “I’m just saying” musings here.