Sibos 2018: Can banking embrace AI to better foster trust?
Let’s put our trust in Richard Buckle, founder and CEO of Pyalla Technologies, to answer whether banking can embrace artificial intelligence to better foster trust.
An article I wrote a short time ago began in Dallas, Texas. Nothing unusual about that, but many years ago, as an Australian living in Canada with ambitions to eventually work in the US, my first serious foray there was the two months I spent living in Dallas. The nostalgia doesn’t end with Dallas, either: shortly I will be back in Sydney for Sibos 2018, and this time I am a naturalised US citizen visiting town.
It has been five years since I last visited the Antipodes and I expect to see a lot of changes as a result of the ongoing building boom, particularly around the harbour and in the central business district, where a number of Australian banks have their headquarters. However, trust me, banks are having it tough in Australia as more misdeeds continue to surface under a Royal Commission looking into their practices.
It may come as a surprise to many that this Royal Commission’s full title is indeed “The Royal Commission into Misconduct in the Banking, Superannuation and Financial Services Industry”.
Whenever I read of misconduct it conjures up all sorts of shenanigans, but rarely do I associate any shenanigans with staid old bricks-and-mortar banks. Indeed, in the article referenced above, I quoted from a piece published back in February, “Fintechs see their profile rise, with banks playing a large part”. Anthony Rjeily, digital and fintech practice leader for KPMG, had said that: “Right now there’s a high level of interest on both sides when it comes to the marriage of fintechs and banks … Both sides bring value propositions that are very complementary: Fintechs bring an innovation culture, new business models and new methodologies that banks struggle with due to legacy infrastructure. On the flip side, banks have the consumer trust and distribution channels” that many fintechs lack.
Misconduct and trust rarely appear in the same paragraph of any article on finance, but here you have it. Even as a potential downside for fintechs is a lack of trust, banks seem to be offering little more than breadcrumbs as they succumb to a serious erosion of public trust. And yet, many observers keep their fingers crossed in the belief that this is all a temporary aberration in Australia and the banks will get better.
However, one alternative view is that it really isn’t the banks at fault, but rather the people in the banks, having apparently lost all sense of the good old Aussie “fair go” – could it be possible to take those bankers out of banking? If so, why not all the bankers – could it be possible to automate in such a manner that decision making is guided by algorithms?
With as many discussions taking place around artificial intelligence (AI) as there are these days, whether in the media or during conferences, the question has to be asked – can banking embrace AI to better foster trust and indeed, if banks are having problems as fundamental as ensuring trust, does this throw open the door even wider for fintechs?
Or is this all backwards – is the future of AI more about trust than what we expect from banks and fintechs? Just as important, if the face of our bank manager reveals nothing, what of the algorithms powering AI – will we trust what is essentially a black box, the computer and its algorithms?
In an article published in July, “The Future of Artificial Intelligence Depends on Trust”, authors Anand Rao and Euan Cameron note how “some machine-learning models that underlie AI applications qualify as black boxes, meaning we can’t always understand exactly how a given algorithm has decided what action to take. It is human nature to distrust what we don’t understand, and much about AI may not be completely clear. And since distrust goes hand in hand with lack of acceptance, it becomes imperative for companies to open the black box”.
Offering one explanation, Rao and Cameron then suggest that “to open up the AI black box and facilitate trust, companies must develop AI systems that perform reliably — that is, make correct decisions — time after time. The machine-learning models on which the systems are based must also be transparent, explainable, and able to achieve repeatable results. We call this combination of features an AI model’s interpretability”.
The misconduct of Australian banks certainly doesn’t help strengthen trust between them and their customers. Furthermore, the prospect of somehow banks and fintechs getting together because banks are more trustworthy than fintechs doesn’t ring true for all markets.
On the other hand, for all of its upside potential to improve trust between financial institutions (FIs) and their customers, it will take time before AI itself becomes trustworthy. Remember the premise of the Tom Cruise movie Minority Report, in which a “pre-crime” unit – a specialised police department – apprehends criminals based on foreknowledge. What could possibly go wrong? You didn’t get your car loan? No, we cannot tell you how we came to that conclusion, as it is proprietary. Great!
It is a real shame we didn’t have some form of foreknowledge highlighting the ongoing misconduct among bankers. On the other hand, if it were all just a case of getting customers to deepen the trust they have in their respective FIs, we might not even need computers or AI.
Of course – trust me – isn’t that all we really want from those who watch over our money?
Click here to see more of what’s going on at Sibos, including our flagship Daily News at Sibos editions.
Follow us on Twitter @DailyNewsSibos