Sibos 2023: Empathy and AI – banking’s duty of care
What happens when all aspects of our lives are optimised and automated? When our next moves are predicted by faceless algorithms?
And when that privilege is accessible only to a select few, what then will become of our society — and humanity?
Take a moment and think about our day-to-day lives. What we read on social media sites and news apps is often dictated by what the algorithms think will resonate with us. Have you ever clicked on a cat video, only to suddenly see cat videos everywhere? And what happens when you like and comment on someone’s posts multiple times? With each accolade, you become more entwined with each other’s digital personas, spiralling down the rabbit hole and ending up on a strange path without even realising it. Do I dare ask: will our relationships one day become so optimised and transactional that we seek each other out only when we stand to benefit and monetise?
I’d rather not imagine. But I do fear that is what we are slowly sleepwalking towards. At the heart of it all are trust and empathy — the very qualities that make us human.
Friend or foe?
It is hard to believe that ChatGPT was introduced only recently. The tool has quite literally taken the world by storm, accumulating over 100 million users within two months of its launch in late November 2022. It has also slowly seeped into corporate life — notwithstanding restrictions imposed by some major corporations, including Apple, Samsung, Bank of America, Deutsche Bank and JP Morgan Chase. Some of the most common use cases so far for generative AI include coding and debugging, content creation, and chatbots for customer support. While these areas are still mostly low-hanging fruit, the technology has already caused quite a stir across most business sectors.
Among the common concerns around generative AI systems are the lack of transparency in the large language models (LLMs) that underpin them and the potential for these tools to be used for malicious purposes. And there is no shortage of news stories about AI hallucination. In an era where we already have an overabundance of disinformation and scams, the potential for generative AI to be used to craft misinformation or convincing phishing messages is grave. Even more concerning, according to researchers, voice-cloning AI can learn and simulate someone’s voice from a three-second sample — opening up a whole new avenue for scammers to exploit.
Trust and security must be part of the DNA of an organisation and of the solutions it builds — especially for something as powerful as generative AI, which could have repercussions across all demographics and segments of society. They simply cannot be an afterthought.
The danger from adversarial AI is real and present. But it has nothing to do with the sensational stories some would have you believe, such as sentient robots. Rather, it is something even more fundamental. When we can no longer believe what we see or what we hear, what does that do to a society where trust is what binds us?
Duty of care
It’s one thing when AI recommends the wrong book for purchase. It’s quite another, with grave consequences, when AI perpetuates the systemic biases we are still confronting today, such as limiting access to loans for certain demographics of the population.
And the topic of inclusion is particularly important in addressing the economic divide. While it is exciting to see newer payment methods such as voice-enabled payments being introduced, we must ensure that we are not introducing new challenges around user protection and privacy. We must also be thoughtful in creating inclusive solutions that can serve a society with a diverse range of languages and dialects.
Borrowing the words of James Barker and Leda Glyptis: “If you are not reliable, you are liable.” This is especially true for a highly regulated industry such as banking. This is not to say we cannot experiment with generative AI. Rather, we must ensure that proper guardrails are in place before jumping in simply because something looks shiny and exciting.
Ultimately, the responsibility lies with humans — not faceless robots — to select the data sets and train the algorithms, and to decide where, when, and how the tools will be deployed, and whom they will serve. Who is at the decision-making table matters.
Choose wisely — with empathy.
About the author
Theodora (Theo) Lau is the founder of Unconventional Ventures. She is the co-author of Beyond Good and co-host of One Vision, a podcast on fintech and innovation.
She is also a regular contributor to top industry events and publications, including Harvard Business Review and Nikkei Asian Review.