Putting trust in chatbots
Artificial intelligence (AI) interfaces and chatbots could be revolutionary for financial institutions – but only if they strike the right balance between human and machine interaction, argues Jeremy Pounder, futures director at Mindshare.
AI is changing the banking industry as we know it. Already, banks are using AI within everyday payments, money management and digital self-service. For instance, voice recognition technology is being used by the likes of Barclays as a form of secure ID for telephone banking customers, while challenger institution Atom Bank allows its customers to log on via a facial recognition system.
AI’s advanced natural language processing and machine learning mean it can generalise across large data sets, detecting and extrapolating patterns to create new solutions and actions. It is in this space that the AI chatbot emerges as a transformative tool, curating financial institutions’ endless data overload into manageable chunks, so that machine assistance can complement human support and make self-service easier.
For customers, chatbots promise a better, faster experience – less time spent waiting in the cashier queue or navigating call centre menus, and problems resolved more quickly. For banks, the promise is greater operational efficiency on top of a better user experience – if chatbots can handle basic queries about payments, overdrafts and savings, call centre operators are freed up to handle the more complex queries, adding greater value to the business overall.
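As a rough illustration of that triage, a chatbot might route a handful of routine intents to automated handlers and escalate everything else to an operator. The sketch below is purely illustrative: the intent names and responses are assumptions, not any bank’s actual taxonomy.

```python
from typing import Callable, Dict

# Hypothetical routine intents a banking chatbot might handle on its own.
ROUTINE_INTENTS: Dict[str, Callable[[str], str]] = {
    "check_balance": lambda user: f"{user}, your current balance is shown under Accounts.",
    "recent_payments": lambda user: "Here are your five most recent payments...",
    "overdraft_limit": lambda user: "Your arranged overdraft limit is listed in the app.",
}

def escalate_to_human(intent: str, user: str) -> str:
    """Stand-in for a hand-off: a real system would transfer the session."""
    return "Connecting you with a member of our team who can help."

def route_query(intent: str, user: str) -> str:
    """Answer routine queries automatically; pass complex ones to a person."""
    handler = ROUTINE_INTENTS.get(intent)
    if handler is not None:
        return handler(user)
    return escalate_to_human(intent, user)

print(route_query("check_balance", "Sam"))     # handled by the bot
print(route_query("mortgage_dispute", "Sam"))  # escalated to a human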
However, the challenge chatbots present is that conversations with AI can leave some customers feeling dissatisfied, empty or even de-humanised. A conversation with a chatbot can be underwhelming, irritating or simply odd.
So how can banks build chatbots that leave people feeling more, not less, human as a result of the experience? How can banks build chatbots that live up to their promise? These are the questions we set out to answer in our recently published research, “Humanity in the Machine”, produced in collaboration with Goldsmiths, University of London and IBM Watson.
Firstly, financial institutions need to focus on building trust. Through a series of biometric experiments measuring stress levels, we found that users are less forgiving of machines making mistakes than humans.
That means banks need to be conservative in their ambitions and in their early iterations, making sure the chatbot gets little wrong while trust is being built. That may mean asking more questions than is technically required to provide an answer, in order to build confidence in the results.
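One way to picture that conservatism: before executing a request, the bot restates what it believes the user wants and asks for confirmation, even when it could technically act straight away. A minimal sketch, with hypothetical names, assuming a payments bot:

```python
def handle_transfer_request(amount: float, payee: str, confirmed: bool = False) -> str:
    """Confirm before acting: an extra question that trades speed for trust."""
    if not confirmed:
        # The bot could act immediately, but restating its understanding
        # first reduces the chance of a trust-destroying mistake.
        return (f"Just to check: you'd like to send £{amount:.2f} to {payee}. "
                "Shall I go ahead? (yes/no)")
    return f"Done. £{amount:.2f} sent to {payee}."

print(handle_transfer_request(250.0, "J. Smith"))
print(handle_transfer_request(250.0, "J. Smith", confirmed=True))
```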
But intriguingly, we found that consumers are often more trusting of chatbots with sensitive information than they are of human customer service operators. This was highlighted in the research: some respondents said that working out their expenses was difficult and time-consuming, and that they would be more comfortable relying on the support of a chatbot than on an actual human.
The research also showed that people find chatbots more reliable at collecting and storing large amounts of information, and more accurate in their responses. More than a quarter of respondents said they were happier to give sensitive information to a chatbot.
People are prepared to trust chatbots, and so banks now need to make sure their development decisions build on this rather than undermine it.
Secondly, institutions need to align the chatbot’s tone of voice with their values without coming across as trying to be too “chatty”.
Working with IBM Watson we set out to explore the tone of voice issue, by testing two alternative banking chatbots with very different personalities: one was chatty, informal and conversational; the other was more straightforward, with a serious and functional tone of voice.
Many found the chattier version unnecessarily off-putting, patronising or even weird. As one respondent put it: “The chatty one is like my Dad when he uses emoticons, it’s creepy.”
Banks need to give the chatbot a tone of voice which expresses their personality in a way that is flexible, contextual and personalised to different users and different situations. This will mean using copywriters alongside programmers to create consistent style and tone.
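One simple way to make tone contextual is to select from copywriter-authored variants of each message according to the situation, so the same bot can be warm for a routine query and plainly factual for a sensitive one. A minimal sketch, with illustrative contexts and copy:

```python
# Copywriter-authored variants of one message, keyed by situation.
# The contexts and copy lines below are illustrative assumptions.
GREETINGS = {
    "routine": "Hi! What can I help you with today?",
    "sensitive": "How can I help you with your account?",
    "complaint": "I'm sorry you've had trouble. Let me look into this for you.",
}

def greet(context: str) -> str:
    """Pick the register that fits the situation; default to neutral."""
    return GREETINGS.get(context, GREETINGS["sensitive"])

print(greet("routine"))
print(greet("complaint"))
```

Keeping the copy in data rather than code also lets copywriters revise tone and style without touching the programmers’ logic.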
Finally, banking organisations need to avoid making a “human” chatbot an end in itself. A “human” AI is not defined by how human it appears to be, or how life-like its interactions are. A human experience is defined by the experience itself: it is measured by how a person feels when dealing with the AI, not by some intrinsic humanity in the technology.
A “human” experience is defined by how the user feels, not how life-like the chatbot is.
Chatbots should aim to use context and emotional understanding to deliver a “human” experience by meeting the user’s need. In doing this the style of the chatbot should ideally go unnoticed.
If it feels too robotic, interacting with it leaves the customer feeling de-humanised. If it’s too life-like, the user can be left feeling patronised or even disturbed.
The challenge is to get the balance right and leave the user feeling as though they have had a human experience. And, crucially, to avoid falling into the “uncanny valley” by creating a chatbot that feels creepy by attempting to emulate humanity.
We’re all going to be getting used to communicating with AI in one form or another over the coming years, and for all industries, banking included, there is much at stake.
Get it right, and the result is mutually beneficial: customers enjoy a better experience that enhances their humanity, while banks cut operational costs and strengthen their reputation amongst consumers. Get it wrong, and a bank’s reputation could be on the line.