Inaccuracies of facial recognition: security, prejudice, pollution and fashion
Facial recognition is fast becoming a reality in our day-to-day lives. It’s rapidly being deployed to replace more traditional security checks such as phone lock screen passwords, banking app logins, physical keys, border control gates and point of sale (PoS) purchases.
China’s Ant Financial lets users pay for food at major vendors such as KFC with their faces, Ford and Intel give car owners access to their vehicles using their face, and energy company Chevron uses people’s faces to permit them access to its power plants.
But earlier this month, the biometric technology was pulled under the spotlight again after the UK’s Metropolitan Police decided to roll out facial recognition CCTV cameras at a public shopping centre in east London. Despite warnings from its own watchdogs over its effectiveness, or lack thereof, the Met argued it was an “intelligence-led” initiative to curb violence in the area.
And they aren’t the only law enforcement body to use the technology. Amazon is selling its facial recognition product – Amazon Rekognition – to a host of law enforcement agencies, among them the Washington County, Oregon, Sheriff’s Office, which uses it to match “persons of interest” against mug shots.
It’s safe to say facial recognition technology is utterly versatile, and it isn’t just used for security purposes. It can also be used to pick up on pre-existing health conditions, allow shoppers to try on clothes virtually, and give brands a wealth of data so they can better target their adverts to certain demographics.
As the technology sees wider adoption and testing, the creeping question of its ethics is getting harder and harder to ignore. This month the EU dropped a proposed five-year blanket ban on European countries using the biometric software in public spaces, which means it is now up to each country to lay out its own policies on how the technology can be used.
Whilst the breaching of privacy laws is a very real concern, the issue of accuracy is often cited but rarely explained. This seems to be, in part, because the companies providing the technology are hesitant to reveal exactly how their solutions work for fear of giving away their “secret sauces” to the competition.
Related: Mastercard plans its first European cybersecurity centre
Despite authorities such as the European Commission (EC) saying the technology is prone to inaccuracy, exactly how it fails remains largely unexplained. FinTech Futures explored the tech’s inaccuracies with Roger Grimes, a former Microsoft security architect who has been paid to hack biometric solutions throughout his career and is now writing a book entitled ‘How to hack multi-factor authentication’.
“As facial recognition gets widely deployed, they [the companies providing the software] have to make it intentionally less sensitive,” says Grimes. If the matching is too strict, Grimes explains, it will produce too many false negatives – legitimate users whose faces should match but are rejected.
If the biometric software holds a static copy of your face, it will be accurate at that specific point in time, but it would then increasingly fail to match your real, ever-changing face with every passing day.
Therefore rolling the technology out en masse comes at the cost of accuracy, because no vendor wants to deploy an authenticator which takes an arduous number of attempts to work, or doesn’t work at all.
“To have less false negatives, you have to be more false positive,” says Grimes, who has beaten one facial recognition technology simply with a picture of a face. “There are kids hacking it on YouTube,” he points out, adding: “Every ‘unhackable’ biometric technology gets hacked the next day by a 17-year-old teenager.”
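To make that trade-off concrete, here is a minimal Python sketch, purely illustrative and not based on any vendor’s implementation: a matcher typically reduces a scan to a similarity score and compares it against a tunable threshold, and moving that threshold trades rejected legitimate users (false negatives) against accepted impostors (false positives). All scores and names below are hypothetical.

```python
# Illustrative sketch of the sensitivity trade-off Grimes describes.
# A face matcher typically reduces a scan to a similarity score and
# compares it against a tunable threshold; these scores and numbers
# are invented, not any vendor's real values.

def is_match(similarity: float, threshold: float) -> bool:
    """Accept the face only if its similarity score clears the threshold."""
    return similarity >= threshold

# Hypothetical scores: the same enrolled user re-scanned on different
# days (lighting, ageing, glasses) versus impostors holding up photos.
genuine_scores = [0.91, 0.84, 0.78, 0.72, 0.65]
impostor_scores = [0.70, 0.55, 0.40]

for threshold in (0.80, 0.60):
    false_negatives = sum(not is_match(s, threshold) for s in genuine_scores)
    false_positives = sum(is_match(s, threshold) for s in impostor_scores)
    print(f"threshold={threshold}: "
          f"{false_negatives} genuine users rejected, "
          f"{false_positives} impostors accepted")
```

Tightening the threshold locks out legitimate users whose appearance has drifted from the enrolled template; loosening it lets everyone back in – including the impostor holding up a printed photo.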
The accuracy debate can take different forms. As well as being a security risk in the general sense, the software could also pave the way to significant discrimination. Without the right lighting conditions, darker skin tones are not detected as reliably as lighter ones.
A 2018 study by researchers at MIT and Stanford University found that commercial facial recognition technologies, including those from Microsoft, IBM and Amazon, carried skin-type and gender biases. The companies’ headline accuracy rates of around 97% rested on skewed data pools which were mostly white and male; measured across demographics, light-skinned men had a 0.8% misidentification rate while darker-skinned women had a staggeringly higher 34% misidentification rate.
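A single headline number can obscure exactly this kind of disparity. The short Python sketch below uses per-group counts fabricated purely to reproduce the 0.8% and 34% rates above, and shows how an aggregate error rate on a skewed pool stays deceptively low:

```python
# Hedged sketch: how one headline accuracy figure can hide subgroup
# disparity. The counts below are invented purely to reproduce the
# 0.8% and 34% error rates the study reported.

from collections import Counter

# (group, misidentified?) records for a deliberately skewed test pool:
# 1,000 light-skinned men but only 100 darker-skinned women.
results = (
    [("light-skinned men", False)] * 992
    + [("light-skinned men", True)] * 8
    + [("darker-skinned women", False)] * 66
    + [("darker-skinned women", True)] * 34
)

errors = Counter(group for group, wrong in results if wrong)
totals = Counter(group for group, _ in results)

print(f"overall error rate: {sum(errors.values()) / len(results):.1%}")
for group in totals:
    print(f"{group}: {errors[group] / totals[group]:.1%}")
```

On a pool dominated by light-skinned men, the blended error rate comes out at a respectable-looking 3.8% even though one in three darker-skinned women is misidentified.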
Read more: AI: Understanding bias and opportunities in financial services
Some facial recognition vendors deny that discrimination based on skin colour is “a big issue” for their software. UK-based Veridium, which has just launched its new product vFace, tells FinTech Futures it sees a far bigger problem with religious clothing.
“There is definitely a cultural bias in this technology, but it’s largely religious clothing rather than the colour of your skin, which we didn’t find a big issue with,” says Veridium’s chief product officer, John Spencer, who believes women in parts of the Middle East, such as Saudi Arabia, will inevitably be excluded from the software as they come from cultures where covering the face is the norm for women.
This is why Veridium is keeping its fingerprint biometric solution in full swing, finding fingerprints far more universally reliable than facial recognition because they are already recognised by national databases around the world. But despite Grimes’ claim that he hacked a facial recognition product with a printed-out photo, Spencer insists the software is far more “difficult to spoof” than passwords.
The accuracy of facial recognition as a form of authentication could also be subject to seasonal health alerts and even fashion trends. As the coronavirus spreads further across the world, more and more people are covering their faces with surgical masks to stop themselves from catching it. And long before this outbreak, masks were commonplace in areas of the globe with high levels of pollution, where respiratory problems are a genuine daily hazard.
Globally, toxic air will shorten lives by 20 months, according to the non-profit Health Effects Institute. In East Asia, children stand to lose some 23 months of their lives, and in South Asia more like 30 months. Even in countries such as the UK, figures are beginning to emerge on the death rate caused by air pollution. In some UK areas, such as Swindon, air pollution is responsible for more than 5% of all deaths among over-30s, according to Public Health England.
But people might not just be wearing masks because of a rapidly spreading global virus or dangerous levels of pollution. Surgical masks are fast becoming a fashion statement too. In Japan, a country with safer levels of air pollution than most of its Asian neighbours, its capital Tokyo houses boutique stores in Harajuku where you can buy printed and anime-themed masks.
Celebrities such as Ariana Grande, Katy Perry and Sofia Richie have all stepped out in masks, whilst rap duo Ayo and Teo wear various themed masks as part of their performance and say they have “boosted the mask economy”. In 2018, the fashion runways in London, Milan and Paris showcased “a new wave of face gear” from surgical masks and balaclavas to sculptural headwear.
China’s SenseTime will be rolling out a facial recognition product for building access control that incorporates a mask algorithm. According to its press release, the software can identify people with “high accuracy” even while they are wearing masks, as well as flag people who aren’t wearing the protective coverings and require them to put one on before granting access to a building.
It seems the accuracy of facial recognition is subject to a whole host of variables: sensitivity levels, race and gender bias, and even changes in health climates and fashion trends.
The lack of understanding clouding the technology, and the refusal to accept the ethical objections made against it, seem to come down to what Wendy Jephson, head of behavioural sciences at Nasdaq, said about the technology at FinovateEurope last week: “The reality is companies are getting to market fast. It’s a capitalist society, and people don’t ask enough questions, they just want to make money quickly”.
Read next: FCA: Innovative fintech entrants putting some customers at risk