Facial recognition applications are increasingly widespread, ranging from mobile phone unlocking to video surveillance and criminal investigation. But how reliable is this technology? What hidden problems can we find in its use? And, above all, what can we do to minimize or even neutralize the impact of these problems?

Facial recognition once seemed exclusive to humans and impossible for machines, but today it is a largely solved problem, thanks to the recent evolution of deep learning neural networks and, more generally, of artificial intelligence. With them, the classic paradigm of detection, feature extraction and classification has become obsolete.

It is no longer necessary to adjust dozens of parameters and mix different algorithms. Now the networks simply learn from the data we feed them, provided it is suitably labeled.

Many successes, but also errors

The results are spectacular in terms of the correct recognition rates achieved with neural networks. Your phone always recognizes you, biometric access at your company never fails, and surveillance cameras always end up spotting the suspect.

Or maybe this is not always the case?

Robert Julian-Borchak Williams would probably disagree. This African-American citizen has the dubious honor of being the first person to have been arrested because a facial recognition algorithm incorrectly identified him.

That this first mistake involved an African-American does not seem to be a coincidence. Although the Julian-Borchak Williams case occurred in 2020, as early as 2018 the researcher Joy Buolamwini had published a study showing that facial recognition systems had difficulty identifying dark-skinned women. Her work reached the general public through the documentary Coded Bias.

Because yes, it seems that women have also suffered discrimination from artificial intelligence. The famous Amazon algorithm that discriminated against CVs containing the word “woman” is the clearest example. Fortunately, it was withdrawn once this sexist tendency was verified.

The bias keeps showing up

To test whether it is possible to find biases in facial recognition models, in a recent experiment three groups of students were asked to independently analyze the performance of different models. The models examined were those used in the deepface library.

The evaluation aimed to choose the best model in terms of recognition rate. The results obtained roughly coincided with those reported by the models’ authors. The few failures detected usually involved women and dark-skinned people. Notably, with some models face detection itself failed on people of color: it was not that the model failed to recognize the person, but that it did not even detect a face.
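By way of illustration, this is roughly what such an evaluation can look like with deepface, which exposes the underlying model as a simple parameter of its verification call. The image pairs and the list of models below are hypothetical placeholders, a minimal sketch rather than the exact setup of the experiment:

    from deepface import DeepFace

    # Hypothetical pairs of photos of the same person; in a real evaluation
    # these would come from a labeled test set.
    pairs = [("person1_a.jpg", "person1_b.jpg"),
             ("person2_a.jpg", "person2_b.jpg")]

    for model_name in ["VGG-Face", "Facenet", "ArcFace"]:
        hits = 0
        for img1, img2 in pairs:
            try:
                result = DeepFace.verify(img1_path=img1, img2_path=img2,
                                         model_name=model_name)
                hits += int(result["verified"])
            except ValueError:
                # With face detection enforced (the default), deepface raises an
                # error when it cannot find a face in an image; the detection
                # failure described above counts as a miss here.
                pass
        print(model_name, "recognition rate:", hits / len(pairs))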

These models can also be used as estimators of gender, age and ethnicity. For this experiment the model used was VGG-Face. Gender estimation worked quite well for European people, but not so well for Asian or African-American people; the most common mistake was classifying women from those groups as men. The remaining estimators (age, ethnicity) did not work well, and the division by ethnicity proved to be quite artificial and prone to errors.
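These demographic estimators can be queried in a similar way. The sketch below uses a placeholder image path, and the exact shape of the result (a dict or a list of dicts, “gender” versus “dominant_gender”) varies between versions of the library, so it is handled defensively:

    from deepface import DeepFace

    # "face.jpg" is a placeholder path for a single portrait photo.
    analysis = DeepFace.analyze(img_path="face.jpg",
                                actions=["age", "gender", "race"])

    # Newer deepface versions return a list with one entry per detected face.
    faces = analysis if isinstance(analysis, list) else [analysis]
    for face in faces:
        print("age:", face.get("age"))
        print("gender:", face.get("dominant_gender") or face.get("gender"))
        print("ethnicity:", face.get("dominant_race"))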

These results should not make us believe that this technology is useless. In fact, its recognition ability is superior to that of humans in many cases, and, of course, it works at a speed unattainable by any human being.

What can we do?

From our point of view, it is important to look at the possibilities that artificial intelligence offers as a tool and not dismiss its use when we detect problems like the ones shown here. The good news is that, once problems are detected, initiatives arise and studies are carried out to improve how it is used.

Biases in the models appear for multiple reasons: a poor choice of data, poor labeling of that data, human intervention in the process of creating and choosing models, and misinterpretation of the results.

Artificial intelligence, supposedly a technological advance free of prejudice, turns out to be a faithful reflection of our own biases and of the inequalities of the society in which it develops. As this interesting article well concludes, “it can be an opportunity to rebuild ourselves and not only achieve algorithms without bias, but also a more just and fraternal society.”

We have the technical tools to achieve it. Developers can find ways to test and improve their models; the AI Fairness 360 initiative is an example of this. But perhaps the most sensible thing is to apply common sense and use artificial intelligence intelligently.
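To give an idea of what such tooling looks like, below is a minimal sketch using the open-source aif360 package behind AI Fairness 360. The tiny hand-made table of recognition outcomes and the choice of skin tone as the protected attribute are illustrative assumptions, not data from the experiment described above:

    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.metrics import BinaryLabelDatasetMetric

    # Toy data: 1 = person correctly recognized, 0 = not recognized;
    # skin_tone is the protected attribute (1 = light, 0 = dark).
    df = pd.DataFrame({
        "recognized": [1, 1, 1, 1, 0, 1, 0, 0],
        "skin_tone":  [1, 1, 1, 1, 1, 0, 0, 0],
    })

    dataset = BinaryLabelDataset(df=df,
                                 label_names=["recognized"],
                                 protected_attribute_names=["skin_tone"],
                                 favorable_label=1,
                                 unfavorable_label=0)

    metric = BinaryLabelDatasetMetric(dataset,
                                      privileged_groups=[{"skin_tone": 1}],
                                      unprivileged_groups=[{"skin_tone": 0}])

    # Values far from 0 (parity difference) or far from 1 (disparate impact)
    # signal that one group is treated noticeably worse than the other.
    print("Statistical parity difference:", metric.statistical_parity_difference())
    print("Disparate impact:", metric.disparate_impact())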

An example of the latter can be found in this study, which concludes that the best option for recognizing people reliably is for humans and machines to collaborate. Another is the approach of the Spanish National Police to the use of the ABIS facial recognition system: “It is always a person, and not the computer, who determines whether or not there is a resemblance.”

Hilario Gomez Moreno, University Professor, Signal Theory and Communications, University of Alcala; Georgiana Bogdanell, Computer Forensic Analyst at BDO Spain and PhD candidate in Forensic Sciences, University of Alcala; and Nadia Belghazi Mohamed, Research Staff and Doctoral Student in Forensic Sciences, University of Alcala

This article was originally published on The Conversation. Read the original article.
