Facial Recognition

Juan Shogren Tutiven, Writer

Facial recognition is increasingly being used by governments around the world. From the top down, the technology is used to monitor innocent civilians of all walks of life, regardless of gender, race, or social status. Yet the technology is quite new. How can we, the civilians being watched, be sure it works accurately and properly the vast majority of the time?

The answer, quite simply, is that we can’t. When Apple released Face ID, it was plagued with complaints from users of color whose phones unlocked for other people of the same race as well as for their own faces, as reported in an article by mirror.co.uk.

Facial recognition, in general, has huge problems identifying faces accurately across races. Steve Lohr of the New York Times details this in his article “Facial Recognition Is Accurate, if You’re a White Guy.” According to him, if you are a white male, facial recognition has an error rate of 1% or less. For darker-skinned women, however, error rates climbed as high as 35%.
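To put those error rates in perspective, here is a simple back-of-the-envelope calculation. The rates come from the article above; the figure of 10,000 scans per group is a hypothetical chosen purely for illustration:

```python
# Illustrative arithmetic only: error rates from the article cited above,
# applied to a hypothetical 10,000 scans per demographic group.
SCANS = 10_000

error_rates = {
    "lighter-skinned men": 0.01,   # ~1% or less, per the article
    "darker-skinned women": 0.35,  # up to ~35%, per the article
}

for group, rate in error_rates.items():
    misidentified = int(SCANS * rate)
    print(f"{group}: ~{misidentified} misidentifications per {SCANS} scans")
```

Under those assumptions, the same workload produces roughly 100 errors for one group and 3,500 for the other, a thirty-five-fold gap.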

Part of the issue is that the data used to train the software is predominantly images of white males. This should be an easy fix, yet most facial recognition systems still suffer from it, whether through negligence by the developers or inadequate testing of the software. Cost may also be a factor: retraining the software from scratch can cost anywhere from thousands to millions of dollars, with no guarantee that it will “learn” the way its developers intend.
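The effect of a skewed training set can be sketched with a toy simulation. Everything here is invented for illustration: the 50-dimensional “face embeddings,” the nearest-centroid “recognizer,” and the 500-versus-5 sample counts are stand-ins, not how real systems work. The point is only that a model whose templates are estimated from far fewer examples of one group will tend to be less accurate on that group:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 50     # dimensionality of the toy "face embedding" (invented)
SEP = 2.0    # separation between the two identity classes (invented)
NOISE = 1.0  # per-dimension noise in each sample (invented)

# Two identity classes, separated along the first axis.
mean_a = np.zeros(DIM)
mean_b = np.zeros(DIM)
mean_b[0] = SEP

def sample(mean, n):
    """Draw n noisy embeddings around a class mean."""
    return mean + rng.normal(0, NOISE, size=(n, DIM))

def train_and_test(n_train, n_test=2000):
    # "Train": estimate each class centroid from n_train examples.
    c_a = sample(mean_a, n_train).mean(axis=0)
    c_b = sample(mean_b, n_train).mean(axis=0)
    # "Test": classify fresh samples by nearest estimated centroid.
    test_a = sample(mean_a, n_test)
    test_b = sample(mean_b, n_test)
    ok_a = np.linalg.norm(test_a - c_a, axis=1) < np.linalg.norm(test_a - c_b, axis=1)
    ok_b = np.linalg.norm(test_b - c_b, axis=1) < np.linalg.norm(test_b - c_a, axis=1)
    errors = (~ok_a).sum() + (~ok_b).sum()
    return errors / (2 * n_test)

err_majority = train_and_test(n_train=500)  # well-represented group
err_minority = train_and_test(n_train=5)    # underrepresented group
print(f"error with 500 training examples: {err_majority:.1%}")
print(f"error with   5 training examples: {err_minority:.1%}")
```

With only 5 training examples, the estimated templates are dominated by noise, and the error rate on the underrepresented group comes out far higher than on the well-represented one, mirroring, in miniature, the disparity the studies describe.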

But while this is a massive issue in and of itself, an arguably bigger one is how these inaccurate results are used, because they put people of color at greater risk of being unfairly singled out. For example, since many government agencies rely on facial recognition for some amount of policing work, inaccurate matches can result in minorities being wrongly accused of crimes they didn’t commit.

This not only damages the reputation of the accused individual but also has a whole swathe of other negative effects. It erodes trust in the police within minority communities, making those communities less likely to call on their local police in times of need and driving a wedge between them.

It can also fuel racist rhetoric, such as the oft-cited claim that African Americans, 13% of the US population, commit almost 50% of the crime. This statistic is often wielded by websites like Stormfront, a white supremacist site, to justify anything from segregation to lynching, reducing African Americans to little more than animals. This is of course horrendous, and inaccurate facial recognition threatens to add fuel to the fire, halting or reversing decades of hard-won legal progress.

All of these are serious problems with facial recognition technology, and that is before even considering its deliberate misuse, such as China’s use of it to identify ethnic Muslim minorities and ship them off to concentration camps.

We the people should rightly be wary of this technology and of those who wield it, and we should call for legislation that protects citizens from being harmed by this useful yet dangerous tool.