Bias In Artificial Intelligence Exposes The Dangers Of Facial Recognition Technology

Do you know how much time you have to make a first impression?

The answer is seven seconds. For some people of color, that opportunity is taken away due to faulty facial recognition technology. In this article, I will examine why, as well as its ramifications.

Artificial Intelligence & Facial Recognition Defined

Artificial Intelligence is the ability of computers to perform tasks that historically required human thinking, and, quite often, tasks that would cost more time and money than they're worth for humans to complete. Machine learning and deep learning fall under the umbrella of AI and are the technologies behind predictions, speech recognition, and facial recognition.

Facial recognition, which involves matching a face against millions of images in a database, is one of those tasks a computer can complete far more quickly than a human. The nose, mouth, jaw, and eyebrows form distinct points on a person's face, called nodal points, and the distances between them form a measurable pattern. Facial recognition algorithms are trained to recognize and match individuals based on the patterns of these points.
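To make the nodal-point idea concrete, here is a minimal Python sketch of distance-based matching. It assumes some upstream detector has already extracted landmark coordinates from each photo; the function names, the toy threshold, and the in-memory "database" are all illustrative, not any vendor's actual implementation.

```python
# Minimal sketch of distance-based face matching, assuming facial landmarks
# (nodal points) have already been extracted as (x, y) coordinates by some
# upstream detector. Names and thresholds here are illustrative only.
import numpy as np

def feature_vector(landmarks: np.ndarray) -> np.ndarray:
    """Turn an array of landmark points into the pairwise distances between them."""
    diffs = landmarks[:, None, :] - landmarks[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    # Keep only the upper triangle so each pair of points is counted once.
    i, j = np.triu_indices(len(landmarks), k=1)
    return dists[i, j]

def best_match(probe: np.ndarray, database: dict, threshold: float = 5.0):
    """Return the closest enrolled identity, or None if nothing is close enough."""
    probe_vec = feature_vector(probe)
    best_name, best_score = None, float("inf")
    for name, landmarks in database.items():
        score = np.linalg.norm(probe_vec - feature_vector(landmarks))
        if score < best_score:
            best_name, best_score = name, score
    return best_name if best_score < threshold else None
```

A real system uses many more landmarks (or learned embeddings) and a carefully tuned threshold, but the core step is the same: reduce a face to a pattern of distances and compare it against enrolled patterns.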

Current uses of facial recognition

As with anything, sometimes good intentions go awry, and facial recognition software has revealed both the benefits and the consequences of good intentions. China and Russia have used facial recognition software to track individuals who were supposed to remain quarantined during the COVID-19 pandemic. The same type of surveillance is used in the US to track children who have been kidnapped. You have personally witnessed facial recognition if you have seen your name appear on a picture on Facebook, or if you have used your face to unlock your smartphone. Unfortunately, some may have been victims of it during a routine traffic stop. I will explain later why I use the word victim.

Examples of incorrect matches and the dangerous effects

Although Amazon is known for its accuracy when it comes to delivering packages, the same accuracy did not hold for its facial recognition technology, Rekognition. In 2018, the ACLU conducted a test of Rekognition, and it incorrectly matched 28 members of Congress to other people's arrest photos. People of color make up only 20% of Congress but accounted for almost 40% of the false positive matches in the ACLU test. Matching people to arrest photos is one of the ways police departments use Rekognition to help locate suspects, and the technology has been marketed as a tool to help identify potential threats during traffic stops. I used the word victim above because a false match during a traffic stop robs a person of their seven seconds to make a first impression and can lead to unwarranted arrests.

It's one thing to be misidentified in an ACLU study, where the error has no effect; it's another when a grave mistake has lasting consequences. A facial recognition system incorrectly matched a Black man, Robert Julian-Borchak Williams, to a video surveillance still of someone shoplifting $3,800 worth of timepieces in October 2018. As reported by the NY Times, the theft occurred in a specialty shop in a trendy Detroit neighborhood, and the incorrect match led to the arrest of the wrong man. One thousand dollars and 30 hours later, he was released. The mistake will have lasting effects on his state of mind, as well as on his young daughter, who watched her father mistakenly taken away in handcuffs.

In 2015, CNN Business reported, the Google Photos app labeled two African Americans as gorillas. One of them was Jacky Alcine, who went public with the mistake by tweeting the mislabeled photo. Google apologized and has since removed "gorilla" as a category, so it can no longer be applied as a label.

As with all machine learning, the output is only as good as the training data provided. Google said it would get better at categorizing photos as more images are loaded, and as more people correct the tags.

Data has shown that more diverse companies yield higher profits. A more diverse group of data scientists would also be more likely to ensure better representation of Black people in the training data. Hiring managers must view people of color as being as much of an asset as everyone else.

Corporations' response to systems susceptible to errors

Amazon denied there was a defect in its technology in 2018 and instead blamed the ACLU's testing process. In June of 2020, Amazon announced in a blog post a one-year moratorium on police use of Rekognition. It did not state that the decision was due to any errors in the system; rather, the post said the company hopes the year-long moratorium will give Congress time to develop better regulations around the use of facial recognition technology. The decision came at a time when people are demanding better policing practices for African Americans.

Amazon is not alone in its decision; companies such as IBM and Microsoft have pulled their technologies as well. These companies may be well known, but they are not the most significant players in the facial recognition software industry, and many of those bigger players have not banned police from using their technology. That means many other programs, with varying levels of accuracy, are still being supplied to police departments.

How inaccuracies occur

Machine learning algorithms rely on data sets. A training data set consists of examples labeled with the correct answer, called an output, such as a specific person, animal, or object. From that training data, the algorithm learns a model that can be applied to new people or objects to predict what the correct outputs should be.
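As a rough illustration of that training-data-to-model loop, the toy Python snippet below fits a classifier on a handful of labeled examples and then predicts the output for a new one. The features, labels, and choice of scikit-learn's LogisticRegression are my own assumptions for the sketch, not a description of any production facial recognition system.

```python
# Toy example of the training loop: labeled examples in, a model out,
# predictions on new data afterward. Features and labels are made up.
from sklearn.linear_model import LogisticRegression

# Training data: feature vectors paired with the correct output (label).
X_train = [[0.2, 0.9], [0.1, 0.8], [0.9, 0.1], [0.8, 0.2]]
y_train = ["cat", "cat", "dog", "dog"]

model = LogisticRegression()
model.fit(X_train, y_train)

# The learned model is then applied to new, unseen examples.
print(model.predict([[0.15, 0.85]]))  # expected to predict "cat"
```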

Biased algorithms stem from training data that is not representative of all races and that often relies on information produced by systemic inequalities.

Inaccuracies are classified in multiple ways: one is a false negative, and another is a false positive. A false negative occurs when the system reports that there are zero matches to a person's face in the database when, in fact, the person's facial image is there. For example, Jane tries to unlock her smartphone but can't because the system doesn't recognize her as Jane.

Another inaccuracy is a false positive. That occurs when the system mistakenly says that a face matches someone in the database, or that there is an X% chance it matches, when in fact that person's face does not match anyone in the database. For example, someone else attempts to unlock Jane's smartphone and succeeds, even though she is not Jane.
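The smartphone example can be sketched in a few lines of Python. The "face embeddings" below are invented numbers and the threshold is arbitrary; the point is only to show how the same comparison can produce both kinds of error.

```python
# Toy illustration of false negatives and false positives in a face-unlock
# setting. The numeric "embeddings" are invented; a real system would compute
# them from images.
import numpy as np

def unlock(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 1.0) -> bool:
    """Unlock if the probe face is close enough to the enrolled face."""
    return np.linalg.norm(probe - enrolled) < threshold

enrolled_jane = np.array([1.0, 2.0, 3.0])

# False negative: Jane's own face drifts past the threshold (say, poor
# lighting), so her phone refuses to unlock for her.
jane_today = np.array([1.9, 2.8, 3.7])
print(unlock(jane_today, enrolled_jane))   # False -> a false negative

# False positive: someone else's face happens to fall inside the threshold,
# so the phone unlocks for the wrong person.
impostor = np.array([1.2, 2.1, 3.3])
print(unlock(impostor, enrolled_jane))     # True -> a false positive
```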

Unfortunately, data has shown that false-positive errors occur more frequently for people of color than for white people.
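One hedged sketch of how such a disparity could be measured: tally the system's decisions separately by group and compute each group's false-positive rate. The records below are invented for illustration; a real audit would use the system's actual match decisions and ground-truth identities.

```python
# Sketch of a per-group false-positive-rate audit. Each record holds the
# group label, the system's decision, and whether it was truly the same
# person. All values are invented for illustration.
from collections import defaultdict

# (group, system_said_match, is_actually_same_person)
results = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]

false_positives = defaultdict(int)   # wrong "match" decisions per group
true_non_matches = defaultdict(int)  # genuine non-matches per group

for group, predicted_match, same_person in results:
    if not same_person:
        true_non_matches[group] += 1
        if predicted_match:
            false_positives[group] += 1

for group in sorted(true_non_matches):
    rate = false_positives[group] / true_non_matches[group]
    print(f"{group}: false-positive rate = {rate:.2f}")
```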

Erroneous matches also occur when people's photographs are taken in poor lighting or at poor angles. As the saying goes, "garbage in, garbage out."

The effect facial recognition will have on human bias

The technological biases seen in facial recognition will only exacerbate the biases that already exist within our country and create a wider divide between the police and people of color. Police practices and funding are currently under review; now is the time to persistently advocate for better regulation of the technology police use.

AI will not replace the work of humans; it is a partnership. The mistaken arrest was not just the result of a bad algorithm; it was also the result of a lack of thorough police work. Google's mistake was poor oversight as well. Some people perceive AI as a threat to jobs; instead, it should be regarded as a system designed to make its human partners more efficient and effective.

Many have said we must eliminate bias to avoid passing it on to future generations. We must also realize that accurate AI depends on eliminating it as well. A lack of diversity in the planning, testing, and implementation phases exposes people to dangers that could leave impressions lasting a lifetime.
