Srivastava: Amazon pushing facial recognition technology to police only empowers racist institutions

Heena Srivastava, Columnist

Facial recognition technology has been developing since the 1960s and has taken off over the last 20 years. From tagging people on Facebook to searching faces in iCloud’s photo library, the technology adds another dynamic to how we interact with images. Now, developers are looking to sell it to law enforcement.

In 2016, Amazon released a facial recognition program called Rekognition. On Tuesday, the American Civil Liberties Union of California released a series of documents detailing how the company has been pushing the technology on police departments. Amazon has partnered with the Orlando Police Department in Florida and the Washington County Sheriff’s Office in Oregon, though both departments have expressed concern over the program’s accuracy.

While this technology has revolutionized social media and could become a powerful tool for law enforcement, we need to consider how it affects communities of color.

In 2015, software developer Jacky Alciné tweeted a screenshot of Google Photos identifying his African-American friends as gorillas. And in a study released by the MIT Media Lab earlier this year, facial recognition algorithms from Microsoft, IBM and Face++ were found to misidentify darker faces far more often than lighter ones. The algorithms misidentified the gender of only 1 percent of lighter-skinned males and 7 percent of lighter-skinned females, but of 12 percent of darker-skinned males and 35 percent of darker-skinned females.

Study leader Joy Buolamwini took interest in the topic while testing a facial recognition program as an undergraduate at the Georgia Institute of Technology. It would recognize her white friends’ faces but would not pick up on hers at all. Only when she put on a white mask would the program recognize her as a person.

This isn’t the only piece of tech crafted for white people by white people. In 2015, a viral video showed a hotel soap dispenser that worked for lighter skin tones but not darker ones. The sensor emits infrared light that, when it hits a light-colored object, reflects back and triggers the dispenser; darker skin absorbs more of the infrared light, so the sensor never fires. Also in 2015, Vox released a video explaining the invention of color film. Scientists determined and calibrated film chemistry using white female models, disregarding formulations that would accurately render darker tones. As a result, photos of people with darker skin were less flattering and rendered them less distinctly than their white counterparts. Because the tech field is dominated by white people, technology is being built without sufficiently considering people of color.
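For readers who want to see how that failure happens, here is a minimal sketch of reflectance-triggered logic in Python. It is illustrative only, not the dispenser’s actual firmware; the threshold value and the sample readings are hypothetical.

```python
# Illustrative model of a reflectance-triggered soap dispenser.
# The threshold and sample readings are hypothetical, chosen only
# to show how calibrating on lighter skin bakes in the bias.

REFLECTANCE_THRESHOLD = 0.5  # assumed cutoff, calibrated on lighter skin


def should_dispense(reflected_ir: float) -> bool:
    """Fire only when enough of the emitted infrared bounces back."""
    return reflected_ir > REFLECTANCE_THRESHOLD


# Lighter skin reflects more of the emitted light and clears the cutoff;
# darker skin absorbs more of it and falls below, so the dispenser stays silent.
print(should_dispense(0.8))  # lighter skin tone -> True
print(should_dispense(0.3))  # darker skin tone -> False
```

The bias lives entirely in the calibration: nothing in the logic mentions race, yet the chosen cutoff decides who the machine sees.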

While photographic technology has since corrected those errors, recognition software still has a long way to go. If facial recognition technology has not evolved enough to treat all faces equally, it is certainly not ready to be used by our nation’s police departments.

Police intend to use the technology to identify high-risk individuals in crowds. Rekognition project manager Ranju Das explained the program at an Amazon Web Services conference in Seoul. Using Orlando as an example, he described how the technology would analyze footage from cameras across the city. “We analyze that data in real time and search against the collection of faces that they have,” he said. “Maybe they want to know if the mayor of the city is in a place, or there are persons of interest they want to track.”
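For readers curious about what “search against the collection of faces” looks like in practice, here is a minimal sketch using Rekognition’s public API through the boto3 Python library. The collection name, frame file and threshold are hypothetical placeholders, and this is a sketch of the general approach Das describes, not a reproduction of any department’s actual pipeline.

```python
# Sketch of a face search against a watchlist collection using the
# public AWS Rekognition API (boto3). Collection name, frame file and
# threshold are hypothetical placeholders.
import boto3

rekognition = boto3.client("rekognition")

# A single frame captured from a city camera (placeholder file name).
with open("camera_frame.jpg", "rb") as f:
    frame_bytes = f.read()

# Search the frame against a pre-built collection of stored faces.
response = rekognition.search_faces_by_image(
    CollectionId="persons-of-interest",  # hypothetical collection
    Image={"Bytes": frame_bytes},
    FaceMatchThreshold=80,  # minimum similarity, in percent
    MaxFaces=5,
)

# Each match carries a similarity score and the stored face's metadata.
for match in response["FaceMatches"]:
    face = match["Face"]
    label = face.get("ExternalImageId", face["FaceId"])
    print(f"{label}: {match['Similarity']:.1f}% similarity")
```

Everything downstream hinges on that similarity threshold: a cutoff that performs worse on darker faces turns this loop into a generator of false matches.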

If the technology cannot accurately identify individuals up close, however, there is no reason it should be trusted to pick them out of a crowd. Misidentification could mark innocent people of color as high-risk, giving marginalized populations even more reason to fear authority.

Institutional racism continues to find its home in our nation’s police departments. According to Mapping Police Violence, black people are three times more likely to be killed by police than white people. Since Trayvon Martin was killed in 2012, police brutality has been at the forefront of civil rights movements.

The Anti-Defamation League has conducted diversity training for law enforcement agencies across the nation, including the FBI, the Drug Enforcement Administration, the New York State Police and the Houston Police Department. Activist groups have continued initiatives to tear down practices that further institutional bias. Widespread use of underdeveloped facial recognition technology, however, would only re-entrench the systemic racism they have worked so hard to fight.

Facial recognition technology could help protect the public from convicted offenders and high-risk individuals. But this powerful technology carries great risks for marginalized populations and should be fully deployed only after thorough accuracy testing. While it can be used for good, without proper vetting it can become a weapon that perpetuates discriminatory institutions.

Heena Srivastava is a Medill freshman. She can be contacted at [email protected]. If you would like to respond publicly to this column, send a Letter to the Editor to [email protected]. The views expressed in this piece do not necessarily reflect the views of all staff members of The Daily Northwestern.