Can Facial Recognition Overcome Its Racial Bias?
The dystopian surveillance state of science fiction is within reach, and some privacy activists argue that it's already here. Advances in facial recognition have sparked fear and uncertainty over misuse and civil liberties infringements, but with the alarm has come a wave of activists offering solutions.
What is facial recognition?
Facial recognition is a form of artificial intelligence. Artificial intelligence broadly refers to the development of computers to perform tasks that would normally require human intelligence. If you have an email account, you are indebted to AI for directing spam to a separate folder instead of letting it flood your inbox: the computer learned to recognize the patterns of spam and filter messages accordingly. If you have a YouTube account or use a music streaming service, your personalized recommendations are the product of an AI algorithm.
For facial recognition, algorithms are written to measure the geometry of someone's face, compare those unique measurements to a database of faces, and return potential matches with varying degrees of certainty.
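To make that process concrete, here is a simplified sketch in Python of matching a "probe" face against an enrolled database. The measurement vectors, names, and similarity threshold are invented for illustration; commercial systems use far richer face representations and their own proprietary matching algorithms.

```python
import math

def similarity(a, b):
    """Cosine similarity between two face-measurement vectors (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def find_matches(probe, database, threshold=0.95):
    """Return (name, score) pairs for every enrolled face similar enough to the probe."""
    candidates = [(name, similarity(probe, vec)) for name, vec in database.items()]
    return sorted([c for c in candidates if c[1] >= threshold],
                  key=lambda c: c[1], reverse=True)

# Hypothetical 4-number "measurements"; real systems use vectors with hundreds of values.
enrolled = {
    "person_a": [0.62, 0.31, 0.88, 0.14],
    "person_b": [0.59, 0.33, 0.85, 0.17],
    "person_c": [0.20, 0.91, 0.45, 0.70],
}
probe_face = [0.60, 0.32, 0.86, 0.15]
print(find_matches(probe_face, enrolled))  # ranked potential matches with confidence scores
```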
Facial recognition can offer convenience, as it does for the millions of people who use their faces to unlock their iPhones. It can also be used as a surveillance tool, as China's government has done to track the Uighurs, a largely Muslim minority it has been sending to detention camps.
As AI embeds itself into daily technological life, privacy activists and technology enthusiasts agree that the powerful tool is here to stay. But implementing the technology raises complex problems, as well as disagreements about which solutions are actually solutions.
Resolving the bias in the technology
Current facial recognition technology exhibits racial and gender bias. A 2018 MIT study that used three commercial facial recognition programs to determine gender found that error rates were under 1% for lighter-skinned men but climbed to nearly 35% for darker-skinned women.
In 2018, Amazon's facial recognition tool, Amazon Rekognition, falsely matched 28 members of Congress with mugshot photos in a test run by the ACLU. Lawmakers of color accounted for 39% of the false matches even though they make up only about 20% of Congress.
Brian Brackeen, founder of facial recognition technology company Kairos, says that it's possible to train the bias out of the technology.
How?
Facial recognition algorithms improve by being supplied large datasets of faces. Existing facial recognition products work well on "pale males" because the algorithms were trained on datasets made up mostly of White men, reflective of the tech industry itself. An MIT and Stanford University study found that one widely used dataset was overwhelmingly male and White. Brackeen says this bias toward the people who make the technology is common; similarly, algorithms built in Asia tend to perform well on Asian males and not as well on White males.
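One way researchers expose this kind of skew is to score a model's errors separately for each demographic group rather than reporting a single overall accuracy number. The sketch below illustrates that kind of audit with made-up results; the numbers are not from any actual study.

```python
from collections import defaultdict

def error_rates_by_group(predictions):
    """Compute the misclassification rate separately for each demographic group.

    `predictions` is a list of (group, predicted_label, true_label) tuples,
    e.g. per-image results from running a gender classifier on a benchmark set.
    """
    totals, errors = defaultdict(int), defaultdict(int)
    for group, predicted, actual in predictions:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical results: a model that looks accurate overall can still fail
# far more often on under-represented groups.
results = (
    [("lighter-skinned men", "M", "M")] * 99 + [("lighter-skinned men", "F", "M")] * 1 +
    [("darker-skinned women", "F", "F")] * 70 + [("darker-skinned women", "M", "F")] * 30
)
print(error_rates_by_group(results))
# {'lighter-skinned men': 0.01, 'darker-skinned women': 0.3}
```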
To solve the bias, the algorithms need to be supplied a more diverse set of faces to learn from, but high-quality datasets of people of color aren't always available. For example, cellphone photos of dark-skinned people are unreliable because the light sensors are inconsistent and rarely accurately reflect the person's skin tone, Brackeen says. Additionally, many companies struggle to find enough high-resolution photos that can be consensually added to training datasets; one prominent dataset drew backlash because it was full of faces scraped from web photos and videos shared under Creative Commons licenses.
That's why Brackeen supports the use of Generative Adversarial Networks to train facial recognition algorithms. GANs are able to create computer-generated faces made from a composite of photos of real people. The resulting faces don't belong to any real person and can be generated to meet specifications such as age, race, gender, and photo quality.
"We can create a million Black men, a million Asian men, a million whatever, and just load the AI with those generated people," Brackeen says. Using that method, he believes "there will be no bias in facial recognition AI three years from now."
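Brackeen's proposal amounts to using a generator to fill the demographic gaps in a training set. The sketch below is a rough illustration of that idea, assuming a hypothetical generate_synthetic_face function standing in for a trained GAN; it is not Kairos' actual pipeline.

```python
import random

def generate_synthetic_face(group):
    """Stand-in for a GAN generator: in a real pipeline this would return a
    computer-generated face image matching the requested demographic specs."""
    return {"group": group, "synthetic": True, "seed": random.random()}

def balance_training_set(real_counts, target_per_group):
    """Top every demographic group up to the same size with generated faces."""
    synthetic = []
    for group, count in real_counts.items():
        needed = max(0, target_per_group - count)
        synthetic.extend(generate_synthetic_face(group) for _ in range(needed))
    return synthetic

# Hypothetical counts for an unbalanced dataset of the kind the article describes.
real_counts = {"white_men": 800_000, "black_men": 150_000, "asian_women": 90_000}
extra = balance_training_set(real_counts, target_per_group=1_000_000)
print(len(extra))  # number of generated faces needed to even out the dataset
```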
But even if facial recognition AI becomes a completely accurate tool, Brackeen believes the technology should never be used by law enforcement. He sees facial recognition as a way to boost convenience: speeding up checkout at the grocery store (think Amazon Go stores), or shortening the wait to buy passes or board rides at Disneyland. In those instances, the consumer would opt in to using facial recognition, and the consequences of the technology failing would be minor annoyances, such as a longer checkout process or slower lines for rides.
Government use of the technology
In the hands of law enforcement, however, the tool could prove dangerous. If someone pulled over for speeding is misidentified via an officer's body cam feed as a person in a dangerous-persons database, that situation could turn fatal.
"These impacts are too great," Brackeen says.
It's unclear how many law enforcement departments have access to facial recognition technology, or how those departments that do have access use the tool. The Georgetown Law Center on Privacy and Technology was the first group to present a comprehensive national picture of police use of the technology. The team submitted public records requests to more than 100 law enforcement departments, finding that there was virtually no regulation of the facial recognition technology those departments were using.
"We ourselves were surprised by some of the findings we had, mostly in terms of the sheer scope of use of facial recognition at the state and local level across the country," says Clare Garvie, a senior associate with the Georgetown Law Center on Privacy and Technology. "Also, [we were surprised] at the complete absence of any legally imposed regulations or even policies implemented by law enforcement to constrain its use."
Garvie's work revealed that at least 52 of the 100 police departments queried have access to, use, or have previously used facial recognition technology. Several large police departments, including those in Los Angeles, Chicago, and Dallas, either had access to or were exploring the use of real-time facial recognition that can continuously scan pedestrians' faces and compare them against databases such as mugshots or driver's license photos.
Additionally, Georgetown's research found that because of the lack of policy and limits on how the technology is used, officers have been feeding the systems questionable inputs, such as edited photos and celebrity look-alikes, and earnestly using the output.
For example, in 2017, a pixelated image from a surveillance camera of a suspect stealing beer from a CVS returned no matches when run through the NYPD's facial recognition technology. Someone mentioned that the suspect resembled actor Woody Harrelson, so a picture of Harrelson was run through the program, and the detectives picked someone they believed looked like the suspect from the resulting matches. That celebrity "match" was sent back to the investigating officers, and an arrest was made.
The stakes are too high in criminal investigations for the police to be using probe photos and doppelgängers, Garvie says.
Additionally, because most facial recognition programs used by law enforcement search mugshot databases, people of color, particularly young Black men, are overrepresented in the possible matches. Even if the technology's racial bias problem is solved, the use of facial recognition within the existing criminal justice system could just replicate the over-policing of Black and Brown communities.
"It's not appropriate to ask individuals to affirmatively fight for our right to privacy against government intrusion," Garvie says. "That said, I think the single most important role individuals can play is demanding transparency and accountability from their local and state officials, and pushing their legislators for the legislation and regulation that they deem appropriate in this field."
Garvie advocates for a moratorium, or temporary halt, on the government's use of facial recognition technology while researchers and lawmakers have time to catch up and consider the impacts of this fast-moving technology. San Francisco went a step further last May, approving a ban on the use of the technology by city agencies, including police.
Immediately after the ordinance was put in place, city employees scrambled to remove the face unlock feature from their government-issued iPhones. The language in the ordinance was then amended to allow non-surveillance facial recognition, like face unlock on the iPhone, to be used, but Garvie highlights it as an example of why it's so difficult to develop legislation that anticipates the future without being too broad and cutting off AI uses that are more helpful than hurtful.
"It's very challenging to think about legislation that regulates exactly what you want it to, that doesn't leave loopholes but isn't also overinclusive," Garvie says. "It's a challenge, and it's never going to be perfect."
How the ban will affect the San Francisco Police Department's criminal investigations, though, is uncertain. While the SFPD did not respond to YES!'s request for comment, the Georgetown Law report found the department had access to facial recognition software and could search a half-million to a million mug shots.
The San Francisco Sheriff's Department has more autonomy under California state law. Nancy Crowley, the director of communications for the sheriff's department, says the ordinance has not impacted the department's operations and that the department may still use facial recognition within the county jails.
"I don't know to what extent we use it," Crowley says.
Activists such as Evan Greer, director of operations at Fight for the Future, believe the San Francisco ban is a step in the right direction, but say other government efforts to curtail facial recognition use are insufficient.
"We don't think there are meaningful limitations you can put on this technology," says Greer.
Greer believes that while developing the language for legislative regulation may be tricky, an outright ban on facial recognition surveillance by government, law enforcement, and private corporations sidesteps the concerns about future-proofing and overinclusion.
Fight for the Future, a nonprofit grassroots advocacy group, has built a national campaign to ban facial recognition, making headlines when it got major music festivals to pledge not to use the technology at the festivals. The group has since expanded to organizing on college campuses and supporting movements by acting as an organizing resource.
Greer and Fight for the Future's focus is on slowing down the rapid adoption of facial recognition and giving lawmakers time to do their jobs: put a policy in place that regulates its use. In the meantime, Greer's opposition to facial recognition surveillance is not quelled by the removal of bias and the increasing accuracy of the technology.
"If we have systems in place that make it possible to enforce laws 100% of the time, then there is no space for us to test whether those laws are just," Greer says, highlighting civil rights, LGBTQ rights, and the legalization of marijuana as examples. "For me, at a philosophical level, privacy isn't about what you have to hide, it's about our ability to evolve as a human society."
Increasing the accessibility of advocacy
Because facial recognition is developed by small groups of people within the tech field, the technology operates within a black box. Yet its impact falls largely on the public, who can't see inside the black box and don't have the information to fully understand its effects.
The end of the Georgetown report offers recommendations for legislatures, law enforcement, facial recognition companies, and community leaders. While the suggestions for law enforcement and lawmakers are fairly standard (such as requiring officers to have probable cause before running a photo through facial recognition), the section for community leaders is unique because it provides individuals with the questions they need to ask to get information from their police department: Who is enrolled in the police face recognition database? What legal requirements must be met before officers run a face recognition search? How does the agency's face recognition use policy protect free speech?
These questions reveal the need to involve people at every level. The Georgetown report reflects only 100 of the 18,000 police departments across the country. Garvie believes getting more people prepared to advocate for themselves is critical to uncovering what we still don't know about law enforcement's use of facial recognition.
Preparing people to advocate for themselves is also why Brooklyn-based artist, researcher, and technologist Mimi Onuoha teamed up with Mother Cyborg, aka Detroit-based artist, DJ, and educator Diana Nucera, to make an accessible zine about AI, A People's Guide to AI.
The two met at a conference about the future of AI, where conversation centered on the ethics and potential pitfalls of the technology. They bonded over the question of how to share this information with their communities. In Detroit, where Nucera lives, 40% of people don't have access to the internet, so how is she to engage her community in complex AI topics when she is still working on getting people online?
"It was clear to me that we were in this room of pretty privileged folks in the field, talking about these things that really my neighbors should be talking about," Nucera says.
The two collaborated to distill the most essential concepts of AI into a simple format that could be used for self-teaching or as a classroom tool. The zine not only gives communities the tools to understand and talk about AI but also fosters conversation about the growing technology that isn't dominated by a fatalist point of view.
"There must be a way to talk about this that doesn't end in this strange 'robots are going to kill us' way," Onuoha says. "We know that there are alternate conversations we can have."
The 84-page zine engages in that alternate conversation with exercises and prompts that demystify algorithm building by having readers identify algorithms in their own lives. The format is digestible, building up a base of knowledge to reach more difficult ideas, like whether algorithms can be used to quantify and judge people in an ethical way.
The booklet sold out of physical copies twice, and Onuoha and Nucera continue to run workshops for non-tech and tech people alike, leading critical discussion on what regulation and data collection should look like and who is being served or harmed by the use of AI and facial recognition.
Ultimately, the zine provides enough information to act as a launchpad to engage in those critical conversations. Both Onuoha and Nucera want to make sure that their neighbors, mail carriers, and anyone else they may see in their community have the language and information they need to participate in the discussion about AI surveillance being implemented in their community.
"They are the only ones who can say how it's impacting them and whether or not they want it," Nucera says. "The rest of it is just assumption. So how do you get people to join those conversations? They've got to learn the talk."
Isabella Garcia
is a former solutions reporter and former editorial intern for YES! Media. Her work has appeared in The Malheur Enterprise and YES! Magazine. Isabella is based in Portland. She can be reached at isabellagarcia.website.