Brandi Collins-Dexter, Senior Campaign Director at Color Of Change
Surveillance weapons are often deployed in our communities without our knowledge or consent, much less a warrant. Stingrays, license plate readers and drones are just a few of them. I live in Baltimore, which is ground zero for new surveillance tools before they're rolled out nationwide: not only is there a police helicopter constantly flying over our heads, but the city also just started what's known locally as a "spy plane" program. Facial recognition specifically is alarmingly normalized in everyday life, whether it's at your local DMV, grocery store or Black Lives Matter protest.
If an eyewitness on the stand said, "I guess it's probably the defendant, but I don't really know for sure; can you put him in different lighting?" many of us would not convict on that basis. Yet every day, people are convicted based on facial recognition identification, and the tool's accuracy is even less certain. Facial recognition, like many surveillance tools, is deeply flawed, built with racial bias and often deployed with impunity. It does not make us safer, just more targeted.
At Color Of Change, we believe facial recognition should be banned, full stop, and removed from government buildings like the DMV. There should be full public disclosure of every police and law enforcement agency that has the technology, along with the companies that created the tools. Finally, there should be mandatory disclosure whenever the technology is used to bring charges against someone, and defendants should be given full access to the underlying algorithmic data and code.
Susan Etlinger, Senior Analyst at Altimeter
The people to ask are the scholars and scientists, mostly Black, mostly women, who have pioneered this field of study, and who are most qualified to lead these conversations. Ruha Benjamin, Joy Buolamwini, Renée Cummings, Timnit Gebru, Yeshimabeit Milner, Mutale Nkonde, Safiya Noble, among many others, have made it abundantly clear that facial recognition is rife with potential harms, especially for vulnerable and marginalized populations and particularly when it is used by law enforcement. So the first thing lawmakers and companies need to do is get the right people at the table.
Assuming the right people are in the room, there are a number of factors to consider:
- Is the system based on sufficient, high-quality data? Where did the data come from? Is it auditable? How long is it stored for, how is it secured, and who else has access to it?
- Does it even work? It's been well established that facial recognition tends to incorrectly identify people with darker skin tones more often than people with lighter skin.
- Does it explain how it came to a decision? Are those factors disclosed (for example, in a court of law)? Do people have the ability to defend themselves against an algorithm? Some companies will say it's their intellectual property, so … no.
- How is it being used, and what's the potential for harm? It's one thing to use it to try to locate a missing child; it's another entirely to use it to justify an arrest or put it on a drone to identify protesters at a demonstration.
- Is it being used correctly and with proper oversight? In one case, a police department ran a photo of actor Woody Harrelson through a facial recognition search program, noting that the actor resembled the suspect. There have also been cases of law enforcement agencies that used artist sketches to search for matches, or edited or blurred photos to swap out physical attributes to search against.
- How much weight is given to facial recognition in decision-making? Is it the primary method of identification, or is it used in the context of more reliable and discoverable evidence?
These examples are just a starting point. But before we worry too much about quelling fears about facial recognition, we need to face those fears head-on. The way we build and use technology reveals who we are as a society, and it's long past time to take a good look in the mirror.
Renée Cummings, AI Ethicist, Criminologist, Criminal Psychologist
Facial Recognition Technologies (FRTs) make mistakes. The performance of FRTs varies with the diversity of the population on which they are applied, real-world conditions differ from test conditions, and researchers have highlighted how error is distributed unevenly across demographics. It is about trust. For there to be trust, there must be accountability, transparency, explainability and equity. The scorecard of FRTs shows a dangerous deficit.
FRTs, like policing, are now in a precarious position when it comes to public trust and public confidence. The unethical application of FRTs by police, government or the private sector is of great public concern. Decisions by some big tech companies to discontinue development or place a moratorium on deployment further erode the trustworthiness of FRTs and raise grave concerns about the motives behind their design. The reputation of FRTs has also taken a severe blow.
FRTs have long been perceived as digital expressions of structural racism and as forms of digital profiling, digital marginalization and digital victimization. As calls are made to defund the police and radically reimagine policing, communities of color and informed citizens will not accept excessive, intense and intrusive surveillance. There is evidence of the disproportionate impact of FRTs on communities of color and of the potential for misuse of FRTs in predictive policing, police investigations and many other situations where privacy and security could be compromised.
The fact remains that FRTs are in use. Where applications cannot be banned outright, stringent legislation is required to regulate FRTs: civil liberties and civil rights must be protected, robust and rigorous ethical frameworks must be enforced, and community oversight must protect the public interest.
Brian Brackeen, General Partner at Lightship Capital; Founder of Facial Recognition Firm, Kairos
While running Kairos, I took the position that facial recognition vendors should serve either commercial or government clients, never both. Commercial firms don't have databases of everyone's images, and therefore can't simply match an image of you against a random image.
Governments have all of our driver's license images and passport photos and can match us all. Their vendors should be considered public-serving entities, like a utility, and be regulated as such. Benchmarks could be created to eliminate bias and other issues with facial recognition systems that serve government.
This delineation would, I believe, create some additional comfort for the public and privacy groups. At the very least, it would create greater transparency.
Riana Pfefferkorn, Associate Director of Surveillance and Cybersecurity at the Stanford Center for Internet and Society
My primary worry about corporate information-gathering on individuals has always been about governments' ability to access the information the companies collect. This is especially a concern with facial recognition data, given the severe privacy ramifications of putting a pervasive surveillance system in law enforcement's hands. When tech companies say they won't sell their FR tech to law enforcement, that sounds nice, but it means nothing unless there are strict controls on "downstream" access by law enforcement. If IBM refuses to sell to a police department, but does sell to some middleman customer that then turns around and sells the FR tech to the police, that defeats the whole point.
We saw this "middleman" problem in 2016, when Twitter gave access to user data to a company called Geofeedia. Geofeedia sold a social media surveillance system to law enforcement, despite Twitter's rules against the use of user data for surveillance. Twitter cut off Geofeedia once this abuse was discovered, but it illustrates how companies can be blind to downstream uses that are nominally prohibited. Cambridge Analytica is another good example. Those instances show that the middleman issue is a hard nut to crack.
Any company vowing not to sell FR tech to governments needs to explain how they're going to prevent these issues from happening and, if they fail in prevention, how they think they could remediate the damage after the fact. If companies don't come up with satisfactory answers to these questions, well, maybe they need to revisit whether to work on FR tech at all.
Emmanuel Schalit, CEO at Dashlane
Like fingerprints, facial recognition is one of many biometric technologies that carry a huge risk: if your information is stolen, it is compromised forever. That makes the concept of a centralized store of biometric information inherently dangerous for society. You can change a compromised password; you cannot change your face or your fingerprint. The problem isn't the tech, it's the data — and who controls it.
However, unlike fingerprints, facial recognition can be applied from a distance, across a crowd or without the individual being aware that they are being authenticated, opening up the door to mass surveillance scenarios that are far from science fiction in authoritarian countries today.
When we see some companies pausing or canceling the sale of the technology to government and police, it shows there is a legislative void concerning enough for them to forgo potential revenue. Yet with Palantir's software already deployed to an undisclosed number of police forces across the United States, the question is not simply whether we must prevent the use of facial recognition by law enforcement; it's how to limit its damage and protect the rights of citizens.
The most urgent action — already taken by the EU — is for governments to clearly define the limits of the technology for themselves, for their police forces and for private entities. People should know who knows their face.
Until lawmakers codify protections against the misuse of data acquired through facial recognition, such as using it as a basis for issuing warrants or admitting it as evidence in a court of law, corporations must set the standard by defending employees' right to privacy. CEOs should be clear that they will not allow data acquired through facial recognition to become the new pre-hire "checking your Facebook page." There can be zero tolerance for misuse of personal data. While there may indeed be value in this innovation, we will only be able to reap its benefits if we refuse to allow any moral ambiguity in protecting the privacy of our colleagues.
Lourdes M. Turrecha, Founder at The Rise of Privacy Tech
Companies developing facial recognition technology must design and engineer it with privacy and ethics in mind. This means having a legitimate reason for collecting biometric and other personal information, being transparent about the uses of that information, and having state-of-the-art security controls in place. It also means having an appropriate retention plan, which includes deleting information that no longer serves any legitimate purpose.
These companies must have a process in place for honoring individual data subject requests to access, delete, correct or port their data, which would go a long way toward quelling individuals' fears.
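Neither practice requires exotic engineering. Below is a minimal, purely illustrative Python sketch of the two mechanisms described above: a retention sweep that deletes records past their justified lifetime, and handlers for the four data subject requests. Every name here (BiometricRecord, PrivacyAwareStore, the 90-day window) is a hypothetical assumption for illustration, not a description of any vendor's actual system.

```python
# Illustrative only: a hypothetical store that enforces a retention plan
# and honors data subject requests (access, delete, correct, port).
from __future__ import annotations

from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
import json


@dataclass
class BiometricRecord:
    subject_id: str
    purpose: str                      # the documented, legitimate reason for collection
    collected_at: datetime
    attributes: dict = field(default_factory=dict)


class PrivacyAwareStore:
    """In-memory stand-in for a real datastore; all details are hypothetical."""

    RETENTION = timedelta(days=90)    # assumed retention window, not a legal standard

    def __init__(self) -> None:
        self._records: dict[str, BiometricRecord] = {}

    def add(self, record: BiometricRecord) -> None:
        self._records[record.subject_id] = record

    def retention_sweep(self, now: datetime | None = None) -> int:
        """Delete records past the retention period; return how many were removed."""
        now = now or datetime.now(timezone.utc)
        expired = [sid for sid, rec in self._records.items()
                   if now - rec.collected_at > self.RETENTION]
        for sid in expired:
            del self._records[sid]
        return len(expired)

    # Handlers for the four data subject requests named above.
    def access(self, subject_id: str) -> BiometricRecord | None:
        return self._records.get(subject_id)

    def delete(self, subject_id: str) -> bool:
        return self._records.pop(subject_id, None) is not None

    def correct(self, subject_id: str, updates: dict) -> bool:
        rec = self._records.get(subject_id)
        if rec is None:
            return False
        rec.attributes.update(updates)
        return True

    def port(self, subject_id: str) -> str:
        """Export a subject's data in a machine-readable (JSON) format."""
        rec = self._records[subject_id]   # raises KeyError for an unknown subject
        return json.dumps({
            "subject_id": rec.subject_id,
            "purpose": rec.purpose,
            "collected_at": rec.collected_at.isoformat(),
            "attributes": rec.attributes,
        })
```

The design point of the sketch is that retention and deletion are enforced in code that runs on a schedule, rather than promised in a policy document, which is what makes such commitments auditable.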
Lastly, these companies need to provide as much transparency as possible about their data processing, beyond a privacy policy's fine print. This could come in the form of just-in-time notification features built into the product or a publicly available privacy risk assessment report that details the risks that their facial recognition tech poses and the mitigating controls they put in place to address those risks.
These companies can choose to treat privacy as a competitive advantage instead of a mere compliance hurdle. Companies that fail to do so miss out on the market opportunities presented by the rise of privacy innovation.
Mark Surman, Executive Director at Mozilla Foundation
There are widespread, legitimate concerns about the accuracy of facial recognition technology. We should not be using biased technology to make decisions.
To quell fears and concerns about facial recognition technology, we need to improve the fundamental transparency and accountability around its use. Is there any way to publicly validate that the facial recognition tools in use are effective for the situations in which they are deployed? Largely, no. Do people know when it is being used and for what purpose? Overwhelmingly, no. Is there a method of redress or accountability for decisions made with facial recognition technology? Again, no.
When we have more transparency into how facial recognition is deployed and developed, it opens the door to further interventions — from boycotts to regulations.
Source: Protocol