No, Not Every Technology That Uses Your Face Is Facial Recognition. It’s Time We Started To Get Precise

Key Words: Facial Recognition, National Security, Policing, Privacy, Technology

 

Last month, US airline JetBlue came under scrutiny for the facial recognition technology it has deployed for check-in purposes since November of last year, after a passenger started asking questions about the program on social media. JetBlue responded that this particular program is run in cooperation with the United States Department of Homeland Security (DHS) and that the company does not actually store passengers’ data. Nevertheless, the questions the passenger asked are valid, especially in light of the current US government’s plans to expedite the expansion of this program: who has access to what information? Are opt-out procedures clear? In this case, questionable at best. And perhaps most importantly of all, what exactly is facial recognition?

Facial recognition technology is deployed everywhere to monitor your every move. Or so the story often goes. Unless you live in China, that’s not exactly true. And if we are to have an effective debate about what we, as a society, should deem acceptable, it’s time we started to get a little more precise about what facial recognition truly is.

Facial recognition technology has gotten better over the years, and cheaper to deploy. That also explains why so much has been written about it, about bias in its underlying algorithms, and about the threat it poses to privacy. However, not all of this technology is created equal, and not all of it is capable of the same things. If you were planning to buy a car, you wouldn’t compare the station wagons you are looking into with a Formula 1 race car. The same applies to facial recognition technology: the algorithms used to unlock your phone with your face are not the same as the ones used by law enforcement and security agencies to pick a wanted individual out of a crowd.


Detecting Faces

So what, then, is the difference? At the most basic level, there is technology that detects faces: the software does nothing more than distinguish between a face and not-a-face. This technology can help you take better selfies, for example by automatically focusing on the face. It can also be extended with algorithms that claim to be able to distinguish emotions. Those working in marketing or advertising, in particular, can make good use of this type of technology to record how someone feels upon seeing their product or advertisement. A particularly poignant campaign against domestic violence, for example, used face detection technology to count how many people looked at a digital billboard of a battered woman. The more people looked, the more her bruises faded. The message of the campaign: don’t look away.
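For the technically curious, the sketch below shows roughly what face detection amounts to in practice, using OpenCV’s freely available pre-trained face detector. The image file name is illustrative. Note what it does and does not do: it finds faces, but identifies no one.

```python
# A minimal sketch of face *detection*, assuming OpenCV is installed
# (pip install opencv-python) and a photo named "selfie.jpg" exists.
import cv2

image = cv2.imread("selfie.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Load the pre-trained frontal face detector that ships with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

# Returns bounding boxes (x, y, width, height) for each detected face.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# The software only answers "face or not-a-face" -- it names nobody.
print(f"Detected {len(faces)} face(s)")
```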


Comparing Faces

At a more sophisticated level, the technology compares faces, quite often in a live setting. You might unlock your phone by looking into the camera; your phone then compares the image it captures at that moment with one saved when you opted into the feature. The same happens when you use an e-gate at the airport. Biometric passports store your picture on the embedded chip as well as on the photo page, and the e-gate compares that stored picture to the one its camera takes of you. Other applications include access control to laboratories (especially those with hazardous materials), restricted (military) areas, and research and development (R&D) facilities.
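In code, this kind of one-to-one comparison, often called verification, might look something like the sketch below. It uses the open-source face_recognition library; the file names are illustrative assumptions, and a real e-gate or phone naturally runs far more sophisticated, certified systems.

```python
# A minimal sketch of one-to-one face *verification*: compare a live
# capture against a single stored reference image, the way a phone
# unlock or airport e-gate does. Assumes face_recognition is installed
# and both image files exist; names are illustrative.
import face_recognition

# The reference image you enrolled with (e.g. the photo stored on a
# passport chip, or the one saved when you set up face unlock).
enrolled = face_recognition.load_image_file("enrolled.jpg")
enrolled_encoding = face_recognition.face_encodings(enrolled)[0]

# The image captured live at the gate or by the phone camera.
live = face_recognition.load_image_file("live_capture.jpg")
live_encoding = face_recognition.face_encodings(live)[0]

# True if the two face encodings fall within the match threshold.
match = face_recognition.compare_faces([enrolled_encoding], live_encoding)[0]
print("Same person" if match else "No match")
```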

You might think that this technology is widely used to solve missing persons cases, or in court to prove who committed a crime. But that is not the case: some courts and forensic institutions doubt the reliability of the technology. The Netherlands Forensic Institute (NFI), for example, analyses digital images and provides expert witness testimony in court. It does not believe that the technology as currently developed can accurately assess whether a suspect in a photograph is the same person as the one appearing in, say, the CCTV recording of a store robbery. Much of that has to do with the quality of the images it receives, which is precisely why, at the e-gate, you are required to stand still and look straight ahead while your face is well lit and unobstructed. Real life does not give investigators such advantages, and the NFI therefore relies on experts who undergo regular training to determine whether the person appearing in two images is one and the same. And it leaves nothing to chance: three independent experts must reach the same conclusion for their testimony to be considered conclusive.


Facial Recognition

Arguably the most invasive use of this type of technology is what most people call facial recognition. It relies on cameras equipped with facial recognition software that has access to a database. That database is often overlooked in the public debate on the use of this technology, yet it is perhaps the most important part of the system. Who can access which databases, and for what purposes, depends largely on where you live. In the European Union, the legislation governing whose images and data may be stored in databases, and for how long, is rather strict. This legislation is, generally speaking, more relaxed in the United States. The most extreme case is probably China, where photos and personal data of large parts of the population are stored in state databases connected to countless facial recognition-capable cameras.
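A rough sketch of what such a one-to-many search might look like is below, again using the open-source face_recognition library. The database contents, the file names, and the 0.6 distance threshold (the library’s default) are illustrative assumptions; real deployments search databases of millions, not two.

```python
# A minimal sketch of one-to-many facial *recognition*: a probe face is
# searched against a database of enrolled faces. The database is the
# crucial, often-overlooked component. All names here are hypothetical.
import face_recognition

# Hypothetical enrolled database: identity -> reference image on disk.
database = {"person_a": "person_a.jpg", "person_b": "person_b.jpg"}
known_encodings, known_names = [], []
for name, path in database.items():
    image = face_recognition.load_image_file(path)
    known_encodings.append(face_recognition.face_encodings(image)[0])
    known_names.append(name)

# The probe: a face captured by a camera somewhere.
probe = face_recognition.load_image_file("camera_frame.jpg")
probe_encoding = face_recognition.face_encodings(probe)[0]

# Smaller distance = more similar; below the threshold counts as a hit.
distances = face_recognition.face_distance(known_encodings, probe_encoding)
best = distances.argmin()
if distances[best] < 0.6:  # library's default tolerance, assumed here
    print(f"Match: {known_names[best]} (distance {distances[best]:.2f})")
else:
    print("No match in database")
```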

Nevertheless, this technology can serve legitimate ends, provided it is deployed within the bounds of the law. Since the use of facial recognition technology can interfere with the enjoyment of basic human rights, including the right to privacy, any deployment must serve a legitimate purpose. Furthermore, in liberal democracies, determining whether facial recognition is the right tool for the job also depends on whether its deployment is necessary (i.e. is it the only or most feasible way of achieving the goal?) and proportionate (i.e. is the interference it causes in proportion to the predicted gains from its use?).

For example, facial recognition technology deployed at the entrance of a stadium and in its direct surroundings could fit the bill, since it may be used to deny entry to people who have a court-ordered stadium ban. In that case there is a precise and legitimate purpose, the database would contain only those with a court-ordered ban, and the technology would be deployed only in the direct vicinity of the stadium. That is not blanket approval for the use of facial recognition technology in and around all stadiums, but it illustrates that under specific circumstances its use can be legitimate and perhaps even preferable to more traditional methods of policing.

The technology we use every day is becoming smarter, and the future will see it even more ingrained in our lives than it already is. It is therefore important that we clarify what we are talking about. While facial recognition has been used as a catch-all phrase for any camera technology that in one way or another registers a face, there are very real differences in how the technology is deployed and in what its underlying algorithms are capable of. And if we want to have a constructive debate about how much of a role we will allow this technology to play in our daily lives, we need to be precise about what we mean.

 

Agnes Venema is currently a PhD Candidate at the “Mihai Viteazul” National Intelligence Academy researching Image Analysis and Visual Identification of People in Crowded Spaces as part of the ESSENTIAL Project. Follow her on Twitter @gnesvenema or contact her at agnes.venema@essentialresearch.eu.

 

A previous version of this article was edited for typos.

Agnes Venema

Agnes Venema is a Marie Curie Early Stage Researcher on the European Joint Doctorate grant “Evolving Security SciencE through Networked Technologies, Information policy And Law” (ESSENTIAL). Her research area is Intelligence and National Security, with a specific focus on the identification of people in crowded spaces. Her host institution is the “Mihai Viteazul” National Intelligence Academy in Bucharest, Romania. Prior to moving to Romania, Agnes Venema worked on various issues related to international security, including at the University of Essex and for various NGOs. She also worked on Security Sector Reform at the United Nations Mission in Timor-Leste (UNMIT) and participated in election monitoring for the Organization for Security and Cooperation in Europe (OSCE). Agnes Venema holds an LL.M. degree in Human Rights and Criminal Justice from the Irish Centre for Human Rights at NUI Galway and a Bachelor’s degree from University College Utrecht, where she opted for a semester abroad at Masaryk University in Brno, Czech Republic.
