What if someone were able to obtain your personal information, like your name and address, just by taking a picture of you? As creepy as it sounds, it may soon become a reality.
According to a recent New York Times exposé, a relatively unknown startup named Clearview AI — which the Times described as a “secretive company that might end privacy as we know it” — has developed a facial recognition app that will make it easier for anyone to find you online.
Clearview developed its facial recognition technology by scraping images from millions of websites, including Facebook, YouTube, and Venmo. So if any of your social media profiles are public, you’re already in the system.
Then, once someone takes a picture of you and uploads it to the app, Clearview scours its database of more than 3 billion pictures to find you.
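Clearview hasn’t published how its matching actually works, but facial recognition systems like this typically convert each face into a list of numbers (an “embedding”) and then search for the stored faces whose numbers are closest. The sketch below is purely illustrative: it uses random vectors in place of a real face-embedding model, but the search step is the same idea.

```python
import numpy as np

# Illustrative only: Clearview's actual pipeline is not public.
# Real systems derive these vectors from a face-embedding model;
# here, random unit vectors stand in for embeddings of scraped photos.
EMBEDDING_DIM = 128
rng = np.random.default_rng(seed=0)

# A toy "database" of embeddings for 1,000 scraped face images.
database = rng.normal(size=(1000, EMBEDDING_DIM))
database /= np.linalg.norm(database, axis=1, keepdims=True)

def find_closest_matches(query: np.ndarray, top_k: int = 5) -> np.ndarray:
    """Return indices of the top_k most similar stored faces.

    Similarity is cosine similarity: higher means a closer match.
    """
    query = query / np.linalg.norm(query)
    similarities = database @ query  # one dot product per stored face
    return np.argsort(similarities)[::-1][:top_k]

# Simulate an uploaded photo: a noisy copy of face #42 in the database.
uploaded = database[42] + 0.1 * rng.normal(size=EMBEDDING_DIM)
print(find_closest_matches(uploaded))  # face 42 should rank first
```

At the scale the Times describes, the same idea is paired with fast approximate-search indexes, which is what makes a single snapshot enough to pull a match out of billions of photos in seconds.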
The Times also found that a person wouldn’t even need a phone to take the picture, because the app’s code is compatible with augmented reality glasses, which means users “would potentially be able to identify every person they saw. The tool could identify activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did and whom they knew.”
The app is currently being used by more than 600 law enforcement agencies, including the FBI and the Department of Homeland Security, which is frightening on its own, mainly because facial recognition technology has been found to produce a large number of false positives. Clare Garvie, a researcher at Georgetown University’s Center on Privacy and Technology, told the Times,
“The larger the database, the larger the risk of misidentification because of the doppelganger effect. They’re talking about a massive database of random people they’ve found on the internet.”
Misidentification is even more frequent for people of color, since many facial recognition systems have a harder time analyzing darker skin.
In 2018, the National Institute of Standards and Technology found that Idemia, a French facial recognition company, falsely matched black women’s faces 10 times more often than white women’s. The report stated that “black females are the demographic that usually gives the highest FMR,” or false match rate.
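For reference, the false match rate measures how often a system declares photos of two different people to be the same person: FMR = false matches ÷ total comparisons between different people. An FMR of 0.001, for instance, means roughly one wrong match for every 1,000 comparisons between strangers, and that risk compounds as a database grows into the billions.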
But what makes things even more unsettling is that even though the app isn’t available for public use, some police officers and investors in the company believe it eventually will be. And according to Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University, making the app public could have dangerous consequences. He explained,
“The weaponization possibilities of this are endless. Imagine a rogue law enforcement officer who wants to stalk potential romantic partners, or a foreign government using this to dig up secrets about people to blackmail them or throw them in jail.”
Women, minorities, and members of the LGBTQ community already have to worry about being attacked, sexually assaulted, and harassed. Technology like this in the wrong hands would make it easier for stalkers, sexual predators, and violent exes to find and harm their victims.
No matter how many benefits the technology offers, there will always be people who abuse it. It’s not enough to regulate its use. The only way to stop it is to ban it.