Every day, we see reminders of how our privacy rights are being challenged. Recently, one stood out: Clearview AI, a small U.S.-based company enshrouded in secrecy, has created a facial recognition tool by scraping photos and other data from multiple websites including social networks such as Facebook, YouTube, Twitter, Instagram, and even Venmo. It now has a database of more than 3 billion images.
How these photos are being used is deeply troubling. Clearview has created a facial recognition algorithm that focuses on the biological characteristics of a face and converts them into formulas based on facial geometry, capturing unique data such as how far apart a person’s eyes are and how high a person’s forehead is. These biometric signatures, or “faceprints,” are gathered without the individual’s consent. Then, when a new photo of an unknown person is uploaded, the tool uses these biometric identifiers to match the face in question against photos in its directory with similar vectors. Even more disturbing, other personal data such as home addresses and places the person frequents can be easily linked to these pictures.
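To make the idea concrete, here is a minimal sketch of vector-based face matching. This is purely illustrative and is not Clearview's actual algorithm: the faceprint vectors, names, and the 0.99 threshold are all made up, and real systems use deep-learning embeddings with hundreds of dimensions rather than a handful of geometric measurements.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two faceprint vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def find_matches(query, database, threshold=0.99):
    """Return the names whose stored faceprint is close enough to the query."""
    return [name for name, vec in database.items()
            if cosine_similarity(query, vec) >= threshold]

# Hypothetical faceprints "scraped" from public photos (made-up numbers).
database = {
    "person_a": [0.62, 0.31, 0.80, 0.45],
    "person_b": [0.10, 0.95, 0.22, 0.70],
}

# A faceprint extracted from a new photo of an unknown person.
query = [0.61, 0.32, 0.79, 0.46]
print(find_matches(query, database))  # → ['person_a']
```

The key point is that the match is made against biometric geometry, not the photo itself, which is why the extracted signature raises privacy questions even when the underlying photo is public.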
In addition, Clearview also has a smartphone app that allows its customers to upload a photo of an unknown person and receive a set of matching photos. So someone can instantaneously figure out the identity of a stranger walking down the street, sitting on a subway, or hanging out in a coffee shop.
As a result of recent public outcry, tech companies such as LinkedIn, Twitter, Facebook, Venmo, Google, and YouTube have sent cease-and-desist orders to Clearview, but it is unclear whether Clearview has ever complied with them. Even Apple has suspended Clearview’s access to its developer account for violating its terms. Clearview’s CEO, Hoan Ton-That, claims the company has a First Amendment right to public information. However, while the photos themselves are public, privacy attorneys agree that the biometric signatures extracted from them are not.
Why Should You Care About This?
We are very concerned, and you should be too. There are at least three reasons for concern. This technology can result in: 1) weaponization, 2) harmful discrimination in the law enforcement context, and 3) the potential end of privacy as we know it through mass surveillance.
The biggest concern is that there is no independent data to suggest that this tool is accurate. Clearview claims a match rate of 98.6% for every 1 million faces, and 75% for its overall database. Even if these numbers are true, they are not good enough: at this scale, even a tiny margin of error can drastically impact people’s lives.
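A back-of-the-envelope calculation shows why. The sketch below simply applies Clearview's own claimed 98.6% match rate to the roughly 500,000 searches reported in this article; treating the remaining 1.4% as a per-search error rate is a simplifying assumption, not a measured figure.

```python
# Hypothetical base-rate arithmetic using numbers cited in this article.
claimed_accuracy = 0.986          # Clearview's claimed match rate
error_rate = 1 - claimed_accuracy  # ~1.4% of results could be wrong
searches = 500_000                 # searches performed to date

potentially_wrong = searches * error_rate
print(f"{potentially_wrong:,.0f} of {searches:,} searches could return a wrong match")
```

Under these assumptions, roughly 7,000 searches could point at the wrong person, and each one is a potential wrongful accusation.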
Weaponization

Clearview claims that its technology can help track dangerous people such as child molesters, murderers, and suspected terrorists, and identify child victims of exploitative videos posted on the web. It has also been argued that the technology can be used to solve crimes such as shoplifting, identity theft, and credit card fraud.
However, in the hands of stalkers, abusive ex-partners, and exploitative companies, this technology can be used to inflict great harm on vulnerable communities. Foreign governments with terrible human rights records can use it to blackmail innocent people, curtail their political activity, and throw them in jail. Observers have warned that the tool invites “rampant civil rights violations.” Most recently, Senator Edward Markey of Massachusetts raised the concern that this tool could be used to stifle the First Amendment rights of people protesting for racial justice.
More than 200 companies have used the tool including Kohl’s, Walmart, Wells Fargo, and Bank of America. To date, they have collectively performed nearly 500,000 searches, which Clearview has tracked.
Harmful Discrimination in the Law Enforcement Context
Clearview has contracts with more than 600 law enforcement agencies. This is alarming because facial recognition technology has been found to have a racial bias. The National Institute of Standards and Technology (NIST) determined that Asians, African-Americans, and Native Americans are particularly likely to be misidentified. Other studies conclude that some facial recognition algorithms misclassify black women nearly 35% of the time while nearly always getting it right for white men. Echoing these findings, the federal government released a report finding that such systems worked best on middle-aged white men but not as well for people of color, women, children, or the elderly.
Furthermore, the manner in which this technology is used may recycle the system’s past racial biases. In many jurisdictions, police run facial recognition searches against mugshot databases, which contain more photos of black people than white people because black people are arrested at higher rates. These concerns have prompted companies such as Microsoft, IBM, and Amazon to stop selling facial recognition technology to law enforcement, even if only temporarily.
Mass Surveillance and the End of Privacy
Without strong laws in place to curtail companies such as Clearview, privacy as we know it may be over. The ACLU observes that this technology will destroy our rights to anonymity and privacy and the concomitant safety and security that both bring. This technology gives its users the unprecedented power to spy on and gather personal details about anyone: who their friends are, where they live, where and what they eat, and the places they go, such as political rallies, places of worship, and AA meetings.
Clearview recently said that it would stop selling its technology to private companies and focus only on law enforcement, but numerous media reports say that is not true. As noted above, more than 200 companies have used the tool, many through free trials without a contract. Clearview has also claimed that its only customers are in the United States and Canada, but reports show that it has customers in Saudi Arabia, the United Arab Emirates, and other countries around the world. The prospect that this becomes a global phenomenon, with privacy no longer existing no matter where you travel, is daunting and scary.
The Solution: Law
Experts agree that a law is necessary to curtail or even outright ban the use of facial recognition technology. Senator Cory Booker tweeted that “if Congress doesn’t act to limit the use of technology in this manner, we risk losing our privacy and our freedom.” According to Woodrow Hartzog, professor of law and computer science at Northeastern University: “I don’t see a future where we harness the benefits of face recognition technology without the crippling abuse of the surveillance that comes with it. The only way to stop it is to ban it.” Al Gidari, director of privacy at Stanford Law School’s Center for Internet and Society, puts it simply and succinctly: “[a]bsent very strong federal privacy law, we’re all screwed.”
One way to combat this severe encroachment on privacy is to enact laws like the Illinois Biometric Information Privacy Act which has already pushed Facebook into a $550 million settlement over its use of unauthorized facial recognition on photos uploaded to its website. At least one other state, Texas, has a biometric privacy law, but it requires the Attorney General to take action.
Several lawsuits have been filed against Clearview, including in states such as Virginia, Illinois, and California. In January, New Jersey enacted a statewide ban on law enforcement using Clearview while it looks into the software. Vermont’s Attorney General filed suit against Clearview for data privacy violations.
In states such as California that provide consumer data privacy rights, individuals can request that Clearview delete their photos and data. Clearview has stated that it is processing those requests, but it requires a photo of the requesting individual’s government-issued ID.
We, at the Data Dividend Project, are working hard to not only get you paid for your data, but to provide you with tools to remove yourself from invasive and potentially harmful programs like these. We are vigilantly searching for these types of products and will alert you as we find them.
Clearview is just one example of why we need to assert our data privacy rights. This isn’t just about money: it’s about our freedom. If we don’t do something, the dystopian future we fear could become a reality. The good news is that the Data Dividend Project is actively fighting for your future freedom — and we need your help. Join now at https://datadividendproject.com