Clearview AI seeking to put 100 billion photos in facial recognition database
Facial recognition firm Clearview AI says it will soon have 100 BILLION photos in its database to ensure ‘almost everyone in the world will be identifiable’ and wants to expand beyond law enforcement
Clearview AI said in December it aims to put almost every human’s face in its facial recognition database, making it so that ‘almost everyone in the world will be identifiable’
In a report to investors made in December, the facial recognition firm said it is currently collecting 100 billion photos of human faces
The company further told investors that its ‘index of faces’ has grown from 3 billion images to more than 10 billion since the start of 2020
The images – approximately 14 photos for each of the 7 billion people on Earth – would bolster the firm’s already extensive surveillance system
In the presentation to investors, obtained by The Washington Post, Clearview brass pleaded for funding for the undertaking, to the tune of $50 million
A controversial AI company has announced it aims to put almost every human’s face in its facial recognition database, making it so that ‘almost everyone in the world will be identifiable.’
In its latest report in December, facial recognition firm Clearview AI told investors that the company is currently collecting 100 billion photos of human faces for the unprecedented campaign, which will be stored in its dedicated database.
The collection of images – approximately 14 photos for each of the 7 billion people on the planet, scraped from social media and other sources – would vastly bolster the company’s surveillance system, already the most elaborate of its kind.
The firm’s technology has already been used by myriad law enforcement and government agencies around the world, helping lawmen make thousands of arrests by aiding in various criminal investigations.
Clearview currently sports a database of more than three billion images scraped from sources like Facebook, YouTube, Venmo and millions of other sites, according to the company.
Now, however, the company, headed by its young Australian CEO Hoan Ton-That, 34, and currently valued at more than $100 million, is seeking to expand its facial recognition empire beyond law enforcement.
In the presentation to investors last year, obtained by The Washington Post, Clearview brass pleaded for funding for the undertaking, to the tune of $50 million.
The company further told investors that its ‘index of faces’ has grown from 3 billion images to more than 10 billion since the start of 2020.
With the $50 million, the company said, it would be able to reach its goal of 100 billion photos, while also building new products, expanding its international sales team, and spending more on lobbying government policymakers to ‘develop favorable regulation,’ The Post reported.
At the time of the presentation, its data collection system was ingesting 1.5 billion images a month, the company said.
Clearview added that the improved database would help organizations using its tech better monitor ‘gig economy’ workers, and that it is currently researching a number of new technologies that could identify someone based on how they walk, detect their location from a photo, or even scan subjects’ fingerprints from afar.
In March 2020, Clearview was sued by the American Civil Liberties Union, which contended the company illegally stockpiled images of three billion people scraped from internet sites without their knowledge or permission.
Some now predict Clearview – a tiny set-up which has also licensed its software to a string of private companies for supposed security purposes – could end up destroying privacy as we know it by exploiting its vast database and its access to social media [File photo]
For many, news of that stockpile raised concerns that the type of surveillance seen in China could happen in the US and other countries.
In December, Tech Times reported that Clearview had been called out by multiple privacy watchdogs in countries across the globe for alleged privacy violations.
European nations including the United Kingdom, France, Italy, Greece, and Austria have all expressed disapproval of Clearview’s method of extracting information from public websites, saying it violates European privacy policies.
In Canada, provinces such as Quebec, Alberta, and British Columbia have requested the company take down the images obtained without subjects’ permission.
Various law enforcement agencies have also expressed concern regarding Clearview’s collection of people’s personal information, with the NYPD turning down a partnership with Clearview in April after a 90-day free trial of its facial recognition software.
Police officers say that Clearview offers several advantages over other facial recognition tools. For one, its database of faces is so much larger. Also, its algorithm doesn’t require people to be looking straight at the camera; it can even identify a partial view of a face – under a hat or behind large sunglasses [File photo]
The department decided against using the app, citing potential security risks and potential for abuse, sources said.
At least seven states and nearly two dozen cities have limited government use of Clearview’s technology amid fears over civil rights violations, racial bias and invasion of privacy.
Social media sites including Facebook and Twitter have urged the company to delete the photos it has collected. CEO Ton-That responded that the company, founded in 2016, collects only publicly available photos from the open internet that are accessible ‘from any computer anywhere in the world’.
He asserted that its database cannot be used for surveillance.
The ACLU filed the case in Illinois in May 2020, with the backing of a consortium of Chicago-based rights groups.
After the suit was filed, authorities said Clearview had halted sales of its facial recognition technology to US-based private firms.
Illinois was the first state in the U.S. to regulate the collection of biometric data, with the introduction in 2008 of the Biometric Privacy Act (BIPA).
BIPA requires companies that collect, capture, or obtain an Illinois resident’s biometric identifier — such as a fingerprint, faceprint, or iris scan — to first notify that individual and obtain their written consent.
Clearview AI, founded in 2016 as a facial recognition firm, is currently collecting 1.5 billion images of people a month, the company said in its December report
The ACLU said its lawsuit was ‘the first to force any face recognition surveillance company to answer directly to groups representing survivors of domestic violence and sexual assault, undocumented immigrants, and other vulnerable communities uniquely harmed by face recognition surveillance.’
In the court documents, filed in Cook County, Illinois, on Thursday, the ACLU team claim that the facial recognition technology provided by Clearview puts vulnerable people at risk.
‘Given the immutability of our biometric information and the difficulty of completely hiding our faces in public, face recognition poses severe risks to our security and privacy,’ they claim.
‘The capture and storage of faceprints leaves people vulnerable to data breaches and identity theft.
‘It can also lead to unwanted tracking and invasive surveillance by making it possible to instantaneously identify everyone at a protest or political rally, a house of worship, a domestic violence shelter, an Alcoholics Anonymous meeting, and more.
‘And, because the common link is an individual’s face, a faceprint can also be used to aggregate countless additional facts about them, gathered from social media and professional profiles, photos posted by others, and government IDs.’
Nathan Freed Wessler, senior staff attorney with the ACLU’s Speech, Privacy, and Technology Project, described Clearview’s technology as ‘menacing’.
He said it could be used to track people at political rallies, protests, and religious gatherings, among other uses.
The coalition is asking a judge to order Clearview to delete the images, and to notify ‘all persons’ in writing and obtain their written consent before capturing their biometric identifiers.
Tor Ekeland, an attorney for the company, described the lawsuit as ‘absurd’ and a violation of the First Amendment, which protects freedom of speech, religion, assembly and protest.
‘Clearview AI is a search engine that uses only publicly available images accessible on the internet,’ he said.
Clearview AI CEO Hoan Ton-That has said his company collects only publicly available photos from the open internet that are accessible ‘from any computer anywhere in the world.’ He said its database cannot be used for surveillance
‘It is absurd that the ACLU wants to censor which search engines people can use to access public information on the internet. The First Amendment forbids this.’
Clearview AI was founded in 2016 by Hoan Ton-That, a 31-year-old Australian tech entrepreneur and one-time model.
Ton-That co-founded the company with Richard Schwartz, an aide to Rudy Giuliani when he was mayor of New York.
It is backed financially by Peter Thiel, a venture capitalist who co-founded PayPal and was an early investor in Facebook.
Ton-That describes his company as ‘creating the next generation of image search technology’, and in January the New York Times reported that Clearview AI had assembled a database of three billion images of Americans, culled from social media sites.
The paper published an expose of the company, in which Ton-That described how he had come up with a ‘state-of-the-art neural net’ to convert all the images into mathematical formulas, or vectors, based on facial geometry – taking measurements such as how far apart a person’s eyes are.
Clearview created a directory of the images, so that when a user uploads a photo of a face into Clearview’s system, it converts the face into a vector.
The app then shows all the scraped photos stored in that vector’s ‘neighborhood’, along with the links to the sites from which those images came.
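The lookup described above – converting a query face into a vector and returning the stored photos in that vector’s ‘neighborhood’, with links back to their source sites – amounts to a nearest-neighbor search. The sketch below is a toy illustration of that general idea only: the four-dimensional vectors, the Euclidean distance metric, and all names and URLs are invented for the example and do not reflect Clearview’s actual model or data.

```python
import math

def distance(a, b):
    """Euclidean distance between two equal-length face vectors ('faceprints')."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_faces(index, query, k=3):
    """Return the k indexed photos whose vectors lie closest to the query vector."""
    ranked = sorted(index.items(), key=lambda item: distance(item[1], query))
    return [(name, distance(vec, query)) for name, vec in ranked[:k]]

# Toy index mapping source URLs to face vectors (all values invented);
# real embeddings would have hundreds of dimensions, not four.
index = {
    "site-a/photo1": [0.9, 0.1, 0.3, 0.5],
    "site-b/photo7": [0.2, 0.8, 0.6, 0.1],
    "site-c/photo2": [0.88, 0.12, 0.28, 0.52],
}

# A query photo embedded into the same vector space
query = [0.9, 0.1, 0.3, 0.5]
print(nearest_faces(index, query, k=2))
```

At real scale an exact sort over billions of vectors is infeasible, which is why production systems use approximate nearest-neighbor indexes instead; the principle, however, is the same.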
Amid the backlash from the January article, Clearview insisted that it had created a valuable policing tool, which they said was not available to the public.
‘Clearview exists to help law enforcement agencies solve the toughest cases, and our technology comes with strict guidelines and safeguards to ensure investigators use it for its intended purpose only,’ the company said.
Clearview insisted the app had ‘built-in safeguards to ensure these trained professionals only use it for its intended purpose’.
However, in February BuzzFeed reported that Clearview’s technology was being used by private companies including Macy’s, Walmart, Best Buy and the NBA, and even a sovereign wealth fund in the United Arab Emirates.
The New Jersey attorney general has banned state law enforcement from using Clearview’s system, and in 2020 the Vermont attorney general sued.