Facebook said it will shut down its face-recognition system and delete the faceprints of more than 1 billion people, responding to growing concerns about the technology and its misuse by governments, police, and others.
In a blog post on Tuesday, Jerome Pesenti, vice president of artificial intelligence at Facebook’s new parent company, Meta, said, “This transition will constitute one of the major shifts in facial recognition usage in the technology’s history.”
He said the corporation was weighing the technology’s good applications “against mounting social concerns,” especially because authorities have yet to issue clear guidelines. “More than a billion people’s individual face recognition templates” would be deleted in the coming weeks, he added.
The about-face comes after a hectic few weeks for Facebook. On Thursday it unveiled Meta as the new name for the parent corporation, though not for the social network. The move, it claims, will allow it to focus on developing technology for the “metaverse,” which it views as the next incarnation of the internet.
The corporation is also dealing with what may be its worst public relations crisis to date, after records disclosed by whistleblower Frances Haugen revealed that it was aware of the problems caused by its products but did little or nothing to ameliorate them.
More than a third of Facebook’s daily active users have agreed to have their faces recognized by the platform. This equates to around 640 million individuals. Face recognition was first deployed by Facebook more than a decade ago, but as the company faced criticism from courts and authorities, it progressively made it simpler to opt out of the tool.
In 2019, Facebook stopped automatically recognizing people in photos and suggesting that they be “tagged,” and instead let users choose whether to turn on its facial recognition tool.
According to Kristen Martin, a professor of technology ethics at the University of Notre Dame, Facebook’s choice to shut down its system “is a solid example of attempting to make product decisions that are beneficial for the customer and the corporation.” She went on to say that the change underscores the power of public and regulatory pressure, given that the facial recognition technology has been criticized for more than a decade.
Facebook’s parent firm, Meta Platforms Inc., appears to be exploring new methods of identifying people. According to Pesenti, the statement on Tuesday is part of a “company-wide shift away from broad identification and toward specific kinds of personal authentication.”
“When the technology functions discreetly on a person’s own devices,” he added, “facial recognition can be very helpful. Today, the systems used to unlock smartphones most typically employ this type of on-device facial recognition, which requires no transmission of face data to an external server.”
Face ID, Apple’s technique for unlocking iPhones, is powered by this technology.
Researchers and privacy advocates have questioned the internet industry’s use of face-scanning software for years, citing studies that revealed it performed unevenly across racial, gender, and age lines. One issue is that the technology can misidentify people with darker skin.
Another issue with face recognition is that it requires companies to create unique faceprints of large numbers of people – often without their consent and in ways that can be used to fuel tracking systems, according to Nathan Wessler of the American Civil Liberties Union, which has fought Facebook and other companies over their use of the technology.
“This is a huge step forward in acknowledging that this technology is fundamentally harmful,” he added.
Last year, Facebook found itself on the other side of the debate when it demanded that facial recognition firm Clearview AI, which works with law enforcement, stop mining Facebook and Instagram user photographs to identify the people in them.
Concerns have also increased as more people become aware of the Chinese government’s vast video monitoring system, which has been deployed in an area with a strong Muslim ethnic minority population.
Facebook’s massive database of photographs posted by users has aided in the advancement of computer vision, a field of artificial intelligence. Many of those research teams have now been redirected toward Meta’s augmented-reality ambitions, in which the company anticipates future consumers wearing glasses to experience a blend of virtual and real worlds. Those technologies, in turn, could raise additional questions about how biometric data is collected and tracked.
When asked how consumers could verify that their face data was erased and what the company would do with its underlying face-recognition technology, Facebook gave vague replies.
On the first issue, company spokesperson Jason Grosse wrote in an email that if users’ face-recognition settings are turned on, their templates will be “tagged for deletion,” and that the deletion process will be completed and verified in the “coming weeks.” On the second, Facebook will “switch off” components of the system associated with the face-recognition settings, Grosse said.
Other U.S. tech companies such as Amazon, Microsoft, and IBM decided last year to cease or pause their sales of facial recognition software to police, citing worries about false identifications and amid a larger awakening in the United States about policing and racial inequality.
Concerns about civil rights abuses, racial prejudice, and invasion of privacy have led at least seven states and almost two dozen localities in the United States to restrict government use of the technology.
In October, President Joe Biden’s science and technology office launched a fact-finding mission to investigate face recognition and other biometric capabilities used to identify people or assess their emotional, mental, or moral states. European legislators and regulators have also moved to prevent law enforcement from scanning people’s faces in public places.
Face-scanning practices contributed to the $5 billion fine and privacy restrictions the Federal Trade Commission imposed on Facebook in 2019. Facebook’s settlement with the FTC includes a requirement to provide “clear and prominent” notice before using face recognition technology on people’s images and videos.
In addition, the corporation agreed earlier this year to pay $650 million to settle a 2015 lawsuit alleging that its photo-tagging system violated an Illinois privacy law by operating without users’ consent.
“It’s a significant issue, it’s a big movement,” said John Davisson, senior counsel at the Electronic Privacy Information Center. “But it’s also far, far too late.” EPIC filed its first complaint with the FTC in 2011, a year after Facebook’s face recognition feature was launched.