
Facebook and Huawei are the latest companies trying to fool facial recognition

Researchers from Facebook, Huawei and several universities show how software, stickers and clothing can confuse artificial intelligence

This article originally appeared on ABACUS
Facial recognition technology is already ubiquitous in China, where it’s used for everything from making payments to catching jaywalkers. Avoiding the cameras might be impossible these days, but it is possible to prevent them from recognizing you.

That’s what researchers around the world are working on, and Facebook is just the latest to develop a “de-identification” system.

The system developed by the company’s researchers can supposedly alter key facial features in videos, even working in real time for live streams, in order to “decorrelate the identity,” according to a paper released by Facebook.

This is an improvement over existing de-identification methods, according to the paper, because those often work only on still images or swap in faces from a database, an approach that could be abused to create misleading videos. Facebook’s new method generates completely new faces.

These newly generated faces are similar to the original face and preserve its expression, pose and skin tone. The paper calls the method “benign in nature,” but the company does not plan to add the technology to any Facebook apps at this time, according to VentureBeat.

Shortly after Facebook’s research was released, Israeli startup D-ID published a blog post saying that it has been operating a similar video anonymization system successfully for two years. But the D-ID system doesn’t modify facial features in real time like Facebook’s does. Instead, the software completely replaces real faces with computer-generated faces of nonexistent people.

Concerns about privacy have been growing as facial recognition systems become a bigger part of people’s lives. Cities around the world are putting up facial recognition cameras for surveillance, and popular consumer apps like Facebook and Douyin, the Chinese version of TikTok, have been criticized for using facial recognition features that scan users’ pictures and videos.

Facebook enabled facial recognition so users can find out when they appear in other people’s photos, even untagged ones. And Douyin tested an image search tool that lets you scan part of a short video to find out who a person is or what product is shown.

Digital alteration isn’t the only way to fool these AI systems. Other researchers have been working on ways to confuse facial recognition systems using adversarial attacks -- carefully constructed changes to an input designed to make an AI misclassify it. In one well-known example, adding a small amount of such noise to a picture of a panda caused some AI networks to classify it as a gibbon, even though the image looked unchanged to human eyes.
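
To make the idea concrete, here is a minimal sketch of one classic way such noise can be generated, the fast gradient sign method. The pretrained network, step size and function name below are illustrative assumptions, not details of the panda experiment described above.

    # Minimal sketch of the fast gradient sign method (FGSM), assuming a
    # pretrained torchvision classifier stands in for the attacked model.
    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    model.eval()  # inference mode; gradients still flow back to the input

    def fgsm_perturb(image, true_label, epsilon=0.01):
        # image: (1, 3, 224, 224) tensor scaled to [0, 1]
        # true_label: LongTensor of shape (1,) holding the correct class index
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), true_label)
        loss.backward()
        # The "carefully constructed" noise: a tiny per-pixel step in the
        # direction that most increases the model's classification loss.
        return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

    # Usage: adv = fgsm_perturb(x, y) -- adv looks identical to x to a human
    # but is often assigned a different label by the model.
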
People are now taking this concept and experimenting with adversarial attacks in the physical world. In August, researchers at Huawei’s Moscow research center showed in a paper how attaching a sticker with a specific pattern to a person’s hat could hide them from ArcFace, an advanced facial recognition system.

The Huawei researchers say that the attack is easily reproducible and works in different lighting conditions.

With the sticker placed on the hat, the facial recognition system no longer recognized the person as a person. (Picture: Petr Ivanov/YouTube)

Last year, researchers from Fudan University in China, the Chinese University of Hong Kong, Indiana University Bloomington and Alibaba designed a baseball cap that can also obfuscate a person’s face. It works by projecting infrared dots onto “strategic spots” on the wearer’s face, subtly altering the facial features captured by a facial recognition system. Because they’re infrared, the dots are invisible to the human eye, but they are picked up by most surveillance cameras and smartphone cameras.

(Abacus is a unit of the South China Morning Post, which is owned by Alibaba.)

Similar adversarial attacks have been used to trick object detectors. In April, researchers from KU Leuven in Belgium showed how holding a printed pattern in front of their bodies could trick an AI system into thinking they aren’t human.

Some researchers have also printed similar adversarial patterns on T-shirts.

But all these successful experiments don’t mean a single piece of clothing will shield you from every surveillance system in the world. Many adversarial attacks are designed specifically for one AI system, and a pattern that fools one model often fails against another.
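
As a rough illustration of that model specificity, one can craft a perturbation against one pretrained network and check whether a second, unrelated network is fooled by the same image. The two models below are arbitrary stand-ins, and transfer rates vary widely in practice.

    # Sketch of a transferability check: does an image crafted against
    # model_a also fool model_b? (Assumes fgsm_perturb from the earlier sketch.)
    import torch
    import torchvision.models as models

    model_a = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
    model_b = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT).eval()

    def predicted_class(model, image):
        with torch.no_grad():  # plain inference, no gradients needed
            return model(image).argmax(dim=1).item()

    # Suppose `adv` was crafted against model_a:
    # predicted_class(model_a, adv)  # usually wrong -- the attack worked
    # predicted_class(model_b, adv)  # often still correct -- it didn't transfer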

So the most effective technique for protecting your privacy against AI surveillance right now is still probably just hiding your face -- by wearing an “anti-surveillance” coat or applying anti-surveillance makeup, although these solutions might also make you more conspicuous to your fellow humans.
