What the hell is ‘Dazzle’ makeup? And how is it being used to help reclaim our right to privacy?
All too often, topics like make-up and beauty – like many typically ‘feminine’ pursuits – are swiftly dismissed as frivolous and self-indulgent. Caring about your appearance is considered little more than vanity, and make-up is still seen primarily as a tool for making yourself more attractive.
But in a world where we’re increasingly surveilled and the use of facial-recognition technology continues to spread, our faces are becoming a battleground in the fight to protect our identities, and make-up is primed as a weapon of choice for preserving our privacy.
“Computer Vision Dazzle” – or “CV Dazzle” – make-up is a concept created by the artist Adam Harvey back in 2010 for his NYU Master’s thesis. His growing concerns about mass surveillance and data collection led him to explore how fashion could be used to evade face-detection technology. The end result was a colourful and abstract styling of hair and make-up that works as camouflage against facial-recognition algorithms.
The idea takes inspiration from the ‘Dazzle’ naval camouflage used in the First World War. Rather than trying to make ships blend into their background, this technique used bold geometric designs that disrupted the visual continuity of a battleship, obscuring its orientation and size.
As Harvey explains on the project’s website: “CV Dazzle uses avant-garde hairstyling and make-up designs to break apart the continuity of a face.
“Since facial-recognition algorithms rely on the identification and spatial relationship of key facial features, like symmetry and tonal contours, one can block detection by creating an ‘anti-face’.”
The techniques for creating this ‘anti-face’ include introducing asymmetry; concealing parts of the face – such as the eyes or the nose bridge – with hair; and using colourful make-up to create unusual contrasts of light and dark. This makes it harder for face-recognition algorithms to find the patterns they’re trained to look for, so a camera sweeping over a face in Dazzle make-up may fail to register it as a face at all. The striking paradox of CV Dazzle is that in making us invisible to machines, it makes us even more visible and memorable to people.
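To make the “tonal contours” idea concrete, here is a toy sketch – not part of Harvey’s project, and far simpler than any real detector – of the kind of Haar-like feature used in classic Viola–Jones face detection. Such detectors score differences between sums of pixel intensities in adjacent rectangles (for example, the eye band of a face is usually darker than the cheek band below it); Dazzle-style paint works by breaking exactly these expected light/dark relationships. The images, function names and thresholds here are all illustrative assumptions.

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x].
    Lets any rectangle sum be computed in four lookups."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the w-by-h rectangle with top-left corner (x, y)."""
    a = ii[y + h - 1][x + w - 1]
    b = ii[y - 1][x + w - 1] if y > 0 else 0
    c = ii[y + h - 1][x - 1] if x > 0 else 0
    d = ii[y - 1][x - 1] if x > 0 and y > 0 else 0
    return a - b - c + d

def eye_cheek_feature(img):
    """Two-rectangle Haar-like feature: cheek band minus eye band.
    A large positive value means 'face-like' contrast
    (dark eyes above light cheeks)."""
    ii = integral_image(img)
    h, w = len(img), len(img[0])
    band = h // 2
    eyes = rect_sum(ii, 0, 0, w, band)    # top half: eye region
    cheeks = rect_sum(ii, 0, band, w, band)  # bottom half: cheek region
    return cheeks - eyes

# A 4x4 'face' patch: dark eye band (20) above a light cheek band (200).
face = [[20] * 4, [20] * 4, [200] * 4, [200] * 4]

# The same region with Dazzle-style paint: dark patches striped across
# the cheeks invert the expected contrast.
dazzled = [[20] * 4, [200] * 4, [20] * 4, [200] * 4]

print(eye_cheek_feature(face))     # strongly positive: passes the feature test
print(eye_cheek_feature(dazzled))  # cancels out: the detector moves on
```

A real detector cascades thousands of such features at every position and scale, but the principle is the same: if enough of the expected contrasts are disrupted, the candidate window is rejected before a face is ever reported.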
It’s certainly a bold look to pull off, and it can be hard to envisage how the style might be applied practically in the real world. In a 2011 interview with The Art Blog, Harvey said he imagined it being used by party-goers to make sure that pictures taken of them while partying wouldn’t end up auto-tagged and picked up by search engines if they were posted online. But in the past decade the idea hasn’t spread much beyond the realm of an experimental art project, and few people have taken up the Dazzle aesthetic.
That was until now.
A London-based group called The Dazzle Club launched in August 2019 and has been holding regular demonstrations in full Dazzle make-up. It was founded in response to the rapidly growing presence of surveillance technology around the city, such as its use by the King’s Cross Estate and the Metropolitan Police.
Prior to the Covid-19 outbreak, The Dazzle Club held demonstrations and marches in person, but due to social-distancing rules it has since been holding its sessions online via Zoom, offering Dazzle make-up tutorials and raising awareness of the dangers that mass surveillance poses to our rights and liberties. Although we’re encountering CCTV cameras less frequently in life under lockdown, that doesn’t mean we’re no longer at risk of having our privacy invaded or of being placed under surveillance.
While we’re stuck at home, our connection to the outside world has never been more reliant on technology, and particularly on social media. And networks like Facebook and YouTube – two of our main sources of distraction at the moment – are the source of millions of images used to help build controversial face-detection software like Clearview AI.
Clearview AI received heavy criticism after a New York Times investigation reported on their scraping of ‘public’ data without consent and how they continue to store collected images even after users delete them from their social media accounts. Although the company claims their technology exists to assist law enforcement in North America, a data breach back in February revealed that they’ve also been pursuing clients across the globe in retail, banking and gaming. More worryingly, according to an investigation by HuffPost, Clearview AI also has longstanding links with the alt-right and neo-Nazi groups.
The widespread use of face-recognition technology raises many alarming ethical questions: Are we willing to forfeit our right to privacy for the sake of greater ‘security’? Do we trust those in power to make that choice for us? Even if you answer those questions with a “Yes”, there are the issues of accuracy and misidentification.
Studies have shown that these AIs do a worse job of correctly identifying women and people with darker complexions. Similarly, the datasets used to train face-recognition software are rarely neutral, and the way information is categorised into different groups can be influenced by human biases. This, along with the overall lack of regulation over how face-detection technology is deployed, means there’s a real danger of AI being manipulated to target certain groups.
We’re already seeing how face-detection software has played a significant role in the Chinese government’s mass monitoring and detention of the country’s Muslim Uighur population. Yet in spite of such concerns, the spread of this technology doesn’t seem to be slowing down any time soon. For facial-recognition companies, the Coronavirus pandemic presents an opportunity to expand their business, and some technologies are being adapted into track-and-trace systems used to limit the spread of Covid-19.
As these technologies become ever more present in our lives and more of us feel a desire to opt out of having our faces constantly tracked, perhaps we’ll see Dazzle make-up become more mainstream. It’s important to note that these algorithms are not static: as they become more pervasive, they will keep evolving to overcome new challenges. In China, some face-detection AI is now sophisticated enough to identify individuals wearing masks.
Adam Harvey points out on the CV Dazzle website that: “CV Dazzle is neither a product nor a pattern. CV Dazzle is a concept and a strategy.” The techniques for evading AI identification will need to be revised and adapted to stay effective against progressively more complex algorithms. Dazzle make-up alone cannot provide a definitive solution to the dangers of authoritarian mass surveillance. But it offers a small act of resistance against the erosion of our personal privacy and highlights the value of reclaiming possession of our own identities.