Umbrellas are practical when it comes to avoiding automated face recognition from CCTV and the like, since they are everyday items and can’t be effectively banned by authorities.
Icons8 product designer Konstantin Zhabinskiy worked on a project generating 100k faces (using GANs) from a total of 29,000 photographs that they shot in-house. This has the advantage of consistent lighting and of being able to photograph different angles of the same face.
For the time being they have open-sourced a large dataset, hoping for traction. It can be used for avatar images and such – so if you ever wanted to pretend you look like a model – no wrinkles, perfect lighting, symmetric eyes, only a few GAN glitches – go ahead and use them for your account.
“incognito” is an anti-recognition jewelry mask by design studio NOMA, Warsaw: https://noma-studio.pl/en/incognito/ It reverses the nose-eye relation, and that’s what we like about it. One could definitely go out on the street with this.
This creation by London-based designer Richard Quinn gets you fully covered. It got some traction since Cardi B appeared at Paris Fashion Week in one of his body and face covers. Maybe a motorcycle helmet would still be obfuscating enough, but would you want to wear it at fashion week?
anti-recognition mask by designer collective NOMA, Warsaw, https://noma-studio.pl
»Surveillance Detection Scout is a hardware and software stack that makes use of your Tesla’s cameras to tell you if you’re being followed in real-time. The name, as you likely gathered, pays homage to the ever-effective Surveillance Detection Route. When parked, Scout makes an excellent static surveillance practitioner as well, allowing you to run queries and establish patterns-of-life on detected persons.«
Researcher Truman Kain therefore uses FaceNet face recognition training data and plugs into Tesla’s public API. For license plate recognition he uses ALPR. To save the imagery created by the three Tesla front cameras, he uses a software called Tesla USB.
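The pattern-of-life part essentially boils down to comparing face embeddings across sightings. A minimal sketch of that matching step, with plain numpy vectors standing in for real FaceNet embeddings (the 128-d vectors and the threshold here are illustrative, not Kain’s actual pipeline):

```python
import numpy as np

def same_person(emb_a, emb_b, threshold=0.6):
    """Treat two face embeddings as the same person when their
    Euclidean distance is below a threshold (FaceNet-style matching).
    Embeddings and threshold are made up for illustration."""
    return float(np.linalg.norm(emb_a - emb_b)) < threshold

# Hypothetical unit-normalized 128-d embeddings for camera sightings.
rng = np.random.default_rng(42)
person = rng.normal(size=128)
person /= np.linalg.norm(person)

sighting_1 = person + 0.01 * rng.normal(size=128)  # same face, slight noise
sighting_2 = -person                               # clearly a different face

# Count repeat sightings of the same face to flag a possible follower.
repeats = sum(same_person(person, s) for s in [sighting_1, sighting_2])
```

The real system would feed camera frames through a face detector and embedding network first; the alerting logic on top is as simple as counting how often the same embedding cluster reappears.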
Wired author Andy Greenberg notes:
»Kain, a consultant for the security firm Tevora, also isn’t oblivious to his creation’s creep factor. He says the Surveillance Detection Scout demonstrates the kind of surveillance the data that self-driving cars already collect could enable.«
To adversarial.io this presents a use case where you would want adversarial patches on license plates (if that is not forbidden by law for being some kind of obfuscation) and of course wear an adversarial t‑shirt of some kind… This case also reminds me of the speculation that Uber might at some point make its cars more profitable by using them as data collection drones.
In an unprecedented move, the city of San Francisco has decided that new face recognition projects by the city itself have to be run through its Board of Supervisors. See the draft law here: https://sfgov.legistar.com/View.ashx?M=F&ID=7206781&GUID=38D37061-4D87-4A94-9AB3-CB113656159A
That means it doesn’t completely ban face recognition, as some media suggested, but develops a policy that puts the acquisition of face recognition technology by the city administration under control.
»While surveillance technology may threaten the privacy of all of us, surveillance efforts have historically been used to intimidate and oppress certain communities and groups more than others, including those that are defined by a common race, ethnicity, religion, national origin, income level, sexual orientation, or political perspective.« (FILE NO. 190110, Board of Supervisors, City of San Francisco, p. 1)
Obviously the city has not banned face recognition technology in general, since that would cover every smartphone today. There is also a long list of exemptions:
»Surveillance Technology does not include the following devices, hardware, or software: [long list of basic electronic infrastructure, incl. databases needed to run a city].« (FILE NO. 190110, Board of Supervisors, City of San Francisco, p. 6)
Direct PDF download: weneedtotalkai.files.wordpress.com/2019/06/weneedtotalkai_cc.pdf … The publication by Julia Schneider and Lena Kadriye Ziyal gives a great entry-level overview for those who are less technically inclined yet still wonder what is behind the hype.
Unfortunately what we see as a feature (aka adversarial noise), they see as a bug. But hey, this may change.
Simen Thys, Wiebe Van Ranst and Toon Goedemé from KU Leuven, Belgium researched adversarial patches for moving images and came up with several patterns that disturb detection.
Their attack targets a specific object detection network, YOLOv2: https://pjreddie.com/darknet/yolov2/
Full paper at: https://arxiv.org/pdf/1904.08653.pdf
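The mechanics behind such patches are plain gradient descent: render the patch into the image, read out the detector’s objectness score, and push the patch pixels in whatever direction lowers it. A toy sketch of that loop, with a random linear scorer standing in for YOLOv2 (all numbers here are made up; the real attack differentiates through the full network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "detector": a fixed linear scorer over a 16x16 region.
# In the real attack this would be YOLOv2's objectness output.
w = rng.normal(size=(16, 16))

def objectness(region):
    """Toy detection score (higher = more likely 'person detected')."""
    return float((w * region).sum())

# Start from a random patch and descend along the score's gradient.
patch = rng.uniform(0, 1, size=(16, 16))
lr = 0.01
for _ in range(200):
    grad = w                                  # d(objectness)/d(patch) for a linear scorer
    patch = np.clip(patch - lr * grad, 0, 1)  # keep pixels in a printable range
```

After optimization the patch scores far below any random image region, which is the whole point: worn on a t‑shirt, it drags the detector’s confidence under its detection threshold.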
Mauro Martino and Luca Stornaiuolo (MIT-IBM Watson AI Lab) have experimented with GANs to generate portraits of individuals. Basically, you upload your own photo and the AI compares its features to the set of images on which it was trained (faces of actors and actresses) and then generates a new portrait.
It sounds like an interesting experiment, but already early on we note that this »faces of actors and actresses« dataset is going to be biased in one way or another: towards race, towards gender, or towards certain beauty features that are most common among actors and actresses.
The aim of this project, however, is not clear, even when the authors add some pseudo-critical commentary:
»The result is an image that examines the concept of identity, pushing the boundaries between the individual that recognizes herself/himself and the collection of faces from the society of spectacle that are sedimented in the neural network.« (Martino/Stornaiuolo)
So, the question remains: what is won through this project?
»MegaPixels is an independent art and research project by Adam Harvey and Jules LaPlace that investigates the ethics, origins, and individual privacy implications of face recognition image datasets and their role in the expansion of biometric surveillance technologies.«
The site is worth a visit because it introduces the datasets machine learning relies on, and it also raises the question of how researchers in this field can be held to ethical standards.
A short while later, Adam Harvey posted this on Twitter – a good example of how educating the public can shake up bad ethics:
»Pedro Domingos, a professor of computer science at the University of Washington, said that artificial intelligence is much less sophisticated than many people believe, and Silicon Valley companies often portray their systems as more powerful than they actually are as they compete for business. In fact, even the most advanced artificial intelligence systems still are fooled in ways that a human would easily detect.«
YouTuber UnboxTherapy unlocks his phone’s face recognition with another phone showing his face.
»Because anyone can contribute to its platform, it gets updated every day,« says the CEO. Nothing really new from an AI startup, despite making headlines with MIT’s Technology Review: the company Mapillary crowdsources common knowledge in order to capitalize on it, converting it into valuable data that is then circulated out of the hands of the commons, where it originally resided.
Mapillary uses crowdsourced imagery (that is, without paying for it) to create additional data that would help autonomous cars drive »more savely« [sic]. While MIT Technology Review tries to describe the company as a »Wikipedia of mapping«, it is clearly not. The company is privately owned and doesn’t give away the data as public knowledge (e.g. by donating it to OpenStreetMap). Parts of the data are accessible via an API though, and temporarily free »for charities and for educational or personal use«.
The rather impudent marketing is acknowledged at the article’s end: »This story was corrected to make clear the images are crowdsourced but the underlying code is not open source.«
Why does adversarial.io tackle this? The answer might be found in a text by Eykholt et al., Robust Physical-World Attacks on Deep Learning Models: https://arxiv.org/abs/1707.08945
Press release: »SRI’s Spotting Audio-Visual Inconsistencies (SAVI) techniques detect tampered videos by identifying discrepancies between the audio and visual tracks. For example, the system can detect when lip synchronization is a little off or if there is an unexplained visual “jerk” in the video. Or it can flag a video as possibly tampered if the visual scene is outdoors, but analysis of the reverberation properties of the audio track indicates the recording was done in a small room.
This video shows how the SAVI system detects speaker inconsistencies. First, the system detects the person’s face, tracks it throughout the video clip, and verifies it is the same person for the entire clip. It then detects when she is likely to be speaking by tracking when she is moving her mouth appropriately. «
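The lip-sync check can be approximated very crudely as a correlation between per-frame mouth opening and per-frame audio energy. This is not SRI’s actual SAVI algorithm, just a sketch of the underlying intuition on synthetic signals:

```python
import numpy as np

def consistency(mouth_opening, audio_energy):
    """Toy audio-visual consistency check: Pearson correlation between
    per-frame mouth opening and per-frame audio energy. A low value
    hints that speech and lip motion may not match."""
    m = (mouth_opening - mouth_opening.mean()) / mouth_opening.std()
    a = (audio_energy - audio_energy.mean()) / audio_energy.std()
    return float(np.mean(m * a))  # in [-1, 1]

t = np.linspace(0, 10, 300)            # 300 video frames
speech = np.abs(np.sin(2.5 * t))       # synthetic "speaking" signal

# In sync: audio energy tracks mouth motion up to measurement noise.
noise = 0.1 * np.random.default_rng(1).normal(size=300)
consistent = consistency(speech, speech + noise)

# Out of sync: audio shifted by roughly half a syllable period.
shifted = consistency(speech, np.roll(speech, 19))
```

A production system would of course extract both signals from the actual media (face landmarks for the mouth, a spectral envelope for the audio) and flag clips whose correlation falls below a learned threshold.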