anti-recognition mask by designer collective NOMA, Warsaw, https://noma-studio.pl
»Surveillance Detection Scout is a hardware and software stack that makes use of your Tesla’s cameras to tell you if you’re being followed in real-time. The name, as you likely gathered, pays homage to the ever-effective Surveillance Detection Route. When parked, Scout makes an excellent static surveillance practitioner as well, allowing you to run queries and establish patterns-of-life on detected persons.«
To build Scout, researcher Truman Kain uses FaceNet face recognition training data and plugs into Tesla's public API. For license plate recognition he uses ALPR. To save the imagery created by the three Tesla front cameras, he uses software called Tesla USB.
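Under the hood, a FaceNet-style pipeline boils down to comparing embedding vectors. A minimal sketch, assuming the faces have already been converted to embeddings; the 4-dimensional vectors and the 0.4 threshold are illustrative stand-ins (FaceNet actually outputs 128-dimensional embeddings, and real deployments tune the threshold on a validation set):

```python
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine distance between two embedding vectors."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(1.0 - np.dot(a, b))

def same_person(emb_a: np.ndarray, emb_b: np.ndarray, threshold: float = 0.4) -> bool:
    """Declare a match when the embedding distance falls below a threshold."""
    return cosine_distance(emb_a, emb_b) < threshold

# Toy 4-dimensional "embeddings" standing in for FaceNet's 128-d output.
known = np.array([0.1, 0.9, 0.2, 0.4])
candidate = np.array([0.12, 0.88, 0.21, 0.39])
print(same_person(known, candidate))  # near-identical vectors → True
```

A system like Scout can then run such comparisons across every face detected in the stored footage to establish which person keeps reappearing.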
Wired author Andy Greenberg notes:
»Kain, a consultant for the security firm Tevora, also isn’t oblivious to his creation’s creep factor. He says the Surveillance Detection Scout demonstrates the kind of surveillance the data that self-driving cars already collect could enable.«
For adversarial.io this presents a use case where you would want adversarial patches on license plates (if the law permits it, since it constitutes a form of obfuscation) and, of course, an adversarial t-shirt of some kind… This case also recalls the speculation that Uber might at some point make its cars more profitable by using them as data collection drones.
In an unprecedented move, the city of San Francisco has decided that new face recognition projects by the city itself have to be run through its Board of Supervisors. See the draft law here: https://sfgov.legistar.com/View.ashx?M=F&ID=7206781&GUID=38D37061-4D87-4A94-9AB3-CB113656159A
That means it doesn't completely ban face recognition, as some media suggested, but develops a policy that puts the city administration's acquisition of face recognition technology under oversight.
»While surveillance technology may threaten the privacy of all of us, surveillance efforts have historically been used to intimidate and oppress certain communities and groups more than others, including those that are defined by a common race, ethnicity, religion, national origin, income level, sexual orientation, or political perspective.« (FILE NO. 190110, Board of Supervisors, City of San Francisco, p. 1)
Obviously the city has not banned face recognition technology in general, since that would affect every smartphone today. There is also a long list of exemptions:
»Surveillance Technology does not include the following devices, hardware, or software: [long list of basic electronic infrastructure, incl. databases needed to run a city].« (FILE NO. 190110, Board of Supervisors, City of San Francisco, p. 6)
Direct PDF download: weneedtotalkai.files.wordpress.com/2019/06/weneedtotalkai_cc.pdf … by Julia Schneider and Lena Kadriye Ziyal gives a great entry-level overview for those who are less technically inclined yet still wonder what is behind the hype.
Unfortunately, what we see as a feature (aka adversarial noise), they see as a bug. But hey, this may change.
Simen Thys, Wiebe Van Ranst, and Toon Goedemé from KU Leuven, Belgium, researched adversarial patches for moving images and came up with several patterns that disturb detection.
Their attack is directed against a specific object detection system for moving images, YOLOv2: https://pjreddie.com/darknet/yolov2/
Full paper at: https://arxiv.org/pdf/1904.08653.pdf
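The core of such a patch attack is a gradient-descent loop over the patch pixels that suppresses the detector's score for the patched image. A toy sketch, not the authors' method: the fixed random linear »detector«, the grayscale frame, and the patch size and location are all stand-in assumptions, whereas the real attack backpropagates through YOLOv2's objectness and class scores:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "detector": a fixed random linear scorer over the image.
# A linear model keeps the gradient step easy to see.
H = W = 64
weights = rng.normal(size=(H, W))

def objectness(img: np.ndarray) -> float:
    """Scalar score we want the patch to drive down."""
    return float(np.sum(weights * img))

image = rng.uniform(size=(H, W))      # frame containing the "person"
patch = rng.uniform(size=(16, 16))    # printable patch, pixels in [0, 1]
y = x = 24                            # fixed patch location in the frame

def patched(img: np.ndarray, patch: np.ndarray) -> np.ndarray:
    out = img.copy()
    out[y:y+16, x:x+16] = patch
    return out

score_before = objectness(patched(image, patch))
for _ in range(50):
    grad = weights[y:y+16, x:x+16]    # d(score)/d(patch) for a linear scorer
    patch = np.clip(patch - 0.05 * grad, 0.0, 1.0)  # descend, stay printable
score_after = objectness(patched(image, patch))
print(score_before > score_after)     # the optimized patch suppressed the score
```

The clipping step matters in practice: a patch only works in the physical world if its pixel values remain printable.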
Mauro Martino and Luca Stornaiuolo (MIT-IBM Watson AI Lab) have experimented with GANs to generate portraits of individuals. Basically, you upload your own photo, the AI compares its features to the set of images on which it was trained (faces of actors and actresses), and then generates a new portrait.
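The »compares its features« step can be pictured as a nearest-neighbor lookup in an embedding space. A hypothetical sketch, not the authors' code: the random 64-dimensional vectors merely stand in for learned face features:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in embeddings for the actor/actress training set; the real system
# compares learned GAN features, not random vectors.
gallery = rng.normal(size=(1000, 64))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

def closest_faces(query: np.ndarray, k: int = 5) -> np.ndarray:
    """Indices of the k gallery faces most similar to the uploaded photo."""
    q = query / np.linalg.norm(query)
    sims = gallery @ q                # cosine similarity to every gallery face
    return np.argsort(sims)[::-1][:k]

query = rng.normal(size=64)
print(closest_faces(query))           # five indices into the actor gallery
```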
It sounds like an interesting experiment, but even early on we note that this »faces of actors and actresses« dataset is going to be biased in one way or another: towards race, towards gender, or towards certain beauty features most common among actors and actresses.
The aim of this project, however, is not clear, even when the authors add some pseudo-critical comments to it:
»The result is an image that examines the concept of identity, pushing the boundaries between the individual that recognizes herself/himself and the collection of faces from the society of spectacle that are sedimented in the neural network.« (Martino/Stornaiuolo)
So, the question remains: what is won through this project?
»MegaPixels is an independent art and research project by Adam Harvey and Jules LaPlace that investigates the ethics, origins, and individual privacy implications of face recognition image datasets and their role in the expansion of biometric surveillance technologies.«
It’s worth visiting this site because it introduces the datasets that machine learning relies on, and it also raises the question of how researchers in this field can be held to ethical standards.
A short while later, Adam Harvey posted this on Twitter, a good example of how educating the public can shake up bad ethics:
»Pedro Domingos, a professor of computer science at the University of Washington, said that artificial intelligence is much less sophisticated than many people believe, and Silicon Valley companies often portray their systems as more powerful than they actually are as they compete for business. In fact, even the most advanced artificial intelligence systems still are fooled in ways that a human would easily detect.«
YouTuber UnboxTherapy unlocks his phone’s face recognition with another phone showing his face.
»Because anyone can contribute to its platform, it gets updated every day,« says the CEO. Nothing really new from an AI startup, despite making headlines with MIT Technology Review: the company Mapillary crowdsources common knowledge in order to capitalize on it, converting it into valuable data that is then circulated out of the hands of the commons, where it was originally situated.
Mapillary uses crowdsourced imagery (that is, without paying for it) to create additional data that would help autonomous cars drive »more safely«. While MIT Technology Review tries to describe the company as a »Wikipedia of mapping«, it is clearly not. The company is privately owned and doesn’t give the data away as public knowledge (e.g. by donating it to OpenStreetMap). Parts of the data are accessible via an API though, and temporarily free »for charities and for educational or personal use«.
The rather impudent marketing is acknowledged at the article’s end: »This story was corrected to make clear the images are crowdsourced but the underlying code is not open source.«
Why does adversarial.io tackle this? The answer might be found in a text by Eykholt et al., »Robust Physical-World Attacks on Deep Learning Models«: https://arxiv.org/abs/1707.08945
Press release: »SRI’s Spotting Audio-Visual Inconsistencies (SAVI) techniques detect tampered videos by identifying discrepancies between the audio and visual tracks. For example, the system can detect when lip synchronization is a little off or if there is an unexplained visual “jerk” in the video. Or it can flag a video as possibly tampered if the visual scene is outdoors, but analysis of the reverberation properties of the audio track indicates the recording was done in a small room.
This video shows how the SAVI system detects speaker inconsistencies. First, the system detects the person’s face, tracks it throughout the video clip, and verifies it is the same person for the entire clip. It then detects when she is likely to be speaking by tracking when she is moving her mouth appropriately.«
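The lip-sync part of such a check can be approximated by correlating per-frame audio energy with per-frame mouth opening. A sketch on synthetic signals; SAVI's actual techniques are not public in this detail, so the signals, the time shift, and the scoring here are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-frame signals: audio energy and mouth opening. In a real
# SAVI-style check these would come from the audio track and a face tracker.
t = np.arange(200)
speech = np.abs(np.sin(t * 0.3)) + rng.normal(scale=0.05, size=t.size)
mouth_synced = speech + rng.normal(scale=0.05, size=t.size)
mouth_dubbed = np.roll(speech, 37)        # lip movement shifted in time

def sync_score(audio: np.ndarray, mouth: np.ndarray) -> float:
    """Pearson correlation between audio energy and mouth opening."""
    return float(np.corrcoef(audio, mouth)[0, 1])

print(sync_score(speech, mouth_synced))   # close to 1: consistent
print(sync_score(speech, mouth_dubbed))   # much lower: flag for review
```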
What happens if you put a cardboard box on? Good news: while Google Vision recognizes me in other images, it does not do so with the »Hat« on.
Adam Harvey: »Reminder that simply wearing a baseball cap and looking down at phone creates difficulties for facial recognition systems« commenting on this study:
»FIVE ran 36 prototype algorithms from 16 commercial suppliers on 109 hours of video imagery taken at a variety of settings. The video images included hard-to-match pictures of people looking at smartphones, wearing hats or just looking away from the camera. Lighting was sometimes a problem, and some faces never appeared on the video because they were blocked, for example, by a tall person in front of them.
NIST used the algorithms to match faces from the video to databases populated with photographs of up to 48,000 individuals. People in the videos were not required to look in the direction of the camera. Without this requirement, the technology must compensate for large changes in the appearance of a face and is often less successful. The report notes that even for the more accurate algorithms, subjects may be identified anywhere from around 60 percent of the time to more than 99 percent, depending on video or image quality and the algorithm’s ability to deal with the given scenario.«
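The reported rates are rank-1 identification rates: the fraction of probe faces whose closest gallery match is the right identity. A toy sketch with random »templates«, where added noise stands in for the poor video quality NIST describes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy closed-set identification: a gallery of enrolled templates and one
# noisy probe per identity.
n_ids, dim = 500, 32
gallery = rng.normal(size=(n_ids, dim))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

def identification_rate(noise_scale: float) -> float:
    """Fraction of probes whose nearest gallery entry is the right identity."""
    probes = gallery + rng.normal(scale=noise_scale, size=gallery.shape)
    probes /= np.linalg.norm(probes, axis=1, keepdims=True)
    best = np.argmax(probes @ gallery.T, axis=1)  # rank-1 match per probe
    return float(np.mean(best == np.arange(n_ids)))

# Higher noise (worse video quality) lowers the rank-1 rate, mirroring the
# wide spread NIST reports across scenarios and algorithms.
rate_good, rate_bad = identification_rate(0.1), identification_rate(0.8)
print(rate_good, rate_bad)
```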
The Chaos Communication Congress 2018 delivered plenty of sessions about pattern recognition, deep “learning” and AI. Especially the third talk, »Circumventing video identification using augmented reality« is relevant for adversarial.io.
Forbes Journalist Thomas Brewster looked into standard smartphone face recognition software and how it could detect fake 3‑D faces: »We tested four of the hottest handsets running Google’s [Android] operating systems and Apple’s iPhone to see how easy it’d be to break into them. We did it with a 3D-printed head. All of the Androids opened with the fake. Apple’s phone, however, was impenetrable.«