This means the city does not completely ban face recognition, as some media suggested, but establishes a policy that puts the acquisition of face recognition technology by the city administration under control.
While surveillance technology may threaten the privacy of all of us, surveillance efforts have historically been used to intimidate and oppress certain communities and groups more than others, including those that are defined by a common race, ethnicity, religion, national origin, income level, sexual orientation, or political perspective.
FILE NO. 190110, Board of supervisors, City of San Francisco, p.1
Obviously the city has not banned face recognition technology in general, since that would include every smartphone today. There is also a long list of exemptions:
Surveillance Technology does not include the following devices, hardware, or software: [long list of basic electronic infrastructure, incl. databases needed to run a city].
FILE NO. 190110, Board of supervisors, City of San Francisco, p.6
We need to talk AI – A Comic Essay on Artificial Intelligence
Mauro Martino and Luca Stornaiuolo (MIT-IBM Watson AI Lab) have experimented with GANs to generate portraits of individuals. Basically, you upload your own photo, the AI compares its features to the set of images it was trained on (faces of actors and actresses), and it then generates a new portrait.
It sounds like an interesting experiment, but already early on we note that this »faces of actors and actresses« dataset is going to be biased in one way or another: towards race, towards gender, or towards certain beauty features that are most common among actors and actresses.
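The mechanics of such portrait generators can be sketched as latent-space projection: search for the latent vector whose generated output best matches the uploaded photo, then emit what the model produces there. This toy stands a linear »generator« in for a real GAN; all dimensions and values are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.normal(size=(8, 3))   # toy "generator": latent (3) -> image (8)

def generate(z):
    return G @ z

# The "uploaded photo": something the generator can almost reproduce.
target = generate(np.array([0.5, -1.0, 2.0])) + rng.normal(scale=0.01, size=8)

# Project the photo into latent space by gradient descent on the
# reconstruction error ||G z - target||^2.
z = np.zeros(3)
lr = 0.01
for _ in range(2000):
    grad = 2 * G.T @ (generate(z) - target)
    z -= lr * grad

# The "new portrait" is the nearest image the model can produce --
# which is exactly why it inherits whatever the training faces look like.
portrait = generate(z)
print(np.linalg.norm(portrait - target))
```

The last comment is the bias point in miniature: the output can only ever be a combination of what the model was trained on.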
The aim of this project, however, is not clear, even though the authors add some pseudo-critical commentary to it:
The result is an image that examines the concept of identity, pushing the boundaries between the individual that recognizes herself/himself and the collection of faces from the society of spectacle that are sedimented in the neural network.
So, the question remains: what is gained through this project?
»MegaPixels is an independent art and research project by Adam Harvey and Jules LaPlace that investigates the ethics, origins, and individual privacy implications of face recognition image datasets and their role in the expansion of biometric surveillance technologies.«
The site is worth visiting because it introduces the datasets machine learning relies on, and it also raises the question of how researchers in this field can be held accountable for acting ethically.
A short while later, Adam Harvey posted this on Twitter, a good example of how educating the public can shake up bad ethics:
How YouTube almost lost the battle
»Pedro Domingos, a professor of computer science at the University of Washington, said that artificial intelligence is much less sophisticated than many people believe, and Silicon Valley companies often portray their systems as more powerful than they actually are as they compete for business. In fact, even the most advanced artificial intelligence systems still are fooled in ways that a human would easily detect.«
YouTuber UnboxTherapy unlocks his phone’s face recognition with another phone showing his face.
Crowdsourcing without Open Sourcing
»Because anyone can contribute to its platform, it gets updated every day,« says the CEO. Nothing really new from an AI startup, despite making headlines with MIT’s Technology Review: the company Mapillary crowdsources common knowledge and capitalizes on it by converting it into valuable data that is then circulated out of the hands of the commons, where it was originally situated.
Mapillary uses crowdsourced imagery (that is, without paying for it) to create additional data that would help autonomous cars drive »more safely«. While MIT Technology Review tries to describe the company as a »Wikipedia of mapping«, it is clearly not. The company is privately owned and does not give the data away as public knowledge (e.g. by donating it to OpenStreetMap). Parts of the data are accessible via an API, though, and temporarily free »for charities and for educational or personal use«.
The rather impudent marketing is acknowledged at the article’s end, which states: »This story was corrected to make clear the images are crowdsourced but the underlying code is not open source.«
Why does adversarial.io tackle this? The answer might be found in a text by Eykholt et al., »Robust Physical-World Attacks on Deep Learning Models«: https://arxiv.org/abs/1707.08945
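Eykholt et al. build physical perturbations (stickers on road signs); the core idea behind any adversarial input is easier to show on a toy linear classifier, where a small change chosen against the model’s gradient flips the prediction. Everything here is illustrative and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64)        # weights of a toy linear classifier
x = rng.normal(size=64)        # a toy "image"

def predict(v):
    # Classify by the sign of the score w . v
    return 1 if w @ v >= 0 else -1

label = predict(x)

# Smallest uniform step against the gradient (= w for a linear model)
# that pushes the score just past the decision boundary.
eps = abs(w @ x) / np.abs(w).sum() * 1.01
x_adv = x - eps * label * np.sign(w)

print(label, predict(x_adv))   # the prediction flips
```

The per-coordinate change `eps` is small compared to the input values, yet the classification flips, which is exactly the property adversarial.io exploits against image recognition.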
Deepfake – Video Detection
Press Release: »SRI’s Spotting Audio-Visual Inconsistencies (SAVI) techniques detect tampered videos by identifying discrepancies between the audio and visual tracks. For example, the system can detect when lip synchronization is a little off or if there is an unexplained visual “jerk” in the video. Or it can flag a video as possibly tampered if the visual scene is outdoors, but analysis of the reverberation properties of the audio track indicates the recording was done in a small room.
This video shows how the SAVI system detects speaker inconsistencies. First, the system detects the person’s face, tracks it throughout the video clip, and verifies it is the same person for the entire clip. It then detects when she is likely to be speaking by tracking when she is moving her mouth appropriately.«
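One of the cues SAVI describes, lip-sync mismatch, can be approximated by correlating the audio energy envelope with a per-frame mouth-opening measurement: in a genuine clip they move together, in a dubbed one they drift apart. This is a sketch with synthetic signals, not SRI’s actual method:

```python
import numpy as np

def lip_sync_score(audio_energy, mouth_opening):
    """Pearson correlation between the per-frame audio energy and a
    per-frame mouth-opening measurement; low values suggest tampering."""
    a = np.asarray(audio_energy, dtype=float)
    m = np.asarray(mouth_opening, dtype=float)
    a = (a - a.mean()) / a.std()
    m = (m - m.mean()) / m.std()
    return float(np.mean(a * m))

t = np.arange(120) / 30.0                          # 4 s of video at 30 fps
speech = (np.sin(2 * np.pi * 1.5 * t) > 0) * 1.0   # bursts of talking

genuine = lip_sync_score(speech, speech)            # mouth follows audio
dubbed = lip_sync_score(speech, np.roll(speech, 30))  # audio shifted by 1 s
print(genuine, dubbed)
```

The genuine clip scores near 1.0, the shifted one far lower, so a simple threshold already separates the two in this toy setting.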
Testing Image Recognition
What happens if you put a cardboard box on your head? The good news is: while Google Vision recognizes me in other images, it does not do so with the »Hat« on.
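A test like this can be reproduced with the Google Cloud Vision client library (package `google-cloud-vision`, credentials required). The threshold and helper functions below are my own convenience layer, not part of the API, and the file name is hypothetical:

```python
def count_confident_faces(faces, min_confidence=0.5):
    """Count detected faces above a confidence threshold.
    Works on plain dicts so it is usable without the API."""
    return sum(1 for f in faces if f["detection_confidence"] >= min_confidence)

def detect_faces(path):
    # Imported inside the function so the helper above still works
    # without google-cloud-vision installed.
    from google.cloud import vision
    client = vision.ImageAnnotatorClient()  # needs GOOGLE_APPLICATION_CREDENTIALS
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.face_detection(image=image)
    return [{"detection_confidence": f.detection_confidence}
            for f in response.face_annotations]

# With the cardboard box on, the expectation would be zero confident faces:
# count_confident_faces(detect_faces("boxhead.jpg")) == 0
```

Running the same image set with and without the »Hat« makes the comparison reproducible instead of anecdotal.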
Meanwhile on Twitter
Adam Harvey: »Reminder that simply wearing a baseball cap and looking down at phone creates difficulties for facial recognition systems« commenting on this study:
»FIVE ran 36 prototype algorithms from 16 commercial suppliers on 109 hours of video imagery taken at a variety of settings. The video images included hard-to-match pictures of people looking at smartphones, wearing hats or just looking away from the camera. Lighting was sometimes a problem, and some faces never appeared on the video because they were blocked, for example, by a tall person in front of them.
NIST used the algorithms to match faces from the video to databases populated with photographs of up to 48,000 individuals. People in the videos were not required to look in the direction of the camera. Without this requirement, the technology must compensate for large changes in the appearance of a face and is often less successful. The report notes that even for the more accurate algorithms, subjects may be identified anywhere from around 60 percent of the time to more than 99 percent, depending on video or image quality and the algorithm’s ability to deal with the given scenario.«
The Chaos Communication Congress 2018 delivered plenty of sessions about pattern recognition, deep “learning” and AI. Especially the third talk, »Circumventing video identification using augmented reality« is relevant for adversarial.io.
Smartphone Face Recognition Tricked
Forbes journalist Thomas Brewster looked into standard smartphone face recognition software and whether it could detect fake 3‑D faces: »We tested four of the hottest handsets running Google’s [Android] operating systems and Apple’s iPhone to see how easy it’d be to break into them. We did it with a 3D-printed head. All of the Androids opened with the fake. Apple’s phone, however, was impenetrable.«