Before an image can be funneled through a weighted network, it needs to be scaled down. Resolutions like 3000x2000 pixels are too large to be processed in computer vision. Current weighted networks operate at 128x128px or similar resolutions, mostly below 300x300px.
Researchers at TU Braunschweig found that this downscaling step offers an opportunity for adversarial pixels. Introduced into the larger originals at strategic points, they disturb the downscaling of the image.
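The core idea can be sketched in a few lines, assuming nearest-neighbour downsampling (real attacks also target bilinear and bicubic filters): shrinking 1024x1024 to 128x128 reads only every 8th pixel in each dimension, so an attacker who changes just those ~1.6% of pixels fully controls what the network sees after rescaling.

```python
import numpy as np

def nearest_downscale(img, factor):
    # Nearest-neighbour downsampling: keep only every `factor`-th pixel.
    return img[::factor, ::factor]

source = np.zeros((1024, 1024), dtype=np.uint8)      # innocuous image
payload = np.full((128, 128), 255, dtype=np.uint8)   # attacker's target image

# Embed the payload only at the grid positions the downscaler samples.
attacked = source.copy()
attacked[::8, ::8] = payload

changed = np.count_nonzero(attacked != source) / source.size
print(f"pixels modified: {changed:.1%}")  # only ~1.6% of the original
print(np.array_equal(nearest_downscale(attacked, 8), payload))  # True
```

At full resolution the attacked image still looks almost identical to the original; only after downscaling does the payload appear.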
These are adversarial attacks on several deep neural networks where a single universal adversarial perturbation can fool a model on an entire set of affected inputs. The authors expect a 90% evasion rate on undefended ImageNet-pretrained networks. The work, by Kenneth T. Co, Luis Muñoz-González, Leslie Kanthan, Ben Glocker and Emil C. Lupu, is described in a paper here: https://arxiv.org/abs/1911.10364
For more, check this GitHub repository: https://github.com/kenny-co/sgd-uap-torch#universal-adversarial-perturbations-on-pytorch
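What makes the perturbation "universal" is that one fixed noise pattern is added to every input, clipped to a small budget so it stays near-invisible. The sketch below shows only that application step in NumPy (the actual crafting loop lives in the repo above, in PyTorch); the epsilon value and shapes are illustrative assumptions.

```python
import numpy as np

EPS = 10 / 255  # illustrative L-infinity budget, pixels in [0, 1]

def apply_uap(batch, delta, eps=EPS):
    # Project the perturbation onto the eps-ball, add the SAME delta to
    # every image in the batch, and keep pixels in the valid range.
    delta = np.clip(delta, -eps, eps)
    return np.clip(batch + delta[None, ...], 0.0, 1.0)

rng = np.random.default_rng(0)
images = rng.random((4, 3, 224, 224))        # a batch of 4 "images"
delta = rng.uniform(-1, 1, (3, 224, 224))    # oversized raw noise

adv = apply_uap(images, delta)
print(float(np.abs(adv - images).max()) <= EPS)  # True: within budget
```

Because the same delta works across inputs, an attacker can precompute it once and reuse it, which is what makes UAPs cheap to deploy at scale.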
This is how they look for different convolutional weighted networks:
In a way, this is a project which is very close to what we do at adversarial.io. Philipp Schmitt’s Declassifier uses a computer vision algorithm trained on COCO (Common Objects in Context), an image dataset appropriated from Flickr users by Microsoft in 2014.
Within Schmitt’s original photographs, certain objects get identified. These regions get overlaid with images that show the same kind of objects and belong to the COCO dataset on which the neural network was originally trained. “If a car is identified in one of the photographs, all the cars included in the dataset that trained the algorithm surface on top of it.” (The Photographers’ Gallery)
It takes a while to grasp what’s going on, since this project leans to the more artsy side. I loved playing around with it.
When you click on the images, a certificate of original photographic contribution is issued, identifying the original contributor (whose participation gets lost within the dataset).
Debunking AI Myths
AImyths.org does just that: it looks into several claims about AI and then corrects or debunks them step by step. A recommended read!
Brad Dwyer found a lot of missing or omitted labels in a dataset that is used for training and testing autonomous driving systems. »We did a hand-check of the 15,000 images in the widely used Udacity Dataset 2 and found problems with 4,986 (33%) of them.« Since this is an open-source dataset used primarily for educational purposes (but, as the author found out, obviously also for test cars on public streets), he published a corrected set at https://public.roboflow.ai/object-detection/self-driving-car
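A minimal sketch of the kind of sanity check that catches one class of these problems, entirely unlabeled frames: cross-reference the image list against the annotation file and flag images with zero bounding boxes. The CSV columns and file names here are hypothetical, not the real Udacity schema.

```python
import csv
import io

def unannotated(images, annotation_rows, key="frame"):
    # Return every image file that never appears in the annotations.
    labeled = {row[key] for row in annotation_rows}
    return [img for img in images if img not in labeled]

# Toy annotation file: two frames have labels, a third has none.
rows = list(csv.DictReader(io.StringIO(
    "frame,label\n"
    "frame_0001.jpg,car\n"
    "frame_0002.jpg,pedestrian\n"
)))
imgs = ["frame_0001.jpg", "frame_0002.jpg", "frame_0003.jpg"]

print(unannotated(imgs, rows))  # ['frame_0003.jpg']
```

A check like this only finds missing annotations, not wrong ones; the 33% figure above also covers mislabeled and duplicated boxes, which need a human in the loop.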
Besides the honorable work of Dwyer, these omissions lead to the larger question of the reliability of many datasets which are being used for training.
Face-recognition respirator masks
Danielle Baskin created a website for computational mapping that converts facial features into an image printed onto the surface of N95 surgical masks without distortion. It is a reaction to the coronavirus epidemic and allows you to unlock (aka trick) the face ID techniques of smartphones.
KodyKinzie: »Confirming critical facial recognition research by @tahkion regarding #juggalo makeup defeating detection and recognition using the #esp32 and #face_recognition/#openCV Python libraries. Results seem conclusive.«
Teenagers have come up with elaborate schemes to share Instagram accounts and produce obfuscating data, in order to look at whatever they want to look at without being tracked individually.
»Each time she refreshed the Explore tab, it was a completely different topic, none of which she was interested in. That’s because Mosley wasn’t the only person using this account — it belonged to a group of her friends, at least five of whom could be on at any given time. Maybe they couldn’t hide their data footprints, but they could at least leave hundreds behind to confuse trackers.« Alfred Ng on CNET.com
Paint Your Face Away is a drop-in digital face painting workshop by Shinji Toya. The digital face painting tool developed for this session was inspired by Frank Bowling’s paintings. Participants use the painter to create their profile pictures while real-time face detection runs on the image of the face being painted; at some point in the painting process, the profile picture stops being detected by the computer vision system. In this way, the digital paint acts as a type of disruptive noise for the machine.
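The feedback loop behind the workshop can be sketched as a toy: keep adding paint strokes until a detector no longer fires. The detector here is a made-up stub (real setups would use OpenCV or the face_recognition library); only the loop structure reflects the project.

```python
def detector_confidence(coverage):
    # Stub standing in for a real face detector: confidence drops as
    # more of the face is painted over. Purely illustrative numbers.
    return max(0.0, 1.0 - 1.5 * coverage)

coverage = 0.0   # fraction of the face covered by paint
strokes = 0
while detector_confidence(coverage) > 0.5:   # still "detected"
    coverage = min(1.0, coverage + 0.05)     # apply one more stroke
    strokes += 1

print(strokes, "strokes until the face stops being detected")
```

The interesting part of the artwork is that the threshold is not known in advance: the participant paints until the machine, not a human, decides the face is gone.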
Artist Simon Weckert generates poison data by transporting 99 second-hand smartphones in a handcart, producing a virtual traffic jam in Google Maps. Through this activity he shows that it is possible to turn a green street red. This in turn has an impact on the physical world by navigating cars onto another route to avoid being stuck in traffic. Simon U Rock!
Umbrellas are practical when it comes to avoiding automated face recognition from CCTV et cetera, since they are everyday items and can’t be effectively banned by authorities.
Icons8 product designer Konstantin Zhabinskiy worked on a project of generating 100k faces (using GANs) from a total of 29,000 photographs that they shot in-house. This has the advantage of consistent lighting and being able to photograph different angles of the same face.
For the time being they have open-sourced a large dataset, hoping for traction. It can be used for avatar images and such. So if you ever wanted to pretend you look like a model (no wrinkles, perfect lighting, symmetric eyes, only a few GAN glitches), go ahead and use them for your account.