Before an image can be funneled through a weighted network, it needs to be scaled down. Resolutions like 3000x2000 pixels are too large to be processed in computer vision. Current weighted networks operate at 128x128px or similar resolutions, mostly below 300x300px.
Researchers at TU Braunschweig found that this scaling-down process offers an opportunity for adversarial pixels. Introduced into the larger originals at strategic points, they disturb the downscaling of the image.
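To see why downscaling is such an attractive target, here is a minimal sketch. It assumes a plain stride-based nearest-neighbour downscaler (real libraries use slightly different kernels, which the actual attack has to account for): only a tiny grid of source pixels ever reaches the small image the network sees, so overwriting just those pixels rewrites the downscaled image while the original stays almost untouched.

```python
# Minimal sketch (assumption: plain stride-based nearest-neighbour downscaling;
# real libraries use slightly different kernels, which the actual attack
# has to account for).
import numpy as np

def nearest_downscale(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Downscale by sampling one source pixel per output pixel."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

rng = np.random.default_rng(0)
source = rng.integers(0, 256, (2000, 3000, 3), dtype=np.uint8)  # large "camera" image
target = rng.integers(0, 256, (128, 128, 3), dtype=np.uint8)    # what the model should see

# Only the sampled grid of source pixels influences the 128x128 result,
# so overwriting 128*128 of the 2000*3000 pixels (about 0.3%) is enough.
attacked = source.copy()
rows = np.arange(128) * 2000 // 128
cols = np.arange(128) * 3000 // 128
attacked[np.ix_(rows, cols)] = target

assert np.array_equal(nearest_downscale(attacked, 128, 128), target)
```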
Universal Adversarials
These are adversarial attacks on deep neural networks in which a single universal adversarial perturbation can fool a model on an entire set of affected inputs; the authors expect around a 90% evasion rate on undefended ImageNet-pretrained networks. The attack comes from Kenneth T. Co, Luis Muñoz-González, Leslie Kanthan, Ben Glocker and Emil C. Lupu and is described in a paper here: https://arxiv.org/abs/1911.10364
For more, check this GitHub repository: https://github.com/kenny-co/sgd-uap-torch#universal-adversarial-perturbations-on-pytorch
The perturbations look different for each convolutional network they are crafted against.
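To make the idea concrete, here is a hedged sketch of how such a perturbation is used once it has been crafted (this is not the code from the repository above): one and the same fixed tensor is added to every input image before it is passed to a pretrained classifier. The random `delta` below is only a placeholder for a real, optimised UAP.

```python
# Hedged sketch: how a universal adversarial perturbation (UAP) is applied.
# Crafting `delta` (e.g. with the SGD method from the repository above) is
# omitted; the random tensor below is only a placeholder for a real UAP.
import torch
from torchvision import models

model = models.resnet50(pretrained=True).eval()

# A real delta is optimised once over many images so that it flips predictions
# on most inputs; its L-infinity norm is kept small (e.g. 10/255) so the
# change stays visually subtle.
eps = 10 / 255
delta = (torch.rand(1, 3, 224, 224) * 2 - 1) * eps  # placeholder UAP

def classify(x: torch.Tensor, delta: torch.Tensor = None) -> torch.Tensor:
    """Classify a batch, optionally with the same UAP added to every image."""
    if delta is not None:
        x = torch.clamp(x + delta, 0.0, 1.0)  # keep pixel values in [0, 1]
    with torch.no_grad():
        return model(x).argmax(dim=1)

# Stand-in image batch (ImageNet normalisation omitted for brevity).
x = torch.rand(4, 3, 224, 224)
clean_pred = classify(x)
adv_pred = classify(x, delta)  # one and the same delta for all images
```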
Philipp Schmitt's Declassifier
In a way, this project is very close to what we do at adversarial.io. Philipp Schmitt's Declassifier uses a computer vision algorithm trained on COCO (Common Objects in Context), an image dataset appropriated from Flickr users by Microsoft in 2014.
Within Schmitt's original photographs, certain objects get identified. These regions are then overlaid with images that show the same kind of objects and belong to the COCO dataset on which the neural network was originally trained. “If a car is identified in one of the photographs, all the cars included in the dataset that trained the algorithm surface on top of it.” (The Photographers Gallery)
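Just to make the mechanic concrete (this is not Schmitt's actual code), it can be sketched with torchvision's COCO-pretrained detector: detect objects in a photograph and paste COCO example images of the same class over the detected boxes. The `coco_examples` dictionary below is a hypothetical helper mapping class names to images taken from the dataset.

```python
# Rough sketch of the described mechanic, not Schmitt's actual code:
# detect COCO objects in a photograph, then paste dataset images of the same
# class over the detected regions. `coco_examples` is a hypothetical dict
# mapping a class name to an example PIL image from the COCO dataset.
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.detection import fasterrcnn_resnet50_fpn

COCO_CLASSES = {1: "person", 3: "car"}  # tiny excerpt of the COCO label map

detector = fasterrcnn_resnet50_fpn(pretrained=True).eval()

def declassify(photo: Image.Image, coco_examples: dict) -> Image.Image:
    """Overlay each confidently detected object with a COCO image of its class."""
    x = transforms.ToTensor()(photo)
    with torch.no_grad():
        detections = detector([x])[0]
    out = photo.copy()
    for box, label, score in zip(detections["boxes"], detections["labels"],
                                 detections["scores"]):
        name = COCO_CLASSES.get(int(label))
        if score < 0.8 or name not in coco_examples:
            continue
        x0, y0, x1, y1 = (int(v) for v in box)
        patch = coco_examples[name].resize((x1 - x0, y1 - y0))
        out.paste(patch, (x0, y0))
    return out
```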
It takes a while to grasp what's going on, since the project leans to the artsy side. I loved playing around with it.
When you click on one of the images, a certificate is issued for the original photographic contribution, identifying the original contributor (whose participation gets lost within the dataset).
Debunking AI Myths
AImyths.org does just that: it looks into several claims about AI and then corrects or debunks them step by step. A recommended read!