Adversarial.io is an easy-to-use webapp for altering image material in order to make it machine-unreadable.
By introducing perturbations, adversarial.io seeks to question and subvert automated image recognition.
For each uploaded image, an Artificial Intelligence – a »neural network« – calculates a description (e.g. »tabby«).
Then an adversarial algorithm calculates a noise pattern that moves the description towards a neighbouring class (e.g. »lynx«).
This adversarial noise is a slight alteration that pushes the machine’s perception over a certain threshold, towards another description of the image.
While machine vision is tricked, the human eye easily compensates for the introduced noise.
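The gradient-sign idea behind such noise can be sketched in a few lines of PyTorch. This is a minimal illustration, not the project’s actual code: it uses a tiny untrained stand-in classifier instead of Inception V3, and the image, target class, and epsilon are made up for the example.

```python
import torch
import torch.nn as nn

def targeted_fgsm(model, image, target, epsilon=0.03):
    """One gradient-sign step that nudges the image towards the
    target class while keeping the change visually slight."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), target)
    loss.backward()
    # Step *against* the gradient to lower the loss for the target class.
    adversarial = image - epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Tiny stand-in classifier; the real site works against Inception V3.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 5))
x = torch.rand(1, 3, 8, 8)             # fake 8x8 RGB "image" in [0, 1]
target = torch.tensor([3])             # class we want the model to see
x_adv = targeted_fgsm(model, x, target)
print(float((x_adv - x).abs().max()))  # stays within epsilon
```

Because each pixel moves by at most epsilon, the perturbation stays below the threshold of what a human viewer would notice on a real photo.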
From Tactical Media to Strategic Infrastructures
Adversarial.io positions itself in the tradition of the tactical media interventions of the 1990s.
»Tactical Media, the post-Berlin Wall child of multi-media and internet practiced by activists, designers and artists, hackers, and video enthusiasts, refused to make history.« (Lovink/Rossiter 2018: 18)
Yet instead of a hit-and-run media action, adversarial.io seeks to build infrastructure strategically, looking for possible alliances against the evil of image surveillance. If this resonates with you, join forces!
Adversarial.io is looking for civic or public, non-commercial processing infrastructure for machine learning techniques. Let us know.
The project asks questions about the materiality of data by exposing how computer vision works and how it can be tricked. It subverts abstract technological processes by making them visible and explaining them. And it challenges normative assumptions by calling out the norms – in this case, the image classes.
1.) Allows you to test your own images against the Inception V3 pattern recognition model.
2.) Creates a scalable, easy-to-use solution that demonstrates how AI pattern recognition fails and how stealth methods can be deployed.
FAQ – Frequently Asked Questions
Don’t you think you support automated computer vision by supplying it with test cases?
Currently no. At the moment we’re building on published research that tech companies have undertaken in order to strengthen their products against adversarial attacks.
We are not yet inventing new attacks; we simply make existing ones available to a broader public.
The largest test case for adversarial images to date has been noisy images in captchas, used to train machine vision against them.
Adversarial.io seems to be just destructive – can’t you do something positive?
There has been a lot of discussion about bias in AI: the misrepresentation of minorities, the fixation of the future on the past (because AI is trained on past events), the normative power of describing reality through attributions, and so on.
We have seen ridiculous examples where failure rates of 40% were sold as a success (while a failure rate of 0.5% would be acceptable), lots of false positives in image recognition, and so on.
The positive thing about adversarial.io is that it questions the false assumption that »automation is objective«, and it subverts systems that shouldn’t be in productive use at their current stage.
Doesn’t your project help close existing loopholes?
Adversarial.io is an educational resource to inform about adversarial techniques and make public what is currently still an expert discussion.
The big tech companies undertake their own adversarial research; compared to them, we are a minor player.
What’s your tech?
We use open source software and stay independent from major tech companies.
Software: The front-end is a WordPress content management system; the back-end uses Python with the Flask framework and PyTorch.
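To illustrate that front-end/back-end split, a minimal Flask endpoint might look like the following. The route name and response fields are hypothetical, not the project’s actual API.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/classify", methods=["POST"])
def classify():
    # The front-end would POST the uploaded picture here.
    upload = request.files.get("image")
    if upload is None:
        return jsonify(error="no image supplied"), 400
    data = upload.read()
    # In the real back-end the bytes would be decoded and run
    # through the PyTorch model; we only echo metadata here.
    return jsonify(filename=upload.filename, size=len(data))

# app.run()  # start the development server
```

Flask’s built-in test client lets such an endpoint be exercised without running a server.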
Hardware: A standard off-the-rack small server with a minimal footprint.
We coordinate through Gitea, so if you want to join us, joining is easy.