Universal Adversarials

These are adversarial attacks on deep neural networks in which a single universal adversarial perturbation can fool a model on an entire set of inputs. On undefended ImageNet-pretrained networks, one can expect an evasion rate of around 90%. The attack was developed by Kenneth T. Co, Luis Muñoz-González, Leslie Kanthan, Ben Glocker, and Emil C. Lupu, and is described in a paper here: https://arxiv.org/abs/1911.10364

For more, check this GitHub repository: https://github.com/kenny-co/sgd-uap-torch#universal-adversarial-perturbations-on-pytorch
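To make the idea concrete, here is a minimal sketch of how such a perturbation could be crafted with stochastic gradient descent in PyTorch. This is not the authors' code from the repository above: the model choice, the hyperparameters, and the assumed `loader` of labeled ImageNet images are all illustrative.

```python
# Minimal UAP sketch: a single perturbation `delta` is optimized to raise
# the classification loss across *all* images, then clamped to a small
# L-infinity budget so it stays near-invisible. Assumes a DataLoader
# `loader` yielding (images, labels) batches of 224x224 ImageNet images.
import torch
import torchvision.models as models

model = models.resnet18(pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)          # only the perturbation is trained

loss_fn = torch.nn.CrossEntropyLoss()
eps = 10 / 255                                            # perturbation budget (assumed)
delta = torch.zeros(1, 3, 224, 224, requires_grad=True)   # the universal perturbation
opt = torch.optim.SGD([delta], lr=0.01)

for images, labels in loader:        # `loader` is an assumption, not shown here
    opt.zero_grad()
    out = model(images + delta)      # the same delta is added to every image
    loss = -loss_fn(out, labels)     # gradient *ascent* on the model's loss
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)      # keep delta inside the L-infinity ball
```

In practice you would make several passes over the data and measure the evasion rate on a held-out set; the point of the sketch is simply that one fixed `delta` is shared by every input.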

This is how the perturbations look for different convolutional networks:

[Images: universal adversarial perturbations for different network architectures]

Declassifier

Philipp Schmitt's Declassifier is, in a way, a project very close to what we do at adversarial.io. It uses a computer vision algorithm trained on COCO (Common Objects in Context), an image dataset that Microsoft appropriated from Flickr users in 2014.

Within Schmitt's original photographs, certain objects are identified. These regions are then overlaid with images of the same kind of objects, taken from the COCO dataset on which the neural network was originally trained. "If a car is identified in one of the photographs, all the cars included in the dataset that trained the algorithm surface on top of it." (The Photographers' Gallery)
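For readers who want to see the mechanics, here is a rough sketch of that detect-and-overlay step, assuming a COCO-pretrained detector from torchvision. The `coco_examples` lookup (class id to a list of PIL crops from the training set) and the confidence threshold are hypothetical; Schmitt's actual pipeline is not published in this form.

```python
# Rough sketch of the Declassifier idea: detect objects in a photograph
# with a COCO-pretrained model, then paste same-class training-set images
# over the detected regions.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

model = fasterrcnn_resnet50_fpn(pretrained=True).eval()

def declassify(photo, coco_examples):
    """Overlay detected regions of a PIL image with same-class COCO crops."""
    with torch.no_grad():
        pred = model([to_tensor(photo)])[0]   # dict with boxes, labels, scores
    for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
        if score < 0.5:                       # assumed confidence threshold
            continue
        x0, y0, x1, y1 = (int(v) for v in box)
        # `coco_examples` maps a COCO class id to training-set crops (assumed)
        example = coco_examples[int(label)][0].resize((x1 - x0, y1 - y0))
        photo.paste(example, (x0, y0))        # the training data surfaces on top
    return photo
```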

It takes a while to grasp what's going on, since the project leans toward the artsy side. I loved playing around with it.

When you click on the images, a certificate for the original photographic contribution is issued, identifying the original contributor (whose participation gets lost within the dataset).


Debunking AI Myths

AImyths.org does just that: it looks into several claims about AI and corrects or debunks them, step by step. A recommended read!