Invisibility Cloak

This stylish pullover is a great way to stay warm this winter, whether in the office or on the go. It features a stay-dry microfleece lining, a modern fit, and adversarial patterns that evade most common object detectors. In this demonstration, the YOLOv2 detector is evaded using a pattern trained on the COCO dataset with a carefully constructed objective.

The paper “Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors” by Zuxuan Wu, Ser-Nam Lim, Larry S. Davis, and Tom Goldstein is online at https://arxiv.org/abs/1910.14667
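The “carefully constructed objective” boils down to optimising the pattern so that the detector’s confidence for the “person” class collapses on images where the patch is rendered onto the wearer. The following is only a minimal sketch of that idea, assuming a detector that exposes per-box person confidences; it is not the authors’ actual training code, and the names and shapes are placeholders.

```python
# Rough sketch of a detector-evasion objective (assumption-laden, not the paper's code).
import torch

def patch_objective(person_scores: torch.Tensor) -> torch.Tensor:
    """person_scores: (batch, num_boxes) detector confidences for the 'person'
    class, computed on images with the adversarial patch rendered onto each person."""
    # Suppress the strongest remaining detection per image, averaged over the batch.
    return person_scores.max(dim=1).values.mean()

# Training loop sketch (render() and scores_from_detector() are hypothetical helpers):
#   patch = torch.rand(3, 300, 300, requires_grad=True)
#   loss = patch_objective(scores_from_detector(render(images, patch)))
#   loss.backward(); optimizer.step(); patch.data.clamp_(0, 1)
```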

Partially funded by Facebook AI

Adversarial Clothes

The Italian start-up ‘Capable’, founded by Rachele Didero, Federica Busani and Giovanni Maria Conti, provides beautiful, rather pricey, adversarial clothes.

https://www.capable.design/shop

Image-Scaling Attacks in Machine Learning

https://scaling-attacks.net/

Before an image can be funneled through a weighted network it needs to be scaled down. Resolutions like 3000x2000 pixels are too large to be processed in computer vision. Current weighted networks operate at 128x128px or similar resolutions, mostly below 300x300px.

Researchers at TU Braunschweig found that this downscaling process offers an opportunity for adversarial pixels. Introduced into the larger originals at strategic points, they disturb the scaling down of the image.
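To illustrate the basic idea (a rough sketch only, not the TU Braunschweig attack itself): with nearest-neighbour downscaling, only a sparse grid of source pixels survives into the small image. Overwriting exactly those pixels in a large photograph plants a completely different image in the downscaled version, while the large original still looks unchanged to the human eye. The function name and the resolutions below are illustrative assumptions.

```python
import numpy as np

def embed_payload_nearest_neighbour(source, payload):
    """Overwrite only the pixels that nearest-neighbour downscaling will sample,
    so that scaling `source` (e.g. 3000x2000) down to the payload's size
    (e.g. 300x300) reveals `payload` instead of the original content.
    Illustrative sketch; real attacks also target bilinear/bicubic scaling."""
    out = source.copy()
    H, W = source.shape[:2]
    h, w = payload.shape[:2]
    rows = np.arange(h) * H // h   # source rows picked by nearest-neighbour sampling
    cols = np.arange(w) * W // w   # source columns picked by nearest-neighbour sampling
    out[np.ix_(rows, cols)] = payload
    return out

# Usage sketch (with PIL):
#   big     = np.array(Image.open("photo_3000x2000.jpg"))
#   small   = np.array(Image.open("payload_300x300.png"))
#   crafted = embed_payload_nearest_neighbour(big, small)
#   Image.fromarray(crafted).resize((300, 300), Image.NEAREST)  # shows the payload
```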

Universal Adversarials

These are adversarial attacks where a single universal adversarial perturbation can fool a deep neural network on an entire set of inputs. They achieve around a 90% evasion rate on undefended ImageNet-pretrained networks. The work, by Kenneth T. Co, Luis Muñoz-González, Leslie Kanthan, Ben Glocker and Emil C. Lupu, is described in a paper here: https://arxiv.org/abs/1911.10364

For more, check this GitHub repository: https://github.com/kenny-co/sgd-uap-torch#universal-adversarial-perturbations-on-pytorch
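As a rough idea of how such a perturbation is computed, here is a minimal sketch of SGD-based UAP training under my own assumptions; it is not the repository’s actual API, and the model, data loader and epsilon budget are placeholders.

```python
import torch

def train_uap(model, loader, eps=10/255, lr=0.01, steps=1000, device="cpu"):
    """Learn a single perturbation `delta` that degrades the model on many inputs."""
    model.eval().to(device)
    delta = torch.zeros(1, 3, 224, 224, device=device, requires_grad=True)
    opt = torch.optim.SGD([delta], lr=lr)
    it = iter(loader)
    for _ in range(steps):
        try:
            x, y = next(it)
        except StopIteration:
            it = iter(loader)
            x, y = next(it)
        x, y = x.to(device), y.to(device)
        # Maximise the classification error on the perturbed batch.
        loss = -torch.nn.functional.cross_entropy(model(x + delta), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the universal perturbation quasi-imperceptible
    return delta.detach()
```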

This is how they look for different convolutional weighted networks:

Declassifier

https://thephotographersgallery.org.uk/declassifier

In a way this project is very close to what we do at adversarial.io. Philipp Schmitt’s Declassifier uses a computer vision algorithm trained on COCO (Common Objects in Context), an image dataset appropriated from Flickr users by Microsoft in 2014.

Within Schmitt’s original photographs, certain objects are identified. These regions are then overlaid with images showing the same kind of objects, taken from the COCO dataset on which the neural network was originally trained. “If a car is identified in one of the photographs, all the cars included in the dataset that trained the algorithm surface on top of it.” (The Photographers’ Gallery)
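Mechanically, the idea can be sketched roughly like this (my own approximation using a COCO-pretrained torchvision detector, not Schmitt’s actual code; the path, threshold and sample images are placeholders):

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# COCO category names shipped with the pretrained detection weights.
COCO_CATEGORIES = torchvision.models.detection.FasterRCNN_ResNet50_FPN_Weights.DEFAULT.meta["categories"]

def declassify(photo_path, samples_by_category, score_thresh=0.7):
    """Detect COCO objects in a photo, then paste dataset images of the same
    category over each detected region (simplified sketch of the Declassifier idea)."""
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
    img = Image.open(photo_path).convert("RGB")
    with torch.no_grad():
        pred = model([to_tensor(img)])[0]
    for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
        if score < score_thresh:
            continue
        name = COCO_CATEGORIES[label]
        if name not in samples_by_category:
            continue
        x0, y0, x1, y1 = map(int, box.tolist())
        # Overlay a training-set image of the same category on the detected region.
        patch = samples_by_category[name].resize((x1 - x0, y1 - y0))
        img.paste(patch, (x0, y0))
    return img
```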

It takes a while to grasp what’s going on, since this project leans to the artsy side. I loved playing around with it.

When you click on the images, a certificate for the original photographic contribution is issued, identifying the original contributor (whose participation gets lost within the dataset).

Certificate

Debunking AI Myths

AImyths.org does just that: it looks into several claims about AI and then, step by step, corrects or debunks them. A recommended read!