Invisibility Cloak

This stylish pullover is a great way to stay warm this winter, whether in the office or on the go. It features a stay-dry microfleece lining, a modern fit, and adversarial patterns that evade most common object detectors. In this demonstration, the YOLOv2 detector is evaded using a pattern trained on the COCO dataset with a carefully constructed objective.

The paper “Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors” by
Zuxuan Wu, Ser-Nam Lim, Larry S. Davis, and Tom Goldstein is online at https://arxiv.org/abs/1910.14667
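Roughly, the idea is to optimize a printable patch so that the detector’s confidence collapses for anyone wearing it. Below is a minimal sketch of that kind of objective, not the paper’s actual code; yolo, apply_patch and loader are placeholders.

    # Minimal sketch of an adversarial-patch objective, assuming a YOLO-style
    # detector whose raw output carries an objectness score per candidate box.
    # yolo, apply_patch and loader are placeholders, not the paper's code.
    import torch

    patch = torch.rand(3, 300, 300, requires_grad=True)    # the pattern to be printed
    opt = torch.optim.Adam([patch], lr=0.01)

    for images, person_boxes in loader:                     # COCO images containing people
        patched = apply_patch(images, patch, person_boxes)  # paste the patch onto each person
        detections = yolo(patched)                          # (batch, boxes, 5 + classes)
        objectness = detections[..., 4]                     # confidence that a box holds an object
        loss = objectness.max(dim=-1).values.mean()         # suppress the strongest detection
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            patch.clamp_(0, 1)                              # keep the patch a printable image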

Partially funded by Facebook AI


Adversarial Clothes

Italian start-up ‘Capable’, founded by Rachele Didero, Federica Busani and Giovanni Maria Conti, provides beautiful, rather pricey, adversarial clothes.

https://www.capable.design/shop


Image-Scaling Attacks in Machine Learning

https://scaling-attacks.net/

Before an image can be funneled through a weighted network it needs to be scaled down. Resolutions like 3000x2000 pixels are too large to be processed in computer vision. Current weighted networks operate at 128x128px or at similar resolutions, mostly below 300x300px.

Researchers at TU Braunschweig found that this downscaling step opens an opportunity for adversarial pixels: introduced into the larger originals at strategic points, they disturb the scaling down of the image.
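The principle is easy to demonstrate: only a small fraction of the high-resolution pixels ever reaches the network, so an attacker only needs to overwrite exactly those. Here is a minimal, self-contained sketch using plain nearest-neighbour downscaling; real resize kernels differ in the details.

    # Minimal sketch of the image-scaling attack principle, using plain
    # nearest-neighbour downscaling. Real resize kernels differ in detail.
    import numpy as np

    def downscale_nearest(img, out_h, out_w):
        """Downscale by sampling one source pixel per output pixel."""
        in_h, in_w = img.shape[:2]
        rows = np.arange(out_h) * in_h // out_h
        cols = np.arange(out_w) * in_w // out_w
        return img[rows[:, None], cols[None, :]]

    rng = np.random.default_rng(0)
    source = rng.integers(0, 256, size=(2000, 3000, 3), dtype=np.uint8)  # "innocent" photo
    target = rng.integers(0, 256, size=(128, 128, 3), dtype=np.uint8)    # what the model should see

    # Overwrite only the pixels the downscaler will actually sample.
    rows = np.arange(128) * 2000 // 128
    cols = np.arange(128) * 3000 // 128
    attacked = source.copy()
    attacked[rows[:, None], cols[None, :]] = target

    # Well under one percent of the high-resolution pixels were touched ...
    print(f"pixels modified: {128 * 128 / (2000 * 3000):.2%}")
    # ... yet the 128x128 input that reaches the network is exactly the target.
    assert np.array_equal(downscale_nearest(attacked, 128, 128), target)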


Universal Adversarials

These are adversarial attacks on several deep neural networks where a single universal adversarial perturbation can fool a model on an entire set of affected inputs. It reaches around a 90% evasion rate on undefended ImageNet-pretrained networks. The work is by Kenneth T. Co, Luis Muñoz-González, Leslie Kanthan, Ben Glocker and Emil C. Lupu and is described in a paper here: https://arxiv.org/abs/1911.10364

For more, check this GitHub repository: https://github.com/kenny-co/sgd-uap-torch#universal-adversarial-perturbations-on-pytorch

This is how they look for different convolutional weighted networks:
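The recipe behind such a perturbation is plain stochastic gradient descent over a single shared delta. A minimal sketch of the idea follows; it is not the repository’s exact code, and the choice of pretrained model, the loader of labelled ImageNet batches and the budget are assumptions.

    # Minimal sketch of an SGD-trained universal adversarial perturbation (UAP).
    # Pretrained model, `loader` and the budget `eps` are assumptions; input
    # normalization and clipping are omitted for brevity.
    import torch
    import torchvision

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = torchvision.models.resnet50(weights="IMAGENET1K_V1").to(device).eval()

    eps = 10 / 255                                        # L-infinity budget
    delta = torch.zeros(1, 3, 224, 224, device=device, requires_grad=True)
    opt = torch.optim.SGD([delta], lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    for epoch in range(5):
        for images, labels in loader:                     # clean, labelled ImageNet batches
            images, labels = images.to(device), labels.to(device)
            logits = model(images + delta)                # the same delta for every image
            loss = -loss_fn(logits, labels)               # maximize classification error
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)                   # stay inside the perturbation budget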


Declassifier


https://thephotographersgallery.org.uk/declassifier

In a way this project is very close to what we do at adversarial.io. Philipp Schmitt’s Declassifier uses a computer vision algorithm trained on COCO (Common Objects in Context), an image dataset appropriated from Flickr users by Microsoft in 2014.

Within Schmitt’s original photographs certain objects get identified. These regions are then overlaid with images showing the same kind of object, taken from the COCO dataset on which the neural network was originally trained. “If a car is identified in one of the photographs, all the cars included in the dataset that trained the algorithm surface on top of it.” (The Photographers’ Gallery)
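Roughly, the pipeline is: detect objects with a COCO-trained model, then cover each detection with dataset images of the same class. The following is a minimal sketch of that idea, not Schmitt’s actual code; coco_examples is a hypothetical lookup from a class id to an example image from the dataset, and the detector is an off-the-shelf COCO-trained model.

    # Minimal sketch of the Declassifier idea, not Philipp Schmitt's actual code.
    # coco_examples() is a hypothetical lookup returning a PIL image of a given
    # COCO class; the detector is an off-the-shelf COCO-trained model.
    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

    photo = Image.open("photo.jpg").convert("RGB")          # placeholder path
    with torch.no_grad():
        out = model([to_tensor(photo)])[0]                  # boxes, labels, scores

    for box, label, score in zip(out["boxes"], out["labels"], out["scores"]):
        if score < 0.7:                                     # keep confident detections only
            continue
        x0, y0, x1, y1 = (int(v) for v in box)
        overlay = coco_examples(int(label))                 # hypothetical: image of the same class
        photo.paste(overlay.resize((x1 - x0, y1 - y0)), (x0, y0))

    photo.save("declassified.jpg")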

It takes a while to grasp what’s going on, since this project leans to the more artsy side. I loved playing around with it.

When you click on the images, a certificate for the original contribution of photography is issued, identifying the original contributor (whose participation gets lost within the dataset).


Debunking AI Myths

AImyths.org does just that: it looks into several claims about AI and then, step by step, corrects or debunks them. A recommended read!


Omitted Labels

Red-highlighted objects/persons were missing from a dataset crucial for autonomous driving.

Brad Dwyer found a lot of missing or omitted labels in a set that is used for training and testing autonomous driving systems. »We did a hand-check of the 15,000 images in the widely used Udacity Dataset 2 and found problems with 4,986 (33%) of them.« Since this is an open-source dataset used primarily for educational purposes, but, as the author found out, obviously also for test cars on public streets, he published a corrected set at https://public.roboflow.ai/object-detection/self-driving-car

Beyond Dwyer’s honorable work, these omissions lead to the larger question of the reliability of the many datasets that are being used for training.
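One part of such a hand-check is easy to automate: flagging frames that carry no annotations at all. A minimal sketch follows; the file names and column names are assumptions, not the actual Udacity/Roboflow schema.

    # Minimal sketch of one such sanity check: flag frames without any bounding
    # boxes. File names and column names are assumptions, not the real schema.
    import pandas as pd

    images = pd.read_csv("images.csv")            # hypothetical: one row per frame
    labels = pd.read_csv("annotations.csv")       # hypothetical: one row per bounding box

    labelled = set(labels["filename"])
    unlabelled = [f for f in images["filename"] if f not in labelled]
    print(f"{len(unlabelled)} of {len(images)} frames have no annotations at all")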


Face-recognition respirator masks

Danielle Baskin created a website that uses computational mapping to convert facial features into an image printed onto the surface of N95 surgical masks without distortion. It is a reaction to the coronavirus epidemic and lets you unlock (aka trick) the face ID mechanisms of smartphones.

https://faceidmasks.com


Tricking OpenCV

KodyKinzie: »Confirming critical facial recognition research by @tahkion regarding #juggalo makeup defeating detection and recognition using the #esp32 and #face_recognition/#openCV Python libraries. Results seem conclusive.«

Make sure to read the thread: https://twitter.com/KodyKinzie/status/1230732317515120646
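For reference, the detection step being defeated looks roughly like this, using the face_recognition library mentioned in the thread; the image path is a placeholder.

    # Minimal sketch of the detection step that the makeup defeats, using the
    # face_recognition library mentioned in the thread. The image path is a
    # placeholder.
    import face_recognition

    image = face_recognition.load_image_file("portrait.jpg")
    boxes = face_recognition.face_locations(image, model="hog")  # default HOG-based detector
    print(f"faces found: {len(boxes)}")  # with heavy juggalo-style makeup this can drop to 0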


Obfuscation of data through using group accounts

Teenagers have come up with elaborate schemes to share Instagram accounts and produce obfuscating data, in order to look at whatever they want to look at without being tracked individually.

»Each time she refreshed the Explore tab, it was a completely different topic, none of which she was interested in. That’s because Mosley wasn’t the only person using this account — it belonged to a group of her friends, at least five of whom could be on at any given time. Maybe they couldn’t hide their data footprints, but they could at least leave hundreds behind to confuse trackers.« Alfred Ng on Cnet.com

Read the full article here: https://www.cnet.com/news/teens-have-figured-out-how-to-mess-with-instagrams-tracking-algorithm/


Paint Your Face Away workshop

Paint Your Face Away is a drop-in digital face painting workshop by Shinji Toya. The development of the digital face painting tool for this session has been inspired by Frank Bowling’s paintings. Participants use the painter to create their profile pictures while real-time face detection runs on the image being painted, until at some point the picture stops being detected by the computer vision system. In this way, the digital paint acts as a type of disruptive noise for the machine.

Read further at https://shinjitoya.com/paint-your-face-away/
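The underlying loop is simple to picture: keep painting, keep re-running a face detector, and stop once nothing is detected any more. Here is a minimal sketch using OpenCV’s bundled Haar-cascade detector; Toya’s tool is an interactive web painter, so the “paint strokes” below are only simulated.

    # Minimal sketch of the detect-while-painting loop, using OpenCV's bundled
    # Haar-cascade face detector. A "paint stroke" is simulated by blotting a
    # small patch over the detected face region.
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    img = cv2.imread("profile.jpg")                      # placeholder path

    for stroke in range(500):                            # cap the number of strokes
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            print(f"face no longer detected after {stroke} paint strokes")
            break
        x, y, w, h = faces[0]                            # paint over the first detected face
        cx = int(x + (stroke * 37) % max(w, 1))
        cy = int(y + (stroke * 53) % max(h, 1))
        cv2.circle(img, (cx, cy), 15, (200, 60, 180), thickness=-1)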



Google Maps Traffic Jam


Artist Simon Weckert generates poison data by transporting 99 second-hand smartphones in a handcart, creating a virtual traffic jam in Google Maps. Through this activity he shows that it is possible to turn a green street red. This in turn has an impact on the physical world by navigating cars onto another route to avoid being stuck in traffic. Simon, U Rock!

https://www.simonweckert.com/googlemapshacks.html



