Image-Scaling Attacks in Machine Learning

Before an image can be fed through a neural network it needs to be scaled down. Resolutions like 3000x2000 pixels are too large to be processed in computer vision. Current neural networks operate at 128x128px or similar resolutions, mostly below 300x300px.

Researchers at TU Braunschweig found that the downscaling process offers an opportunity for adversarial pixels. Introduced into the larger originals at strategic points, they disturb the scaling down of the image.
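A minimal numpy sketch of the idea, assuming nearest-neighbor downscaling (real pipelines use bilinear or bicubic kernels, where the same attack targets the pixels with non-zero kernel weight): only every k-th source pixel survives the resize, so an attacker can overwrite exactly those pixels and control what the small image looks like while leaving most of the large original untouched.

```python
import numpy as np

def nearest_downscale(img, out_size):
    """Nearest-neighbor downscaling: only every k-th source pixel survives."""
    h, w = img.shape[:2]
    ys = np.arange(out_size) * h // out_size
    xs = np.arange(out_size) * w // out_size
    return img[np.ix_(ys, xs)]

def plant_attack(src, target):
    """Overwrite only the pixels that downscaling will sample, so the
    small output becomes `target` while most of `src` looks unchanged."""
    out = src.copy()
    h, w = src.shape[:2]
    t = target.shape[0]
    ys = np.arange(t) * h // t
    xs = np.arange(t) * w // t
    out[np.ix_(ys, xs)] = target
    return out

# a large benign image and a small "target" the attacker wants the model to see
big = np.zeros((512, 512), dtype=np.uint8)        # all black
target = np.full((64, 64), 255, dtype=np.uint8)   # all white

attacked = plant_attack(big, target)
# attacked still looks almost entirely black (1/64 of pixels changed),
# yet its downscaled version is the all-white target image
```

The ratio matters: the larger the gap between source and target resolution, the smaller the fraction of pixels the attacker has to touch, and the less visible the attack is.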

Universal Adversarials

These are adversarial attacks on deep neural networks where a single universal adversarial perturbation can fool a model on an entire set of inputs. The authors report around a 90% evasion rate on undefended ImageNet-pretrained networks. The work is by Kenneth T. Co, Luis Muñoz-González, Leslie Kanthan, Ben Glocker and Emil C. Lupu, and is described in a paper here:

For more, check this GitHub repository:

This is how they look for different convolutional neural networks:
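Crafting the perturbation requires an optimization over a training set (see the paper); this numpy sketch only shows the defining property, which is that a single fixed delta, bounded in L-infinity norm, is applied unchanged to every input:

```python
import numpy as np

def apply_universal(images, delta, eps=0.1):
    """Add ONE fixed perturbation `delta` to every image in the batch.
    The perturbation is clipped to an L-infinity ball of radius `eps`,
    and the result is clipped back to the valid pixel range [0, 1]."""
    delta = np.clip(delta, -eps, eps)
    return np.clip(images + delta, 0.0, 1.0)

batch = np.random.rand(8, 32, 32, 3)                  # stand-in for a batch of inputs
delta = np.random.uniform(-1, 1, (32, 32, 3)) * 0.05  # would be optimized, not random
perturbed = apply_universal(batch, delta)
```

This is what makes the attack "universal": unlike per-image attacks, the same small pattern works across the whole dataset, so it can be computed once and printed, overlaid, or reused offline.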



In a way this is a project very close to what we do here. Philipp Schmitt's Declassifier uses a computer vision algorithm trained on COCO (Common Objects in Context), an image dataset appropriated from Flickr users by Microsoft in 2014.

Within Schmitt's original photographs, certain objects get identified. These regions are overlaid with images showing the same kind of objects, taken from the COCO dataset on which the neural network was originally trained. "If a car is identified in one of the photographs, all the cars included in the dataset that trained the algorithm surface on top of it." (The Photographers' Gallery)
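The mechanics can be sketched roughly like this; `detections` stands in for the output of a COCO-trained detector and `coco_examples` for training-set images of each class (both names are hypothetical, not from the project):

```python
import numpy as np

def overlay_dataset_images(photo, detections, coco_examples):
    """Cover each detected object's bounding box with an example image
    of the same class, drawn from the training set."""
    out = photo.copy()
    for label, (y0, x0, y1, x1) in detections:
        patch = coco_examples[label]
        # naive nearest-neighbor resize of the example image to the box size
        h, w = y1 - y0, x1 - x0
        ys = np.arange(h) * patch.shape[0] // h
        xs = np.arange(w) * patch.shape[1] // w
        out[y0:y1, x0:x1] = patch[np.ix_(ys, xs)]
    return out

# toy inputs: a grayscale photo, one detected "car", and one example
# "car" image standing in for the training data
photo = np.zeros((100, 100))
detections = [("car", (10, 10, 30, 40))]
coco_examples = {"car": np.ones((50, 50))}
result = overlay_dataset_images(photo, detections, coco_examples)
```

Declassifier cycles through many training images per box rather than pasting a single one, but the principle is the same: the model's training data literally resurfaces on top of the photograph.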

It takes a while to grasp what's going on, since this project leans to the artsy side. I loved playing around with it.

When you click on the images, a certificate for the original contribution of photography is issued, identifying the original contributor (whose participation gets lost within the dataset).


Debunking AI Myths does just that: it looks into several claims about AI and then corrects or debunks them step by step. A recommended read!

Omitted Labels

Red-highlighted objects/persons were missing in a dataset crucial for autonomous driving

Brad Dwyer found a lot of missing or omitted labels in a set that is used for training and testing autonomous driving systems. »We did a hand-check of the 15,000 images in the widely used Udacity Dataset 2 and found problems with 4,986 (33%) of them.« Since this is an open-source dataset used primarily for educational purposes, but, as the author found out, obviously also for test cars on public streets, he published a corrected set at
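The crudest version of such an audit is easy to automate. Sketched here against a COCO-style annotation layout (the data below is made up): list every image that has no annotations at all. This catches wholesale omissions, though Dwyer's hand-check also found partially labeled images, which requires per-object ground truth to detect.

```python
def images_missing_labels(dataset):
    """Return ids of images that have no annotations at all."""
    annotated = {a["image_id"] for a in dataset["annotations"]}
    return [img["id"] for img in dataset["images"] if img["id"] not in annotated]

# tiny in-memory example in a COCO-style layout (hypothetical data);
# real datasets ship this structure as a JSON file
dataset = {
    "images": [{"id": 1}, {"id": 2}, {"id": 3}],
    "annotations": [{"image_id": 1, "category_id": 3},
                    {"image_id": 3, "category_id": 1}],
}
print(images_missing_labels(dataset))  # [2]
```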

Beyond Dwyer's honorable work, these omissions lead to the larger question of the reliability of many datasets which are being used for training.

Face-recognition respirator masks

Danielle Baskin created a website for computational mapping to convert facial features into an image printed onto the surface of N95 surgical masks without distortion. It is a reaction to the coronavirus epidemic and allows one to unlock (i.e. trick) the face ID features of smartphones.

Tricking OpenCV

KodyKinzie: »Confirming critical facial recognition research by @tahkion regarding #juggalo makeup defeating detection and recognition using the #esp32 and #face_recognition/#openCV Python libraries. Results seem conclusive.«

Make sure to read the thread:

Obfuscation of data through group accounts

Teenagers have come up with elaborate schemes to share Instagram accounts and produce obfuscating data, in order to look at whatever they want to look at without being tracked individually.

»Each time she refreshed the Explore tab, it was a completely different topic, none of which she was interested in. That's because Mosley wasn't the only person using this account — it belonged to a group of her friends, at least five of whom could be on at any given time. Maybe they couldn't hide their data footprints, but they could at least leave hundreds behind to confuse trackers.« Alfred Ng on

Read the full article here:

Paint Your Face Away workshop

Paint Your Face Away is a drop-in digital face painting workshop by Shinji Toya. The development of the digital face painting tool for this session was inspired by Frank Bowling's paintings. Participants use the painter to create their profile pictures while real-time face detection runs on the image of the face being painted, so that at some point the profile picture stops being detected by the computer vision system. In this way, the digital paint acts as a kind of disruptive noise for the machine.
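The feedback loop behind the workshop can be sketched in a few lines. Toya's tool uses a real face detector; `detect_face` below is an invented toy placeholder (a contrast check), and the "strokes" are plain flat squares, just to show the paint-until-undetected structure:

```python
import numpy as np

def detect_face(img):
    """Toy stand-in for a real face detector: reports a face as 'present'
    while the central region still has enough pixel contrast."""
    h, w = img.shape
    center = img[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
    return center.std() > 10

def paint_until_undetected(img, rng, paint=128.0, max_strokes=1000):
    """Keep adding square paint strokes until the detector stops firing."""
    out = img.astype(float).copy()
    for strokes in range(max_strokes):
        if not detect_face(out):
            return out, strokes
        # one 'stroke': flatten a random 8x8 patch to the paint color
        y = int(rng.integers(0, out.shape[0] - 8))
        x = int(rng.integers(0, out.shape[1] - 8))
        out[y:y + 8, x:x + 8] = paint
    return out, max_strokes

rng = np.random.default_rng(0)
face = rng.uniform(0, 255, (64, 64))   # stand-in for a face image
painted, strokes = paint_until_undetected(face, rng)
```

The interesting part in the workshop is that the human decides where to paint, so the image stays a portrait to people while becoming noise to the machine; the loop above only automates the stopping condition.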

Read further at

Google Maps Traffic Jam

Artist Simon Weckert generated poisoned data by transporting 99 second-hand smartphones in a handcart, creating a virtual traffic jam in Google Maps. Through this activity he shows that it is possible to turn a green street red. This in turn has an impact on the physical world, as cars get navigated onto other routes to avoid being stuck in traffic. Simon U Rock!


Umbrellas are practical when it comes to avoiding automated face recognition from CCTV et cetera, since they are everyday items and can't be effectively banned by authorities.

Protesters using umbrellas spray-paint a surveillance camera in Hong Kong in July 2019

Generated Faces

Icons8 product designer Konstantin Zhabinskiy worked on a project generating 100k faces (using GANs) from a total of 29,000 photographs they shot in-house. This has the advantage of consistent lighting and of being able to photograph different angles of the same face.

For the time being they have open-sourced a large dataset, hoping for traction. It can be used for avatar images and such. So if you ever wanted to pretend you look like a model (no wrinkles, perfect lighting, symmetric eyes, only a few GAN glitches), go ahead and use them for your account.
