Image-Scaling Attacks in Machine Learning

Before an image can be funneled through a neural network it needs to be scaled down. Resolutions like 3000x2000 pixels are too large to be processed in computer vision. Current networks operate at 128x128px or similar resolutions, mostly below 300x300px.
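This preprocessing step can be sketched in a few lines with Pillow; the image here is generated in memory as a stand-in for a real high-resolution photo, and the 128x128 target size is just one typical example:

```python
from PIL import Image

# Stand-in for a high-resolution original (placeholder for a real photo file).
img = Image.new("RGB", (3000, 2000), color=(120, 60, 30))

# Downscale to a typical model input resolution before inference.
small = img.resize((128, 128), Image.BILINEAR)
```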

Researchers at TU Braunschweig found that this downscaling process offers an opportunity for adversarial pixels. Introduced into the larger originals at strategic points, they disturb the downscaling of the image.
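The core idea can be shown with a toy example. The sketch below assumes plain nearest-neighbour downscaling, which samples only one source pixel per output pixel; the real attack targets bilinear and bicubic kernels with an optimization procedure, but the principle is the same. By changing only the pixels the scaler will actually sample (here about 1.6% of the image), the attacker fully controls what the model sees:

```python
import numpy as np

def nearest_downscale(img, out_h, out_w):
    """Nearest-neighbour downscaling: sample one source pixel per output pixel."""
    h, w = img.shape
    ys = np.arange(out_h) * h // out_h
    xs = np.arange(out_w) * w // out_w
    return img[np.ix_(ys, xs)]

src = np.zeros((1024, 1024), dtype=np.uint8)       # innocent-looking high-res image
target = np.full((128, 128), 255, dtype=np.uint8)  # what the attacker wants the model to see

# Place adversarial pixels only at the positions the scaler will sample.
ys = np.arange(128) * 1024 // 128
xs = np.arange(128) * 1024 // 128
attacked = src.copy()
attacked[np.ix_(ys, xs)] = target

# Only 1/64 of the pixels changed, yet the downscaled result is the target image.
assert np.array_equal(nearest_downscale(attacked, 128, 128), target)
```

At full resolution the attacked image still looks almost identical to the original, which is what makes the attack hard to spot by eye.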

Universal Adversarials

These are adversarial attacks on several deep neural networks where a single universal adversarial perturbation can fool a model on an entire set of affected inputs, achieving around a 90% evasion rate on undefended ImageNet-pretrained networks. The attack was developed by Kenneth T. Co, Luis Muñoz-González, Leslie Kanthan, Ben Glocker and Emil C. Lupu, and is described in a paper here:
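What makes these perturbations "universal" is that one fixed delta is computed once and then added to every input. The sketch below only illustrates that application step; the delta here is random noise as a placeholder, whereas a real universal perturbation is optimized over a training set within a small L-infinity budget:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder perturbation: a real UAP would be optimized, not random.
# 8/255 is a common L-infinity budget for images in [0, 1].
delta = rng.uniform(-8 / 255, 8 / 255, size=(224, 224, 3))

def perturb(image):
    """Add the same universal perturbation to any image, keeping a valid range."""
    return np.clip(image + delta, 0.0, 1.0)

batch = rng.uniform(0, 1, size=(4, 224, 224, 3))
adv = np.stack([perturb(x) for x in batch])
```

Because the same delta works across many inputs, the attacker pays the optimization cost once and can then perturb new images essentially for free.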

For more, check this GitHub repository:

This is how they look for different convolutional neural networks:



In a way, this project is very close to what we do. Philipp Schmitt's Declassifier uses a computer vision algorithm trained on COCO (Common Objects in Context), an image dataset appropriated from Flickr users by Microsoft in 2014.

Within Schmitt's original photographs, certain objects get identified. These regions are overlaid with images that show the same kind of objects and belong to the COCO dataset on which the neural network was originally trained. "If a car is identified in one of the photographs, all the cars included in the dataset that trained the algorithm surface on top of it." (The Photographers' Gallery)
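The mechanism behind this overlay can be sketched as below. Everything here is hypothetical: the detections stand in for what a COCO-trained object detector might return, and the solid-color patches stand in for actual COCO sample images:

```python
from PIL import Image

# Hypothetical photo and detector output: (label, bounding box) pairs.
photo = Image.new("RGB", (640, 480), (200, 200, 200))
detections = [("car", (50, 100, 250, 220)), ("person", (300, 80, 400, 400))]

# Stand-ins for images from the COCO training set, keyed by label.
coco_samples = {
    "car": Image.new("RGB", (64, 64), (255, 0, 0)),
    "person": Image.new("RGB", (64, 64), (0, 0, 255)),
}

for label, (x0, y0, x1, y1) in detections:
    # Surface a training image of the same class on top of the detected region.
    patch = coco_samples[label].resize((x1 - x0, y1 - y0))
    photo.paste(patch, (x0, y0))
```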

It takes a while to grasp what's going on, since this project leans to the artsy side. I loved playing around with it.

When you click on the images, a certificate for the original contribution of photography is issued, identifying the original contributor (whose participation gets lost within the dataset).


Debunking AI Myths does just that: looking into several claims about AI and then correcting or debunking them step by step. A recommended read!