Umbrellas

Umbrellas are practical when it comes to avoiding automated face recognition from CCTV and the like, since they are everyday items and can’t be effectively banned by authorities.

Protesters spray-paint a surveillance camera in Hong Kong in July 2019, using umbrellas.


Generated Faces

Icons8 product designer Konstantin Zhabinskiy worked on a project that generated 100k faces (using GANs) from a total of 29,000 photographs they shot in-house. This has the advantage of consistent lighting and of being able to photograph the same face from different angles.

For the time being they have open-sourced a large dataset, hoping for traction. It can be used for avatar images and such – so if you ever wanted to pretend you look like a model, with no wrinkles, perfect lighting, symmetric eyes and only a few GAN glitches, go ahead and use the faces for your account.
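Icons8 has not published its generation pipeline, but the underlying idea – sample random latent vectors and let a pretrained generator decode them into photorealistic faces – fits into a few lines. The sketch below is an illustration only: it assumes the publicly available PGAN model from PyTorch GAN Zoo (trained on CelebA-HQ, not on Icons8’s in-house photos).

import torch
from torchvision.utils import save_image
# Progressive GAN pretrained on CelebA-HQ, loaded via torch.hub
# (assumption: this stands in for Icons8's own, unpublished generator)
model = torch.hub.load('facebookresearch/pytorch_GAN_zoo:hub', 'PGAN',
                       model_name='celebAHQ-256', pretrained=True, useGPU=False)
num_faces = 4
noise, _ = model.buildNoiseData(num_faces)   # random latent vectors
with torch.no_grad():
    faces = model.test(noise)                # decode latents into face images
# rescale from [-1, 1] to [0, 1] and write the results to disk
save_image(faces.clamp(-1, 1) * 0.5 + 0.5, 'generated_faces.png', nrow=2)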


Jewelry

“Incognito” is an anti-recognition jewelry mask by Warsaw-based design studio NOMA (https://noma-studio.pl/en/incognito/). It reverses the nose-eye relation, and that’s what we like about it. One could definitely go out on the street with this.


Covered

This creation by London-based designer Richard Quinn gets you fully covered. It got some traction, since Cardi B appeared at Paris Fashion Week in one of his body and face covers. Maybe a motorcycle helmet would still be obfuscating enough, but would you want to wear it at fashion week?


Anti Recognition Mask

Anti-recognition mask by designer collective NOMA, Warsaw, https://noma-studio.pl


Surveillance Detection Scout

»Surveillance Detection Scout is a hardware and software stack that makes use of your Tesla’s cameras to tell you if you’re being followed in real-time. The name, as you likely gathered, pays homage to the ever-effective Surveillance Detection Route. When parked, Scout makes an excellent static surveillance practitioner as well, allowing you to run queries and establish patterns-of-life on detected persons.«

To do this, researcher Truman Kain uses FaceNet image recognition and plugs into Tesla’s public API. For license plate recognition he uses ALPR. To save the imagery created by the three Tesla front cameras, he uses a piece of software called TeslaUSB.
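Scout’s own code is not reproduced here, but the face-matching step it relies on – embed each detected face with FaceNet and check whether the same face keeps turning up over time – looks roughly like the sketch below. It uses the facenet-pytorch library; the frame filenames and the distance threshold are illustrative assumptions, not Scout’s actual values.

import torch
from PIL import Image
from facenet_pytorch import MTCNN, InceptionResnetV1
mtcnn = MTCNN(image_size=160)                              # face detector/cropper
resnet = InceptionResnetV1(pretrained='vggface2').eval()  # FaceNet embedder
def embed(path):
    """Return a 512-d embedding for the first face in an image, or None."""
    face = mtcnn(Image.open(path).convert('RGB'))
    if face is None:
        return None
    with torch.no_grad():
        return resnet(face.unsqueeze(0))[0]
# hypothetical frames taken a few minutes apart from the car's cameras
first = embed('frame_0001.jpg')
later = embed('frame_0900.jpg')
if first is not None and later is not None:
    distance = (first - later).norm().item()
    # distances around 1.0 are a common rule-of-thumb threshold for FaceNet
    print('same person still behind us?', distance < 1.0, 'distance:', distance)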

Wired author Andy Greenberg notes:

»Kain, a consultant for the security firm Tevora, also isn’t oblivious to his creation’s creep factor. He says the Surveillance Detection Scout demonstrates the kind of surveillance the data that self-driving cars already collect could enable.«

To adversarial.io this presents a use case where you would want adversarial patches on license plates (if that is not forbidden by law, because it presents some kind of obfuscation) and of course to wear an adversarial t-shirt of some kind… This case also reminds me of the speculation that Uber might at some point make its cars more profitable by using them as data collection drones.


City of San Francisco sets limits on face recognition

In an unprecedented move, the City of San Francisco has decided that new face recognition projects by the city itself have to be run through its Board of Supervisors. See the draft law here: https://sfgov.legistar.com/View.ashx?M=F&ID=7206781&GUID=38D37061-4D87-4A94-9AB3-CB113656159A

That means it doesn’t completely ban face recognition, as some media suggested, but develops a policy that puts the acquisition of face recognition technology by the city administration under control.

While surveillance technology may threaten the privacy of all of us, surveillance efforts have historically been used to intimidate and oppress certain communities and groups more than others, including those that are defined by a common race, ethnicity, religion, national origin, income level, sexual orientation, or political perspective.

FILE NO. 190110, Board of Supervisors, City of San Francisco, p. 1

Obviously the city has not banned face recognition technology in general, since that would affect every smartphone today. There is also a long list of exemptions:

Surveillance Technology does not include the following devices, hardware, or software: [long list of basic electronic infrastructure, incl. databases needed to run a city].

FILE NO. 190110, Board of Supervisors, City of San Francisco, p. 6

We need to talk, AI – A Comic Essay on Artificial Intelligence

https://weneedtotalk.ai/

Direct PDF download: weneedtotalkai.files.wordpress.com/2019/06/weneedtotalkai_cc.pdf … The comic essay by Julia Schneider and Lena Kadriye Ziyal gives a great entry-level overview for those who are less technically inclined yet still wonder what is behind the hype.

Unfortunately, what we see as a feature (aka adversarial noise), they see as a bug. But hey, this may change.


Fooling automated surveillance cameras: adversarial patches to attack person detection

Simen Thys, Wiebe Van Ranst, and Toon Goedemé from KU Leuven, Belgium, researched adversarial patches for moving images and came up with several patterns that disturb detection.

Their attack is directed against a specific real-time object detection system, YOLOv2: https://pjreddie.com/darknet/yolov2/

The person on the left gets detected as “person”; the person on the right, with an adversarial patch in front of his stomach, is not recognized automatically. Thys/Van Ranst/Goedemé 2019

Full paper at: https://arxiv.org/pdf/1904.08653.pdf
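The core of the attack is an optimization loop: render the patch onto images of people and push the detector’s person confidence down by gradient descent on the patch pixels (the paper adds printability and smoothness terms on top). The stripped-down sketch below only shows that mechanic; it substitutes torchvision’s Faster R-CNN for YOLOv2, assumes a hypothetical person.jpg of at least roughly 300×300 pixels, and pastes the patch at a fixed position.

import torch
import torchvision
from torchvision.io import read_image
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights='DEFAULT').eval()
image = read_image('person.jpg').float() / 255.0        # (3, H, W) in [0, 1]
patch = torch.rand(3, 100, 100, requires_grad=True)     # the patch being optimized
optimizer = torch.optim.Adam([patch], lr=0.01)
for step in range(200):
    attacked = image.clone()
    attacked[:, 200:300, 200:300] = patch.clamp(0, 1)   # paste patch onto the scene
    outputs = detector([attacked])[0]
    person = outputs['labels'] == 1                     # COCO class 1 = person
    if not person.any():
        break                                           # no person detected: attack succeeded
    loss = outputs['scores'][person].max()              # highest remaining person confidence
    optimizer.zero_grad()
    loss.backward()                                     # gradient w.r.t. the patch pixels
    optimizer.step()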


AI Portraits

Mauro Martino and Luca Stornaiuolo (MIT-IBM Watson AI Lab) have experimented with GANs to generate portraits of individuals. Basically you upload your own photo, the AI compares its features to the set of images on which it was trained (faces of actors and actresses), and then generates a new portrait.

It sounds like an interesting experiment, but early on we already note that this »faces of actors and actresses« dataset is going to be biased one way or another: towards race, towards gender, or towards certain beauty features most common among actors and actresses.

https://aiportraits.com/

The aim of this project, however, is not clear, even though the authors add some pseudo-critical comments to it:

The result is an image that examines the concept of identity, pushing the boundaries between the individual that recognizes herself/himself and the collection of faces from the society of spectacle that are sedimented in the neural network.

Martino/Stornaiuolo

So the question remains: what is gained through this project?


MegaPixels

https://megapixels.cc/datasets/

»MegaPixels is an independent art and research project by Adam Harvey and Jules LaPlace that investigates the ethics, origins, and individual privacy implications of face recognition image datasets and their role in the expansion of biometric surveillance technologies.«

It’s worth visiting this site because it introduces the datasets machine learning relies on, and it also raises the question of how researchers in this field can be called on to act ethically.

A short while later, Adam Harvey posted this on Twitter, a good example of how educating the public can shake up bad ethics:


How YouTube almost lost the battle

»Pedro Domingos, a professor of computer science at the University of Washington, said that artificial intelligence is much less sophisticated than many people believe, and Silicon Valley companies often portray their systems as more powerful than they actually are as they compete for business. In fact, even the most advanced artificial intelligence systems still are fooled in ways that a human would easily detect.«

https://www.washingtonpost.com/technology/2019/03/18/inside-youtubes-struggles-shut-down-video-new-zealand-shooting-humans-who-outsmarted-its-systems/?utm_term=.6d192ad26317


The wonderful world of false positives

YouTuber UnboxTherapy unlocks his phone’s face recognition with another phone showing his face.


Crowdsourcing without Open Sourcing

»Because anyone can contribute to its platform, it gets updated every day,« says the CEO. Nothing really new from an AI startup, despite making headlines with MIT Technology Review: the company Mapillary crowdsources common knowledge and capitalizes on it by converting it into valuable data that is then circulated out of the hands of the commons, where it was originally situated.

Mapillary uses crowdsourced imagery (that is, without paying for it) to create additional data that would help autonomous cars drive »more safely«. While MIT Technology Review tries to describe the company as a »Wikipedia of mapping«, it clearly is not. The company is privately owned and doesn’t give away the data in the sense of public knowledge (e.g. by donating it to OpenStreetMap). Parts of the data are accessible via an API, though, and temporarily free »for charities and for educational or personal use«.

The rather impudent marketing is acknowledged at the article’s end, which states: »This story was corrected to make clear the images are crowdsourced but the underlying code is not open source.«

https://www.technologyreview.com/s/612825/open-source-maps-should-help-driverless-cars-navigate-our-cities-more-safely/

Why does adversarial.io tackle this? The answer might lie in a text by Eykholt et al., Robust Physical-World Attacks on Deep Learning Models: https://arxiv.org/abs/1707.08945


Deepfake Video Detection

Press release: »SRI’s Spotting Audio-Visual Inconsistencies (SAVI) techniques detect tampered videos by identifying discrepancies between the audio and visual tracks. For example, the system can detect when lip synchronization is a little off or if there is an unexplained visual “jerk” in the video. Or it can flag a video as possibly tampered if the visual scene is outdoors, but analysis of the reverberation properties of the audio track indicates the recording was done in a small room.

This video shows how the SAVI system detects speaker inconsistencies. First, the system detects the person’s face, tracks it throughout the video clip, and verifies it is the same person for the entire clip. It then detects when she is likely to be speaking by tracking when she is moving her mouth appropriately.«
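A minimal version of the lip-sync check boils down to correlating two time series: how far the mouth opens in each video frame versus the short-time loudness of the audio. SAVI’s real pipeline is far more elaborate; in this sketch the two signals are assumed to be pre-extracted into hypothetical .npy files, and the threshold is an arbitrary illustration.

import numpy as np
mouth_openness = np.load('mouth_openness.npy')   # one value per video frame
audio_envelope = np.load('audio_envelope.npy')   # short-time audio energy
# resample the audio envelope onto the video frame timeline
frames = len(mouth_openness)
audio_on_frames = np.interp(
    np.linspace(0, 1, frames),
    np.linspace(0, 1, len(audio_envelope)),
    audio_envelope,
)
# in genuine speech, mouth motion and loudness should rise and fall together
correlation = np.corrcoef(mouth_openness, audio_on_frames)[0, 1]
print('audio/visual correlation:', correlation)
if correlation < 0.3:
    print('possible tampering: lips and sound do not move together')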

