City of San Francisco sets limits on face recognition

In an unprecedented move, the City of San Francisco has decided that new face recognition projects by the city itself have to be approved by its Board of Supervisors. See the draft law here: https://sfgov.legistar.com/View.ashx?M=F&ID=7206781&GUID=38D37061-4D87-4A94-9AB3-CB113656159A

This means it doesn’t completely ban face recognition, as some media suggested, but rather establishes a policy that puts the acquisition of face recognition technology by the city administration under control.

While surveillance technology may threaten the privacy of all of us, surveillance efforts have historically been used to intimidate and oppress certain communities and groups more than others, including those that are defined by a common race, ethnicity, religion, national origin, income level, sexual orientation, or political perspective.

FILE NO. 190110, Board of Supervisors, City of San Francisco, p. 1

Obviously the city has not banned face recognition technology in general, since that would affect every smartphone today. There is also a long list of exemptions:

Surveillance Technology does not include the following devices, hardware, or software: [long list of basic electronic infrastructure, incl. databases needed to run a city].

FILE NO. 190110, Board of Supervisors, City of San Francisco, p. 6

We need to talk AI – A Comic Essay on Artificial Intelligence

https://weneedtotalk.ai/

Direct PDF download: weneedtotalkai.files.wordpress.com/2019/06/weneedtotalkai_cc.pdf … The comic by Julia Schneider and Lena Kadriye Ziyal gives a great entry-level overview for those who are less technically inclined yet still wonder what is behind the hype.

Unfortunately, what we see as a feature (aka adversarial noise), they see as a bug. But hey, this may change.


Fooling automated surveillance cameras: adversarial patches to attack person detection

Simen Thys, Wiebe Van Ranst, and Toon Goedemé from KU Leuven (Belgium) researched adversarial patches for moving images and came up with several patterns that disrupt detection.

Their attack is directed against a specific object detection model, YOLOv2: https://pjreddie.com/darknet/yolov2/

The person on the left is detected as “person”; the person on the right, holding an adversarial patch in front of his stomach, is not recognized automatically. Simen Thys/Van Ranst/Goedemé 2019

Full paper at: https://arxiv.org/pdf/1904.08653.pdf
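
The core idea can be sketched roughly like this (an illustration, not the authors’ actual code): the patch is treated as a small image whose pixels are optimized by gradient descent so that the detector’s person/objectness score drops when the patch is pasted onto training images. In the sketch below, `detector` and `loader` are placeholders standing in for a differentiable YOLOv2 person score and a dataset of person photos:

```python
# Minimal sketch of adversarial patch optimization (illustrative, not the paper's code).
import torch

def apply_patch(images, patch, top=20, left=20):
    """Paste a square patch onto a batch of images at a fixed position."""
    patched = images.clone()
    s = patch.shape[-1]
    patched[:, :, top:top + s, left:left + s] = patch
    return patched

def optimize_patch(detector, loader, steps=1000, size=64, lr=0.03):
    # detector: differentiable stand-in returning one person score per image
    # loader:   yields (images, labels) batches of photos containing people
    patch = torch.rand(3, size, size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _, (images, _) in zip(range(steps), loader):
        scores = detector(apply_patch(images, patch))
        loss = scores.mean()        # lower score = detector misses the person
        opt.zero_grad()
        loss.backward()
        opt.step()
        patch.data.clamp_(0, 1)     # keep pixel values in a printable range
    return patch.detach()
```

The paper additionally uses a printability loss, a smoothness (total-variation) loss and random transformations of the patch so that the attack survives printing and camera capture; those steps are omitted in this sketch.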


AI Portraits

Mauro Martino and Luca Stornaiuolo (MIT-IBM Watson AI Lab) have experimented with GANs to generate portraits of individuals. Basically, you upload your own photo, the AI compares its features to the set of images on which it was trained (faces of actors and actresses), and it then generates a new portrait.

It sounds like an interesting experiment, but already early on we note that this »faces of actors and actresses« dataset is going to be biased in one way or another: towards race, towards gender, or towards certain beauty features that are most common among actors and actresses.

https://aiportraits.com/

The aim of this project, however, is not clear, even though the authors add some pseudo-critical comments to it:

The result is an image that examines the concept of identity, pushing the boundaries between the individual that recognizes herself/himself and the collection of faces from the society of spectacle that are sedimented in the neural network.

Martino/Stornaiuolo

So the question remains: what is gained through this project?


MegaPixels

https://megapixels.cc/datasets/

»MegaPixels is an independent art and research project by Adam Harvey and Jules LaPlace that investigates the ethics, origins, and individual privacy implications of face recognition image datasets and their role in the expansion of biometric surveillance technologies.«

It’s worth visiting this site because it introduces the datasets that machine learning relies on, and it also raises the question of how researchers in this field can be called out for unethical practices.

A short while later, Adam Harvey posted this on Twitter, a good example of how educating the public can shake up bad ethics:


How YouTube almost lost the battle

»Pedro Domingos, a professor of computer science at the University of Washington, said that artificial intelligence is much less sophisticated than many people believe, and Silicon Valley companies often portray their systems as more powerful than they actually are as they compete for business. In fact, even the most advanced artificial intelligence systems still are fooled in ways that a human would easily detect.«

https://www.washingtonpost.com/technology/2019/03/18/inside-youtubes-struggles-shut-down-video-new-zealand-shooting-humans-who-outsmarted-its-systems/?utm_term=.6d192ad26317


The wonderful world of false positives

YouTuber UnboxTherapy unlocks his phone’s face recognition with another phone showing his face.


Crowdsourcing without Open Sourcing

»Because anyone can contribute to its platform, it gets updated every day,« says the CEO. Nothing really new from an AI startup, despite making headlines with MIT Technology Review: the company Mapillary crowdsources common knowledge and capitalizes on it by converting it into valuable data that is then circulated out of the hands of the commons, where it was originally situated.

Mapillary uses crowdsourced imagery (that is, without paying for it) to create additional data that would help autonomous cars drive »more safely«. While MIT Technology Review tries to describe the company as a »Wikipedia of mapping«, it clearly is not. The company is privately owned and doesn’t give away the data as public knowledge (e.g. by donating it to OpenStreetMap). Parts of the data are accessible via an API, though, and temporarily free »for charities and for educational or personal use«.

The rather impudent marketing is acknowledged at the article’s end, which states: »This story was corrected to make clear the images are crowdsourced but the underlying code is not open source.«

https://www.technologyreview.com/s/612825/open-source-maps-should-help-driverless-cars-navigate-our-cities-more-safely/

Why does adversarial.io tackle this? The answer might be found in a paper by Eykholt et al., »Robust Physical-World Attacks on Deep Learning Models«: https://arxiv.org/abs/1707.08945


Deepfake – Video Detection

Press release: »SRI’s Spotting Audio-Visual Inconsistencies (SAVI) techniques detect tampered videos by identifying discrepancies between the audio and visual tracks. For example, the system can detect when lip synchronization is a little off or if there is an unexplained visual “jerk” in the video. Or it can flag a video as possibly tampered if the visual scene is outdoors, but analysis of the reverberation properties of the audio track indicates the recording was done in a small room.

This video shows how the SAVI system detects speaker inconsistencies. First, the system detects the person’s face, tracks it throughout the video clip, and verifies it is the same person for the entire clip. It then detects when she is likely to be speaking by tracking when she is moving her mouth appropriately.«
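
SRI has not published the SAVI code, but the basic intuition behind the lip-sync check can be sketched: extract a mouth-openness signal from the video frames and an energy envelope from the audio track, then flag the clip when the two hardly correlate. A toy illustration only (the input signals, the function names and the threshold are assumptions, not SRI’s method):

```python
# Toy audio-visual consistency check (illustrative only, not SRI's SAVI system).
# Assumes two aligned, equally sampled 1-D signals:
#   mouth_openness[t] - how far the tracked mouth is open in frame t
#   audio_energy[t]   - short-time audio energy around frame t
import numpy as np

def av_consistency(mouth_openness, audio_energy):
    """Pearson correlation between mouth movement and audio energy."""
    m = (mouth_openness - mouth_openness.mean()) / (mouth_openness.std() + 1e-8)
    a = (audio_energy - audio_energy.mean()) / (audio_energy.std() + 1e-8)
    return float(np.mean(m * a))

def looks_tampered(mouth_openness, audio_energy, threshold=0.2):
    # Genuine talking-head footage tends to show clearly positive correlation;
    # dubbed or badly lip-synced footage often does not. The threshold is arbitrary.
    return av_consistency(mouth_openness, audio_energy) < threshold
```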


Testing Image Recognition

What happens if you put a cardboard box on your head? The good news is: while Google Vision recognizes me in other images, it does not do so with the “hat” on.
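
For anyone who wants to repeat this kind of test, here is a minimal sketch using the google-cloud-vision Python client (it requires a Google Cloud project with the Vision API enabled; the filename is a placeholder):

```python
# Check whether Google Cloud Vision still finds a face in a single image.
from google.cloud import vision

client = vision.ImageAnnotatorClient()  # uses your Google Cloud credentials

with open("portrait_with_box.jpg", "rb") as f:  # placeholder filename
    image = vision.Image(content=f.read())

faces = client.face_detection(image=image).face_annotations
print(f"{len(faces)} face(s) detected")
for face in faces:
    print(f"detection confidence: {face.detection_confidence:.2f}")
```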


Meanwhile on Twitter

Adam Harvey: »Reminder that simply wearing a baseball cap and looking down at phone creates difficulties for facial recognition systems«, commenting on this study:

»FIVE ran 36 prototype algorithms from 16 commercial suppliers on 109 hours of video imagery taken at a variety of settings. The video images included hard-to-match pictures of people looking at smartphones, wearing hats or just looking away from the camera. Lighting was sometimes a problem, and some faces never appeared on the video because they were blocked, for example, by a tall person in front of them.

NIST used the algorithms to match faces from the video to databases populated with photographs of up to 48,000 individuals. People in the videos were not required to look in the direction of the camera. Without this requirement, the technology must compensate for large changes in the appearance of a face and is often less successful. The report notes that even for the more accurate algorithms, subjects may be identified anywhere from around 60 percent of the time to more than 99 percent, depending on video or image quality and the algorithm’s ability to deal with the given scenario.«

https://www.nist.gov/news-events/news/2017/04/identifying-faces-video-images-major-challenge-nist-report-shows


CCC 2018

The Chaos Communication Congress 2018 delivered plenty of sessions about pattern recognition, deep “learning” and AI. Especially the third talk, »Circumventing video identification using augmented reality«, is relevant for adversarial.io.


Smartphone face recognition tricked

Forbes journalist Thomas Brewster looked into standard smartphone face recognition software and whether it could detect fake 3-D faces: »We tested four of the hottest handsets running Google’s [Android] operating systems and Apple’s iPhone to see how easy it’d be to break into them. We did it with a 3D-printed head. All of the Androids opened with the fake. Apple’s phone, however, was impenetrable.«

https://www.forbes.com/sites/thomasbrewster/2018/12/13/we-broke-into-a-bunch-of-android-phones-with-a-3d-printed-head/




