Testing Image Recognition

What happens if you put a cardboard box on your head? The good news: while Google Vision recognizes me in other images, it does not with the “Hat” on.
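The idea behind the experiment can be illustrated with a toy sketch (this is not the Google Vision pipeline, just a hypothetical stand-in): covering the face region of an image simply erases the pixel structure a detector would need to match against.

```python
import numpy as np

def occlude(image, top, left, height, width, value=128):
    """Return a copy of `image` with a flat gray block pasted over
    the given region -- the digital equivalent of a cardboard box."""
    out = image.copy()
    out[top:top + height, left:left + width] = value
    return out

# A toy 64x64 grayscale "photo" with random pixel structure.
rng = np.random.default_rng(0)
photo = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

# Cover the upper half, roughly where a face would be.
boxed = occlude(photo, top=0, left=16, height=32, width=32)

# The occluded region is now uniform: no features left to match.
assert boxed[0:32, 16:48].std() == 0
assert photo[0:32, 16:48].std() > 0
```

The sketch only shows why occlusion works in principle; a real test means uploading the occluded photo to the recognition service and checking its response.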

Meanwhile on Twitter

Adam Harvey: »Reminder that simply wearing a baseball cap and looking down at phone creates difficulties for facial recognition systems«, commenting on this study:

»FIVE ran 36 prototype algorithms from 16 commercial suppliers on 109 hours of video imagery taken at a variety of settings. The video images included hard-to-match pictures of people looking at smartphones, wearing hats or just looking away from the camera. Lighting was sometimes a problem, and some faces never appeared on the video because they were blocked, for example, by a tall person in front of them.

NIST used the algorithms to match faces from the video to databases populated with photographs of up to 48,000 individuals. People in the videos were not required to look in the direction of the camera. Without this requirement, the technology must compensate for large changes in the appearance of a face and is often less successful. The report notes that even for the more accurate algorithms, subjects may be identified anywhere from around 60 percent of the time to more than 99 percent, depending on video or image quality and the algorithm’s ability to deal with the given scenario.«


CCC 2018

The Chaos Communication Congress 2018 delivered plenty of sessions about pattern recognition, deep “learning” and AI. Especially the third talk, »Circumventing video identification using augmented reality«, is relevant for adversarial.io.

Smartphone Face recognition tricked

Forbes journalist Thomas Brewster looked into standard smartphone face recognition software and whether it could be fooled by a fake 3‑D face: »We tested four of the hottest handsets running Google’s [Android] operating systems and Apple’s iPhone to see how easy it’d be to break into them. We did it with a 3D-printed head. All of the Androids opened with the fake. Apple’s phone, however, was impenetrable.«


Meanwhile on Twitter

NYT: How cheap labor drives China’s AI ambitions

Li Yuan: »Two dozen young people go through photos and videos, labeling just about everything they see. That’s a car. That’s a traffic light. That’s bread, that’s milk, that’s chocolate. That’s what it looks like when a person walks.«

“I used to think the machines are geniuses,” Ms. Hou, 24, said. “Now I know we’re the reason for their genius.”

Full New York Times article here

How to Hack artificial intelligence

This article and the research behind it are the starting point for what now emerges as adversarial.io. It discusses several methods of hacking machine learning systems, neural networks and artificial intelligence.


Computer Vision’s Achilles’ heel

How the elephant in the room gets detected as a chair

A new study conducted by John Tsotsos and Amir Rosenfeld (both York University, Toronto) and Richard Zemel (University of Toronto) shows once more what the major problem of machine learning is:

»In the study, the researchers presented a computer vision system with a living room scene. The system processed it well. It correctly identified a chair, a person, books on a shelf. Then the researchers introduced an anomalous object into the scene — an image of an elephant. The elephant’s mere presence caused the system to forget itself: Suddenly it started calling a chair a couch and the elephant a chair, while turning completely blind to other objects it had previously seen.« (Kevin Hartnett)
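Why can one out-of-context object derail a whole scene? A toy illustration (not the detector from the study, just a made-up stand-in): when a classifier leans on global image statistics as a shortcut, a single bright patch shifts those statistics and flips the verdict for everything else.

```python
import numpy as np

def label_scene(image):
    """Toy 'scene classifier' that decides from a global statistic --
    the kind of shortcut that makes real systems fragile."""
    return "living room" if image.mean() < 100 else "outdoor scene"

# A plain living-room scene: uniformly dark, muted pixels.
scene = np.full((32, 32), 80, dtype=np.uint8)
assert label_scene(scene) == "living room"

# Paste a bright out-of-context patch -- the "elephant".
scene_with_elephant = scene.copy()
scene_with_elephant[0:16, 0:16] = 255

# The patch alone drags the global mean over the threshold,
# so the classifier's judgment about the *whole* scene changes.
assert label_scene(scene_with_elephant) == "outdoor scene"
```

Real detectors are far more sophisticated, but the study suggests they share this weakness: context is pooled globally, so one anomaly contaminates every other prediction.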

Read the full story at Quanta Magazine

The original study can be downloaded from arxiv.org and is worth a read.

Google tries to find a solution to adversarial image detection problems, including adversarial noise

When the largest tech company can’t solve a problem with its own resources, it calls on public resources for help. State-funded university researchers are among the most likely contestants to contribute to solving the “problem” of “adversarial noise”. Actually, we at adversarial.io don’t call it a problem, but a feature. Rather, we find it problematic that publicly funded computer scientists want to contribute to the total detection of imagery. It’s an ethical problem.

Anyway, it is revealing which area Google actually struggles with, calling on the science community for help. Their call states that »While previous research on adversarial examples has mostly focused on investigating mistakes caused by small modifications in order to develop improved models, real-world adversarial agents are often not subject to the “small modification” constraint. Furthermore, machine learning algorithms can often make confident errors when faced with an adversary, which makes the development of classifiers that don’t make any confident mistakes, even in the presence of an adversary which can submit arbitrary inputs to try to fool the system, an important open problem.«
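The »small modification« setting Google describes is the classic adversarial-example recipe, the fast gradient sign method (FGSM): nudge each input feature by a tiny amount in the direction that increases the model’s loss. A minimal sketch on a hand-built logistic regression (weights, input and epsilon are made up for illustration, not taken from Google’s call):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A fixed, hypothetical logistic-regression "classifier".
w = np.array([1.0, -1.0])

def predict(x):
    return int(sigmoid(w @ x) >= 0.5)

x = np.array([0.3, 0.1])      # correctly classified as class 1
y = 1                         # true label

# FGSM: move each feature by eps along sign(d loss / d x).
p = sigmoid(w @ x)
grad_x = (p - y) * w          # gradient of the log-loss w.r.t. the input
eps = 0.15
x_adv = x + eps * np.sign(grad_x)

assert predict(x) == 1
assert predict(x_adv) == 0    # a small perturbation flips the label
```

This is exactly the constraint Google wants to drop: real adversaries need not keep `eps` small, which makes confident-mistake-free classifiers so hard to build.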

Adversarial.io is grateful to Google for pointing out our own investment strategy of strengthening exactly these stealth moments of imagery, so that humans can continue to use imagery online without everything being detected. If you want to join forces with us, instead of joining Google, contact Adversarial.io via email.
