Meanwhile on Twitter


NYT: How cheap labor drives China’s AI ambitions

Li Yuan: Two dozen young people go through photos and videos, labeling just about everything they see. That’s a car. That’s a traffic light. That’s bread, that’s milk, that’s chocolate. That’s what it looks like when a person walks.

“I used to think the machines are geniuses,” Ms. Hou, 24, said. “Now I know we’re the reason for their genius.”

Full New York Times article here



How to hack artificial intelligence

This article and the research behind it are the starting point for what now emerges as adversarial.io. It discusses several methods for hacking machine learning systems, neural networks and artificial intelligence.

https://databasecultures.irmielin.org/how-to-hack-artificial-intelligence/
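One of the best-known methods in that line of research is the fast gradient sign method (FGSM): nudge every pixel a tiny amount in the direction that increases the classifier’s loss, and the prediction often flips. Below is a minimal sketch of the idea, assuming PyTorch and torchvision are installed; the pretrained ResNet-18 and the input file "cat.jpg" are stand-ins, not anything from the linked article.

```python
# Minimal FGSM sketch: perturb one image so a pretrained classifier changes its prediction.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(pretrained=True).eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("cat.jpg")).unsqueeze(0)  # hypothetical input image
img.requires_grad_(True)

logits = model(img)
label = logits.argmax(dim=1)                          # the model's own prediction
loss = torch.nn.functional.cross_entropy(logits, label)
loss.backward()

epsilon = 0.03                                        # size of the "small modification"
adv = img + epsilon * img.grad.sign()                 # fast gradient sign method

print("before:", label.item(), "after:", model(adv).argmax(dim=1).item())
```

The perturbation stays small enough that a human still sees the same picture, which is exactly why this family of attacks is interesting for keeping images legible to people but noisy to machines.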

Computer Vision’s Achilles’ heel

How the elephant in the room gets detected as a chair

A new study conducted by John Tsotsos and Amir Rosenfeld (both York University, Toronto) and Richard Zemel (University of Toronto) shows once again what a major problem of machine learning is:

»In the study, the researchers presented a computer vision system with a living room scene. The system processed it well. It correctly identified a chair, a person, books on a shelf. Then the researchers introduced an anomalous object into the scene — an image of an elephant. The elephant’s mere presence caused the system to forget itself: Suddenly it started calling a chair a couch and the elephant a chair, while turning completely blind to other objects it had previously seen.« (Kevin Hartnett)

Read the full story at Quanta Magazine

The original study can be downloaded from arxiv.org and is worth a read.
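The effect is easy to probe informally with off-the-shelf tools. The sketch below (not the authors’ code) runs a pretrained torchvision detector on a scene, pastes an out-of-context object into the same scene, and prints both sets of detections for comparison; "living_room.jpg" and "elephant.png" are hypothetical files.

```python
# Rough reproduction sketch of the "elephant in the room" experiment.
import torch
import torchvision
import torchvision.transforms.functional as F
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True).eval()

def detect(pil_img, threshold=0.5):
    tensor = F.to_tensor(pil_img)
    with torch.no_grad():
        out = model([tensor])[0]
    keep = out["scores"] > threshold
    return list(zip(out["labels"][keep].tolist(), out["scores"][keep].tolist()))

scene = Image.open("living_room.jpg").convert("RGB")    # hypothetical scene image
elephant = Image.open("elephant.png").convert("RGBA")   # hypothetical cut-out elephant

print("clean scene:", detect(scene))

# Paste the elephant somewhere into the scene and detect again.
scene_with_elephant = scene.copy()
scene_with_elephant.paste(elephant, (50, 200), elephant)  # position is arbitrary
print("with elephant:", detect(scene_with_elephant))
```

Comparing the two printouts shows whether labels and scores for objects far away from the pasted elephant shift, which is the non-local confusion the study describes.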


Google tries to find a solution to adversarial image detection problems, including adversarial noise

When the largest tech company can’t solve a problem with its own resources, it calls on public resources for help. State-funded university researchers are among the most likely candidates to contribute to solving the “problem” of “adversarial noise”. Actually, we at adversarial.io don’t call it a problem, but a feature. Rather, we find it problematic that publicly funded computer scientists want to contribute to the total detection of imagery. It’s an ethical problem.

Anyway, it is revealing which area Google actually struggles with, calling on the science community for help. Their call states that »While previous research on adversarial examples has mostly focused on investigating mistakes caused by small modifications in order to develop improved models, real-world adversarial agents are often not subject to the “small modification” constraint. Furthermore, machine learning algorithms can often make confident errors when faced with an adversary, which makes the development of classifiers that don’t make any confident mistakes, even in the presence of an adversary which can submit arbitrary inputs to try to fool the system, an important open problem.«
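To make the distinction in that quote concrete: the classic setting keeps the adversary inside a small perturbation budget around a real image, while the “unrestricted” setting allows arbitrary inputs, which classifiers will still happily map to some class with a confidence score. A minimal sketch, again using a pretrained torchvision classifier as a stand-in rather than Google’s own benchmark code:

```python
# Contrast: "small modification" constraint vs. arbitrary adversarial inputs.
import torch
import torchvision.models as models

model = models.resnet18(pretrained=True).eval()

def top1_confidence(x):
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)
    conf, cls = probs.max(dim=1)
    return cls.item(), conf.item()

# Constrained adversary: stay inside an L-infinity ball of radius epsilon around a real image.
def project_linf(adv, original, epsilon=0.03):
    return original + (adv - original).clamp(-epsilon, epsilon)

# Unrestricted adversary: any tensor of the right shape is a legal input -- here plain
# random noise, which the classifier still assigns to some class with a confidence score.
noise = torch.rand(1, 3, 224, 224)
print("class and confidence on pure noise:", top1_confidence(noise))
```

How confident the model is on such inputs varies, but the point of the open problem is that nothing in the classifier itself refuses to answer.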

Adversarial.io is grateful to Google for pointing out our own investment strategy of exactly strengthening the stealth moments of imagery, so that humans can continue to use imagery online without everything being detected. If you want to join forces with us instead of joining Google, contact Adversarial.io via email.

