
What happens if you put a cardboard box on your head? The good news is: while Google Vision recognizes me in other images, it does not with the “Hat” on.
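For anyone who wants to repeat the experiment, here is a minimal sketch of how such a check could look with the official google-cloud-vision Python client. The file names and the credential setup via GOOGLE_APPLICATION_CREDENTIALS are assumptions, not part of the original test:

```python
# Sketch: count how many faces Google Cloud Vision detects in an image,
# assuming a recent google-cloud-vision client and a service account
# configured via the GOOGLE_APPLICATION_CREDENTIALS environment variable.
from google.cloud import vision

def count_detected_faces(path: str) -> int:
    """Return the number of faces Google Vision finds in the image at `path`."""
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.face_detection(image=image)
    return len(response.face_annotations)

# Compare an unobstructed portrait with the cardboard-box "Hat" photo
# (file names are placeholders).
print(count_detected_faces("portrait.jpg"))       # e.g. 1
print(count_detected_faces("cardboard_hat.jpg"))  # e.g. 0 if the disguise works
```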
Adam Harvey: »Reminder that simply wearing a baseball cap and looking down at phone creates difficulties for facial recognition systems«, commenting on this study:
»FIVE ran 36 prototype algorithms from 16 commercial suppliers on 109 hours of video imagery taken at a variety of settings. The video images included hard-to-match pictures of people looking at smartphones, wearing hats or just looking away from the camera. Lighting was sometimes a problem, and some faces never appeared on the video because they were blocked, for example, by a tall person in front of them.
NIST used the algorithms to match faces from the video to databases populated with photographs of up to 48,000 individuals. People in the videos were not required to look in the direction of the camera. Without this requirement, the technology must compensate for large changes in the appearance of a face and is often less successful. The report notes that even for the more accurate algorithms, subjects may be identified anywhere from around 60 percent of the time to more than 99 percent, depending on video or image quality and the algorithm’s ability to deal with the given scenario.«
The Chaos Communication Congress 2018 delivered plenty of sessions about pattern recognition, deep “learning” and AI. The third talk in particular, »Circumventing video identification using augmented reality«, is relevant for adversarial.io.
Forbes journalist Thomas Brewster looked into standard smartphone face recognition software and whether it could be fooled by a fake 3‑D face: »We tested four of the hottest handsets running Google’s [Android] operating systems and Apple’s iPhone to see how easy it’d be to break into them. We did it with a 3D-printed head. All of the Androids opened with the fake. Apple’s phone, however, was impenetrable.«
Li Yuang: Two dozen young people go through photos and videos, labeling just about everything they see. That’s a car. That’s a traffic light. That’s bread, that’s milk, that’s chocolate. That’s what it looks like when a person walks.
“I used to think the machines are geniuses,” Ms. Hou, 24, said. “Now I know we’re the reason for their genius.”
This article and the research behind it are the starting point for what now emerges as adversarial.io. It discusses several methods for hacking machine learning systems, neural networks and artificial intelligence.
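As a rough illustration of the kind of attack such research describes, here is a minimal sketch of the fast gradient sign method (FGSM) against a pretrained classifier. This is not adversarial.io’s own method; the model choice, the file name and the epsilon value are placeholder assumptions:

```python
# Minimal FGSM sketch: perturb an image in pixel space so a pretrained
# classifier changes its prediction. Assumes torchvision >= 0.13.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

# "input.jpg" is a placeholder for any image you want to perturb.
x = preprocess(Image.open("input.jpg").convert("RGB")).unsqueeze(0)
x.requires_grad_(True)

logits = model(normalize(x))
label = logits.argmax(dim=1)            # whatever the model currently predicts
loss = F.cross_entropy(logits, label)
loss.backward()

# One gradient-sign step; epsilon is an assumed perturbation budget.
epsilon = 0.03
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

print("before:", label.item())
print("after: ", model(normalize(x_adv)).argmax(dim=1).item())
```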
A new study conducted by John Tsotsos and Amir Rosenfeld (both York University, Toronto) and Richard Zemel (University of Toronto) shows once again what the major problem of machine learning is:
»In the study, the researchers presented a computer vision system with a living room scene. The system processed it well. It correctly identified a chair, a person, books on a shelf. Then the researchers introduced an anomalous object into the scene — an image of an elephant. The elephant’s mere presence caused the system to forget itself: Suddenly it started calling a chair a couch and the elephant a chair, while turning completely blind to other objects it had previously seen.« (Kevin Hartnett)
Read the full story at Quantamagazine
The original study can be downloaded from arxiv.org and is worth a read.
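A rough sketch of how one might reproduce the setup with off-the-shelf tools: run a pretrained detector on a living-room photo, paste in an elephant cut-out, and compare the detections. The detector, the file names and the paste position are assumptions, not the authors’ original pipeline:

```python
# Sketch: compare detections on a scene before and after inserting an
# out-of-context object. Assumes torchvision >= 0.13 and placeholder images.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
CATEGORIES = torchvision.models.detection.FasterRCNN_ResNet50_FPN_Weights.DEFAULT.meta["categories"]

def detect(img: Image.Image, threshold: float = 0.7):
    """Return (label, score) pairs above the confidence threshold."""
    with torch.no_grad():
        out = model([to_tensor(img)])[0]
    return [(CATEGORIES[l.item()], round(s.item(), 2))
            for l, s in zip(out["labels"], out["scores"]) if s > threshold]

scene = Image.open("living_room.jpg").convert("RGB")
print("original scene:", detect(scene))

# Paste an elephant cut-out (PNG with alpha channel) into the corner and re-run.
elephant = Image.open("elephant.png").convert("RGBA")
scene.paste(elephant, (50, 50), elephant)
print("with elephant:", detect(scene))
```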
When the largest tech company can’t solve a problem with its own resources, it calls in public ones. State-funded university researchers are among the most likely contestants to help solve the “problem” of “adversarial noise”. We at adversarial.io don’t call it a problem but a feature. Rather, we find it problematic that publicly funded computer scientists want to contribute to the total detection of imagery. It’s an ethical problem.
Anyway, it is revealing which area Google actually struggles with when it calls on the science community for help. Their call states that »While previous research on adversarial examples has mostly focused on investigating mistakes caused by small modifications in order to develop improved models, real-world adversarial agents are often not subject to the “small modification” constraint. Furthermore, machine learning algorithms can often make confident errors when faced with an adversary, which makes the development of classifiers that don’t make any confident mistakes, even in the presence of an adversary which can submit arbitrary inputs to try to fool the system, an important open problem.«
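To make the quoted goal concrete, here is a toy sketch of a classifier wrapper that refuses to answer unless its softmax confidence clears a threshold, the naive version of “no confident mistakes”. The model and the threshold are placeholders, and an adversary can of course still push wrong predictions above any fixed threshold:

```python
# Toy illustration of "don't make confident mistakes": abstain whenever the
# softmax confidence is below a chosen threshold. Threshold value is assumed.
import torch
import torch.nn.functional as F

def predict_or_abstain(model: torch.nn.Module, x: torch.Tensor, threshold: float = 0.95):
    """Return (class_index, confidence), or (None, confidence) when unsure."""
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1)
    confidence, label = probs.max(dim=1)
    if confidence.item() < threshold:
        return None, confidence.item()   # abstain: not confident enough
    return label.item(), confidence.item()
```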
Adversarial.io is grateful to Google for pointing out our own investment strategy of strengthening exactly these stealth moments of imagery, so that humans can continue to use imagery online without everything being detected. If you want to join forces with us instead of joining Google, contact Adversarial.io via email.