
https://medium.com/@kcimc/how-to-recognize-fake-ai-generated-images-4d1f6f9a2842
Li Yuan: Two dozen young people go through photos and videos, labeling just about everything they see. That’s a car. That’s a traffic light. That’s bread, that’s milk, that’s chocolate. That’s what it looks like when a person walks.
“I used to think the machines are geniuses,” Ms. Hou, 24, said. “Now I know we’re the reason for their genius.”
This article and the research behind it are a starting point for what now emerges as adversarial.io. It discusses several methods for hacking machine learning systems, neural networks and artificial intelligence.
A new study conducted by John Tsotsos and Amir Rosenfeld (both York University, Toronto) and Richard Zemel (University of Toronto) shows once again what a major problem of machine learning is:
»In the study, the researchers presented a computer vision system with a living room scene. The system processed it well. It correctly identified a chair, a person, books on a shelf. Then the researchers introduced an anomalous object into the scene — an image of an elephant. The elephant’s mere presence caused the system to forget itself: Suddenly it started calling a chair a couch and the elephant a chair, while turning completely blind to other objects it had previously seen.« (Kevin Hartnett)
Read the full story at Quanta Magazine.
The original study can be downloaded from arxiv.org and is worth a read.
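To make the effect the study describes concrete, here is a minimal sketch (our own illustration, not the authors’ code) that runs a pretrained torchvision detector on a scene image before and after pasting in an out-of-context object. The file names, the cut-out image and the paste position are placeholders.

```python
# Sketch: compare object detections before and after inserting an
# out-of-context object into a scene ("elephant in the room" effect).
import torch
from PIL import Image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights,
)
from torchvision.transforms.functional import to_tensor

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
categories = weights.meta["categories"]

def detect(img: Image.Image, score_thresh: float = 0.5):
    """Return (label, score) pairs for detections above score_thresh."""
    with torch.no_grad():
        out = model([to_tensor(img)])[0]
    return [(categories[int(l)], round(float(s), 2))
            for l, s in zip(out["labels"], out["scores"])
            if float(s) >= score_thresh]

scene = Image.open("living_room.jpg").convert("RGB")          # placeholder path
cutout = Image.open("elephant_cutout.png").convert("RGBA")    # placeholder path

print("before:", detect(scene))

# Paste the cut-out object into the scene, using its alpha channel as mask.
scene_with_object = scene.copy()
scene_with_object.paste(cutout, (50, 50), cutout)

print("after: ", detect(scene_with_object))
```

In the study’s setup, the second call not only adds a detection for the inserted object but also changes or removes detections of objects that were previously recognized correctly.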
When the largest tech company can’t solve a problem with its own resources, it calls on public resources for help. State-funded university researchers are among the most likely candidates to contribute to solving the “problem” of “adversarial noise”. Actually, we at adversarial.io don’t call it a problem, but a feature. Rather, we find it problematic that publicly funded computer scientists want to contribute to the total detection of imagery. It’s an ethical problem.
Anyway, it is revealing which area Google actually struggles with, calling on the science community for help. Their call states that »While previous research on adversarial examples has mostly focused on investigating mistakes caused by small modifications in order to develop improved models, real-world adversarial agents are often not subject to the “small modification” constraint. Furthermore, machine learning algorithms can often make confident errors when faced with an adversary, which makes the development of classifiers that don’t make any confident mistakes, even in the presence of an adversary which can submit arbitrary inputs to try to fool the system, an important open problem.«
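As a concrete example of the »small modification« constraint the call refers to, here is a minimal Fast Gradient Sign Method sketch (our own illustration, not part of Google’s challenge): a perturbation bounded by a small epsilon is often enough to flip a classifier’s prediction. The model choice, input path and epsilon value are placeholders.

```python
# Sketch: one-step FGSM adversarial example against a pretrained classifier.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()
categories = weights.meta["categories"]

image = preprocess(Image.open("input.jpg").convert("RGB")).unsqueeze(0)  # placeholder path
image.requires_grad_(True)

# Original prediction.
logits = model(image)
label = logits.argmax(dim=1)
print("before:", categories[label.item()])

# Fast Gradient Sign Method: take one step in the direction that increases
# the loss for the current label; the change per value is at most epsilon.
loss = torch.nn.functional.cross_entropy(logits, label)
loss.backward()
epsilon = 0.03  # perturbation budget (placeholder)
adversarial = (image + epsilon * image.grad.sign()).detach()

print("after: ", categories[model(adversarial).argmax(dim=1).item()])
```

Google’s point is that real-world adversaries are not limited to such barely visible changes, which makes robust classification an even harder, still open problem.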
Adversarial.io is grateful to Google for pointing out our own investment strategy: strengthening exactly these stealth moments of imagery, so that humans can continue to use imagery online without everything being detected. If you want to join forces with us instead of joining Google, contact Adversarial.io via email.