If you were to ask any AI engineer about their main frustration with current artificial intelligence
technologies, the answer would most likely be that these systems lack ‘robustness’.
By robustness, we mean flexibility; current AI systems have very little of it. Image and voice
recognition systems are a good example. These systems are rigorously trained to recognize certain
images or sounds and, while very accurate on those, they are unable to deal with images or sounds
that have been manipulated or altered in some way.
So, for example, a voice assistant such as Alexa or Siri might be excellent at understanding a voice
command from a person speaking normally, but inaccurate when that very same person gives the very
same command while suffering from a cold.
Another example is the frustration experienced by people with speech disabilities, such as those with
cleft palates, when trying to use chatbots that have not been trained to interpret such variations in
spoken words.
A group of Japanese AI scientists from Kyushu University have devised an interesting solution that
shows promise in finally fixing the problem of inflexible AIs. Remarkably, the solution revolves not
around directly trying to improve these AI systems, but around deliberately breaking them.

Inflexibility Equals Inaccuracy

Were any AI company able to produce an AI system that was 100% accurate at the range of tasks it
has been trained to accomplish, there would be no company that could say no to using it.
The simple fact of the matter is that while current AI systems have achieved a stunning level of
accuracy at specific tasks (the top AI-assisted moviemaking platforms have achieved 86%+ accuracy
on some tasks), they remain inflexible when it comes to unstructured data analysis.
The reason for this is simple: analyzing unstructured data such as images is a very complex task, one
that took animals many millions of years to master. Even we humans, whose brains are biologically
primed to learn this task quickly, still take years to master it. We all remember, as children, playing
with a shape-sorter toy and trying to fit each shape into the right-shaped hole, for example.
Artificial intelligence systems are still in their infancy and, as we pointed out earlier, are excellent at
undertaking specific tasks but lack the flexibility to deal with data that does not fit their existing
training.
As a result, when such data does appear, as might happen when an image recognition system tries to
identify faces in a poor-quality or corrupted video recording, the system is simply unable to adapt.
The output is poor and, quite often, there is no usable output at all.

Breaking AIs to Improve Them

In a study reported by ScienceDaily under the headline “Breaking AIs to make them better:
Researchers investigate ways to make AIs more robust by studying patterns in their answers when
faced with the unknown,” this group of data scientists may have found a solution to the problem.
Source: ScienceDaily, www.sciencedaily.com/releases/2022/06/220630114533.htm
These AI experts proposed a method called ‘Raw Zero-Shot’, a technique that allows AI engineers to
accurately assess how AI neural networks analyze ‘unknown elements’. The approach involves giving
an AI system a varied data set, for example a range of images or sounds that the system has not been
trained to recognize. The system is asked to interpret them, and the results are analyzed.
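
To make the idea concrete: the researchers worked with their own benchmark networks, but the
probing step can be sketched in a few lines of Python. Note that the pretrained model and the
‘unknown’ data set below (an ImageNet-trained ResNet probed with CIFAR-100 images) are our
illustrative stand-ins, not the setup used in the paper.

    import torch
    from torch.utils.data import DataLoader
    import torchvision.models as models
    import torchvision.transforms as T
    from torchvision.datasets import CIFAR100

    # Hypothetical setup: a classifier pretrained on ImageNet is probed with
    # CIFAR-100 images, i.e. classes it was never trained to recognize.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.eval()

    transform = T.Compose([
        T.Resize(224),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])
    unknown_data = CIFAR100(root="./data", download=True, transform=transform)
    loader = DataLoader(unknown_data, batch_size=64)

    # Record the model's full output distribution for every unknown image.
    # Every answer is necessarily wrong; the point is the pattern, not accuracy.
    outputs = []
    with torch.no_grad():
        for images, _ in loader:
            outputs.append(torch.softmax(model(images), dim=1))
    outputs = torch.cat(outputs)  # shape: (num_images, 1000 ImageNet classes)
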
Naturally, the results are all wrong; however, by understanding how they are wrong, the engineers
can gain a good understanding of how the system extrapolates. Indeed, what the researchers found
was that the wrong answers tended to be clustered, which gave strong insight into how the AI
systems reached those conclusions.
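
The paper analyzes these answer patterns with its own measures; purely as an illustration of the
clustering idea, one could group the output distributions collected above and see how concentrated
the answers are. The use of k-means, and the choice of ten clusters, are our assumptions for this
sketch, not the researchers' method.

    import numpy as np
    from sklearn.cluster import KMeans

    # `outputs` is the (num_images, num_classes) matrix of softmax distributions
    # collected above. If the wrong answers were random, predictions would spread
    # evenly; tight clusters suggest the network extrapolates systematically.
    X = outputs.numpy()
    kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X)
    ids, counts = np.unique(kmeans.labels_, return_counts=True)
    print("cluster sizes:", dict(zip(ids.tolist(), counts.tolist())))

    # A simpler signal: how often the single most frequent top-1 class is
    # predicted across all of the unknown images.
    top1 = X.argmax(axis=1)
    print(f"share mapped to the most common class: {np.bincount(top1).max() / len(top1):.1%}")
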
Danilo Vasconcellos Vargas, who led the study, stated that “if you give an image to an AI, it will try to
tell you what it is, no matter if that answer is correct or not. So, we took the twelve most common AIs
today and applied a new method called ‘Raw Zero-Shot Learning’. Basically, we gave the AIs a series of
images with no hints or training. Our hypothesis was that there would be correlations in how they
answered. They would be wrong, but wrong in the same way.”
The insights provided by this approach allow AI engineers not only to better understand the ‘why’
behind the systems they are developing, but also to gauge their limitations more accurately. This
understanding will allow them to formulate better training approaches and to ensure that their
training is more comprehensive.
More fundamentally, however, this breakthrough highlights the need to refocus the fundamentals of
AI training away from raw accuracy and toward adaptability.
Project leader Vargas goes on to point out the implications of the findings, stating that “instead
of focusing solely on accuracy, we must investigate ways to improve robustness and flexibility. Then
we may be able to develop a true artificial intelligence.”