
Can stable and accurate neural networks be computed? – On the barriers of deep learning and Smale's 18th problem
Deep learning (DL) has had unprecedented success and is now entering sci...

What do AI algorithms actually learn?  On false structures in deep learning
There are two big unsolved mathematical questions in artificial intellig...

Resilience of Bayesian Layer-Wise Explanations under Adversarial Attacks
We consider the problem of the stability of saliency-based explanations ...

Connecting Lyapunov Control Theory to Adversarial Attacks
Significant work is being done to develop the math and tools necessary t...

Non-Negative Networks Against Adversarial Attacks
Adversarial attacks against Neural Networks are a problem of considerabl...

Universal Approximation with Certified Networks
Training neural networks to be certifiably robust is a powerful defense ...
The mathematics of adversarial attacks in AI – Why deep learning is unstable despite the existence of stable neural networks
The unprecedented success of deep learning (DL) makes it unchallenged when it comes to classification problems. However, it is well established that the current DL methodology produces universally unstable neural networks (NNs). The instability problem has prompted an enormous research effort – with a vast literature on so-called adversarial attacks – yet no solution has been found. Our paper addresses why, as we prove the following mathematical paradox: any training procedure for classification problems based on neural networks with a fixed architecture will yield neural networks that are either inaccurate or unstable (if accurate) – despite the provable existence of both accurate and stable neural networks for the same classification problems. The key is that the stable and accurate neural networks must have dimensions that vary with the input; in particular, variable dimensions are a necessary condition for stability. Our result thus points to a paradox: accurate and stable neural networks exist, yet modern algorithms do not compute them. This raises the question: if the existence of neural networks with desirable properties can be proven, can one also find algorithms that compute them? There are cases in mathematics where provable existence implies computability, but will this be the case for neural networks? The contrary is true: we demonstrate that neural networks can provably exist as approximate minimisers of standard optimisation problems with standard cost functions, yet no randomised algorithm can compute them with probability better than 1/2.
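The instability the abstract describes can be illustrated with a minimal toy sketch (our own illustration, not from the paper): a linear classifier f(x) = sign(w·x + b) standing in for a trained network, attacked with the fast-gradient-sign construction, where an L-infinity perturbation of size eps – smaller than the classifier's margin would suggest matters – flips the predicted label. The weights, input, and eps below are arbitrary assumed values chosen for the demonstration.

```python
import numpy as np

# Toy "network": a linear classifier f(x) = sign(w.x + b).
# Assumed weights and bias, chosen so the clean input has a small margin.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def classify(x):
    """Return the predicted label, +1 or -1."""
    return 1 if float(w @ x) + b > 0 else -1

x = np.array([0.2, 0.0, 0.1])   # clean input: score = 0.35, label +1
eps = 0.2                        # L-infinity perturbation budget

# Fast-gradient-sign step: the gradient of the score w.r.t. x is just w,
# so stepping each coordinate by eps against the current label maximally
# decreases the margin within the L-infinity ball.
x_adv = x - eps * classify(x) * np.sign(w)

print(classify(x), classify(x_adv))       # label flips: 1 -> -1
print(np.max(np.abs(x_adv - x)))          # perturbation size is exactly eps
```

This is the simplest instance of the phenomenon: the perturbation is bounded by eps in every coordinate, yet the classification changes, because the decision function's sensitivity is concentrated along w. Deep networks exhibit the same behaviour in high dimensions, where many small coordinate-wise changes compound into a large change in the output.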