PremiUm-CNN: Propagating Uncertainty Towards Robust Convolutional Neural Networks
Dimah Dera, Nidhal Bouaynaya, Ghulam Rasool, Roman Shterenberg, Hassan Fathallah-Shaykh
IEEE Signal Processing Society (SPS)
Length: 00:16:25
Deep neural networks (DNNs) have surpassed human-level accuracy in various learning tasks. However, unlike humans, who have a natural cognitive intuition for probabilities, DNNs cannot express their uncertainty in the output decisions. Bayesian inference provides a principled approach to reasoning about the model's uncertainty by estimating the posterior distribution of the unknown parameters. The challenge in DNNs remains the multiple layers of non-linearities, which make the propagation of high-dimensional distributions mathematically intractable. This paper establishes the theoretical and algorithmic foundations of uncertainty (or belief) propagation by developing new deep learning models named PremiUm-CNNs. We introduce a tensor normal distribution as a prior over convolutional kernels and estimate the variational posterior. We first derive a first-order mean-covariance propagation framework. We then develop a framework based on the unscented transformation that propagates sigma points of the variational distribution through the layers of a CNN. The propagated covariance of the predictive distribution captures the uncertainty in the output decision. Comprehensive experiments conducted on diverse benchmark datasets demonstrate: 1) superior robustness against noise and adversarial attacks, 2) self-assessment through a predictive uncertainty that increases quickly with increasing levels of noise or attack strength, and 3) an ability to distinguish a targeted attack from ambient noise.
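The derivations themselves are not reproduced on this page, but the sigma-point idea the abstract describes can be illustrated with a generic unscented transformation: draw a small, deterministic set of points that encodes the mean and covariance of a Gaussian, push each point through a non-linearity, and recombine them with fixed weights to recover the output mean and covariance. The sketch below is a minimal, generic NumPy version of this for a single vector-valued layer; the function name, the scaling parameters `alpha`/`beta`/`kappa`, and the toy layer are illustrative assumptions, not the paper's tensor-normal CNN formulation.

```python
import numpy as np

def unscented_propagate(mu, Sigma, f, alpha=1.0, beta=2.0, kappa=0.0):
    """Propagate a Gaussian N(mu, Sigma) through a non-linearity f
    using the standard unscented transformation (2n+1 sigma points).
    Note: f, alpha, beta, kappa follow the generic UT convention,
    not the paper's specific construction."""
    n = mu.size
    lam = alpha**2 * (n + kappa) - n
    # Matrix square root of the scaled covariance via Cholesky.
    S = np.linalg.cholesky((n + lam) * Sigma)
    # Sigma points: the mean plus symmetric perturbations along each column of S.
    pts = [mu] + [mu + S[:, i] for i in range(n)] + [mu - S[:, i] for i in range(n)]
    # Weights for the mean (wm) and covariance (wc) recombination.
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    # Push each sigma point through the non-linearity, then recombine.
    ys = np.array([f(p) for p in pts])
    mu_y = wm @ ys
    d = ys - mu_y
    Sigma_y = (wc[:, None] * d).T @ d
    return mu_y, Sigma_y

# Toy usage: for an affine "layer" the transformation is exact, so the
# result matches the first-order (Jacobian-based) propagation J Sigma J^T.
mu = np.array([0.5, -1.0])
Sigma = np.array([[0.2, 0.05], [0.05, 0.1]])
A = np.array([[1.0, 2.0], [0.0, 1.0]])
mu_y, Sigma_y = unscented_propagate(mu, Sigma, lambda x: A @ x)
```

For an affine map both frameworks agree; the appeal of the sigma-point route is that it needs no Jacobian and remains usable when the first-order Taylor approximation underlying mean-covariance propagation breaks down.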