Robustness of Bayesian Neural Networks to Gradient-Based Attacks

Speaker: 
Ginevra Carbone
Institution: 
UniTS & SISSA
Schedule: 
Friday, July 10, 2020 - 14:00
Location: 
Online (Zoom)
Abstract: 

Vulnerability to adversarial attacks is one of the principal hurdles to the adoption of deep learning in safety-critical applications. Despite significant efforts, both practical and theoretical, the problem remains open. In this paper, we analyse the geometry of adversarial attacks in the large-data, overparametrized limit for Bayesian Neural Networks (BNNs). We show that, in the limit, vulnerability to gradient-based attacks arises as a result of degeneracy in the data distribution, i.e., when the data lies on a lower-dimensional submanifold of the ambient space. As a direct consequence, we demonstrate that in the limit BNN posteriors are robust to gradient-based adversarial attacks. Experimental results on the MNIST and Fashion MNIST datasets, representing the finite-data regime, with BNNs trained with Hamiltonian Monte Carlo and Variational Inference, support this line of argument, showing that BNNs can display both high accuracy and robustness to gradient-based adversarial attacks.
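
For illustration only (not part of the talk materials), the sketch below shows the mechanism the abstract refers to: a gradient-based attack such as FGSM on a BNN perturbs the input along a Monte Carlo estimate of the expected loss gradient under the posterior. If posterior gradients cancel in expectation, as the paper argues happens in the large-data, overparametrized limit, this direction carries no signal and the attack fails. It is written in PyTorch; the names (make_net, posterior_nets, fgsm_on_posterior) and the untrained ensemble standing in for posterior weight samples are illustrative assumptions, not the authors' code.

import torch
import torch.nn as nn

def make_net():
    # A small MLP classifier for MNIST-shaped inputs (28x28 -> 10 classes).
    return nn.Sequential(nn.Flatten(),
                         nn.Linear(784, 128), nn.ReLU(),
                         nn.Linear(128, 10))

# Stand-ins for weight samples drawn from the BNN posterior
# (in the paper these would come from HMC or Variational Inference).
posterior_nets = [make_net() for _ in range(10)]

def fgsm_on_posterior(x, y, eps=0.1):
    # FGSM step using the loss gradient averaged over posterior samples.
    x = x.clone().requires_grad_(True)
    loss_fn = nn.CrossEntropyLoss()
    # Monte Carlo estimate of E_w[ L(f_w(x), y) ] over the posterior.
    loss = torch.stack([loss_fn(net(x), y) for net in posterior_nets]).mean()
    loss.backward()
    # If posterior gradients cancel in expectation (the limit result the
    # talk describes), x.grad is near zero and the sign direction below
    # is essentially noise, so the perturbation is ineffective.
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

# Usage on a random MNIST-shaped batch:
x = torch.rand(4, 1, 28, 28)
y = torch.randint(0, 10, (4,))
x_adv = fgsm_on_posterior(x, y)

Note that the averaging over posterior_nets is the key difference from attacking a single deterministic network: the attack must work against the posterior predictive, not any one weight sample.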
