Handling Black Swan Events in Deep Learning with Diversely Extrapolated Neural Networks

Maxime Wabartha, Audrey Durand, Vincent François-Lavet, Joelle Pineau

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 2140-2147. https://doi.org/10.24963/ijcai.2020/296

By virtue of their expressive power, neural networks (NNs) are well suited to fitting large, complex datasets, yet they are also known to produce similar predictions for points outside the training distribution. As such, they are, like humans, subject to the Black Swan problem: models tend to be extremely "surprised" by rare events, with potentially disastrous consequences, while justifying these same events in hindsight. To avoid this pitfall, we introduce DENN, an ensemble approach that builds a set of Diversely Extrapolated Neural Networks: the ensemble fits the training data while generalizing more diversely when extrapolating to novel data points. As a result, DENN outputs highly uncertain predictions for unexpected inputs. We achieve this by adding a diversity term to the loss function used to train the model, computed at specific inputs. We first illustrate the usefulness of the method on a low-dimensional regression problem. We then show how the loss can be adapted to tackle anomaly detection during classification, as well as safe imitation learning problems.
Keywords:
Machine Learning: Deep Learning
Machine Learning: Ensemble Methods
Uncertainty in AI: Uncertainty Representations
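
The core idea of the abstract, that ensemble members should agree on the training data but disagree away from it, can be illustrated with a minimal numpy sketch. This is not the paper's actual loss or architecture: it replaces each neural network with a random-Fourier-feature linear model, and replaces the gradient-based diversity term with a closed-form ridge fit in which each member is anchored to a different random target at out-of-distribution probe points. All hyperparameters (`K`, `ridge`, `div_scale`, the probe interval) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression: training inputs lie in [-1, 1]; probe points lie far
# outside, where we want the ensemble to disagree.
X_train = rng.uniform(-1, 1, size=(32, 1))
y_train = np.sin(3 * X_train).ravel()
X_ood = np.linspace(2.0, 3.0, 16).reshape(-1, 1)

def features(X, W):
    # Random Fourier features make each ensemble member a linear model,
    # standing in for a neural network to keep the sketch dependency-free.
    return np.hstack([np.sin(X @ W), np.cos(X @ W)])

W = rng.normal(size=(1, 30))  # 60 features, enough to interpolate all targets
Phi_tr, Phi_ood = features(X_train, W), features(X_ood, W)

K, ridge, div_scale = 5, 1e-3, 2.0  # hypothetical hyperparameters
members = []
for _ in range(K):
    # Every member fits the shared training targets, but each is anchored to
    # a *different* random target at the probe points -- a crude, closed-form
    # stand-in for the diversity term DENN adds to the training loss.
    y_div = rng.normal(scale=div_scale, size=len(X_ood))
    Phi = np.vstack([Phi_tr, Phi_ood])
    y = np.concatenate([y_train, y_div])
    theta = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(Phi.shape[1]),
                            Phi.T @ y)
    members.append(theta)

preds_tr = np.stack([Phi_tr @ t for t in members])    # (K, n_train)
preds_ood = np.stack([Phi_ood @ t for t in members])  # (K, n_ood)
spread_tr = preds_tr.std(axis=0).mean()
spread_ood = preds_ood.std(axis=0).mean()
print(f"mean ensemble spread, in-distribution:     {spread_tr:.4f}")
print(f"mean ensemble spread, out-of-distribution: {spread_ood:.4f}")
```

Because all members share the training targets but receive independent anchor targets, the ensemble's predictive spread is small on the training range and large on the probe range, which is the behavior the abstract describes: high uncertainty for unexpected inputs, confident predictions where data exists.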