NLDL2022 Tutorial: "The Information Bottleneck Approaches in Deep Neural Networks" by Shujian Yu (UiT)

The goal of machine learning is to use data to obtain simple algorithms for predicting a random variable Y from a correlated observation X. Since the dimension of X is typically huge, computationally feasible solutions should summarize it into a lower-dimensional feature vector T, from which Y is predicted. A notable learning principle for achieving this goal is the Information Bottleneck (IB). This tutorial introduces the general idea behind the information bottleneck and discusses several of its variants. We will then introduce the neural network parameterization of the IB principle and discuss its applications in problems involving neural network interpretability, domain generalization and adaptation, adversarial robustness, and graph neural networks. Shujian Yu (UiT) leads this final tutorial (10 Jan. 2022).
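For background, the trade-off described above is usually stated as the classical IB variational problem of Tishby, Pereira and Bialek, which the tutorial presumably takes as its starting point: compress X into T while preserving information about Y,

```latex
% Classical Information Bottleneck objective:
% minimize over stochastic encoders p(t|x) the compression term I(X;T),
% penalized by the relevance term I(T;Y) weighted by beta >= 0.
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y)
```

where I(·;·) denotes mutual information and β controls the balance between compressing the representation T and keeping it predictive of Y.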