Short Course: Graph Neural Networks (Part 3 of 4)
Alejandro Ribeiro, Charilaos I. Kanatsoulis, Navid NaderiAlizadeh, Alejandro Parada-Mayorga, Luana Ruiz
Graph Neural Networks (GNNs) have emerged as the tool of choice for machine learning on graphs and are rapidly growing into the next deep learning frontier. Indeed, just as the 2010s were the decade of convolutional neural networks (CNNs) applied to learning with images and time signals, the 2020s are shaping up to be the decade of GNNs for learning on graphs. This is therefore the right time for practitioners and researchers to learn about GNNs and their use in machine learning on graphs. In this course, we present GNNs as generalizations of CNNs, built on the generalization of convolutions in time and space to convolutions on graphs. The main focus of the course is to teach students how to formulate and solve machine learning problems with GNNs. We place emphasis on showing how the use of a convolutional architecture enables scalability to high-dimensional problems. We also explore three fundamental properties of GNNs: equivariance to label permutations, stability to deformations, and transferability across dimensions. This course is modeled on a regular course offered at the University of Pennsylvania (https://gnn.seas.upenn.edu).
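To give a taste of the generalization the course builds on, here is a minimal NumPy sketch of a graph convolutional filter of the form y = sum_k h_k S^k x, where S is a graph shift operator such as the adjacency matrix. The function name, the toy graph, and the filter coefficients below are illustrative choices, not material from the course itself; the example only shows that when S is the cyclic shift matrix the operation reduces to an ordinary convolution in time.

import numpy as np

def graph_convolution(S, x, h):
    # Apply a polynomial graph filter y = sum_k h[k] * S^k @ x.
    #   S : (n, n) graph shift operator (e.g., adjacency or Laplacian matrix)
    #   x : (n,) graph signal, one value per node
    #   h : (K,) filter coefficients
    y = np.zeros_like(x, dtype=float)
    Skx = x.astype(float)          # S^0 @ x
    for hk in h:
        y += hk * Skx              # accumulate h_k * S^k x
        Skx = S @ Skx              # advance to S^(k+1) x
    return y

# Toy example: a 4-node directed cycle. With S the cyclic shift matrix,
# the graph convolution recovers the familiar convolution of time signals.
S = np.roll(np.eye(4), 1, axis=0)   # cyclic shift: node i receives from node i-1
x = np.array([1.0, 0.0, 0.0, 0.0])  # impulse at node 0
h = np.array([0.5, 0.3, 0.2])       # three filter taps
print(graph_convolution(S, x, h))   # -> [0.5, 0.3, 0.2, 0.0]

A GNN layer is then obtained by composing such filters with a pointwise nonlinearity, which is the construction the course develops in detail.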