Graph Neural Networks

Data
GNN

A "full-fledged" GNN layer is a multi-dimensional generalization of the multi-layer graph perceptron given by

$$X_\ell = \sigma(U_\ell) = \sigma\left(\sum_{k=0}^{K-1} S^k X_{\ell-1} H_{\ell,k}\right)$$

where $H_{\ell,k} \in \mathbb{R}^{d_{\ell-1} \times d_\ell}$ for $1 \le \ell \le L$.

Here, $X_\ell \in \mathbb{R}^{n \times d_\ell}$ is still called an $\ell$-layer embedding.

Note

These GNNs are sometimes called convolutional GNNs because they are based on graph convolution.

GNN layer equation

$$X_\ell = \sigma(U_\ell) = \sigma\left(\sum_{k=0}^{K-1} S^k X_{\ell-1} H_{\ell,k}\right)$$

^layer-equation
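
As a sanity check, here is a minimal NumPy sketch of one such layer. The function name `gnn_layer`, the `tanh` nonlinearity, and the random test data are illustrative assumptions, not from the source:

```python
import numpy as np

def gnn_layer(S, X_prev, H, sigma=np.tanh):
    """One convolutional GNN layer: X_l = sigma(sum_k S^k X_{l-1} H_{l,k}).

    S      : (n, n) graph shift operator (e.g. adjacency matrix or Laplacian)
    X_prev : (n, d_{l-1}) previous-layer embedding X_{l-1}
    H      : length-K list of (d_{l-1}, d_l) filter taps H_{l,k}
    sigma  : pointwise nonlinearity (tanh chosen here for illustration)
    """
    U = np.zeros((X_prev.shape[0], H[0].shape[1]))
    Z = X_prev                 # holds S^k X_{l-1}, starting at k = 0
    for H_k in H:
        U += Z @ H_k           # accumulate the k-th term of the sum
        Z = S @ Z              # advance to the next diffusion step
    return sigma(U)

# Illustrative usage on random data:
rng = np.random.default_rng(0)
n, d0, d1, K = 5, 3, 4, 2
S = rng.random((n, n)); S = (S + S.T) / 2          # symmetric shift operator
X0 = rng.random((n, d0))                           # input features
H = [rng.random((d0, d1)) for _ in range(K)]       # filter taps H_{1,k}
X1 = gnn_layer(S, X0, H)                           # (5, 4) 1-layer embedding
```

Iterating `Z = S @ Z` applies one diffusion step per tap, so the layer never forms the dense powers $S^k$ explicitly.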

Mentions

File
2025-04-26 SIAM Soledad
manifold convergence for GNNs
GINs are maximally powerful for anonymous input graphs
GNNs inherit stability from their layers
GNNs perform better than their constituent filters
Lipschitz filters are stable to additive perturbations
MPNNs can be expressed as graph convolutions
We can verify whether graphs without node features and different Laplacian eigenvalues are not isomorphic
cycle homomorphism density is given by the trace of the adjacency matrix
filter permutation invariance
fully connected readout layer
graph automorphism
graph convolutional network
graph convolutions of bandlimited signals converge to the graphon convolution
graph isomorphism
injective GNNs are as powerful as the WL test
integral Lipschitz filters are stable to dilations
perturbations on a graph shift operator
readout layer
the WL test is at least as powerful as a GNN for detecting graph non-isomorphism
we can use GNNs to solve feature-aware semi-supervised learning problems
2025-02-05 graphs lecture 5
2025-02-12 graphs lecture 7
2025-02-17 graphs lecture 8
2025-02-19 graphs lecture 9
2025-02-24 graphs lecture 10
2025-03-03 graphs lecture 11
2025-03-05 graphs lecture 12
2025-03-10 graphs lecture 13
2025-03-24 graphs lecture 14
2025-03-26 lecture 15
2025-04-09 lecture 19
2025-04-14 lecture 20
2025-04-16 lecture 21
2025-02-18 equivariant lecture 3
2025-02-25 equivariant lecture 4
2025-03-04 equivariant lecture 6
Improved Image Classification with Manifold Neural Networks