GNNs inherit stability from their layers
[[concept]]
Theorem
Let $\Phi(\cdot\,; S, \mathcal{H})$ be a GNN with $L$ layers built from graph filters $H_\ell(S)$ and pointwise nonlinearities, and let $\hat{S}$ be a perturbed version of the shift operator $S$ with perturbation size $\varepsilon$. Then:
(1) if the filters are Lipschitz, the GNN is stable to dilations $\hat{S} = (1 + \varepsilon)\, S$
(stable to dilation/scaling)
(2) If the filters are Lipschitz, the GNN is stable to additive perturbations $\hat{S} = S + E$ with $\|E\| \le \varepsilon$
(stable to additive perturbations)
(3) If the filters are integral Lipschitz, the GNN is stable to relative perturbations $\hat{S} = S + ES + SE$ with $\|E\| \le \varepsilon$
(stable to relative perturbations)
In each case the GNN inherits the stability of its filters, scaled by the number of layers: if every filter satisfies $\|H_\ell(S) - H_\ell(\hat{S})\| \le C\varepsilon$, then $\|\Phi(x; S) - \Phi(x; \hat{S})\| \le L\, C\varepsilon\, \|x\|$.
Moreover, GNNs perform better than their constituent filters: the pointwise nonlinearities spread high-frequency spectral content toward lower frequencies, where stable filters can still discriminate it, so a GNN can be simultaneously stable and discriminative, whereas a linear filter must trade one for the other.
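As a numerical sanity check of the inherited bound, here is a minimal sketch (assuming numpy; the graph, input, and filter taps are illustrative choices, not from the source). It builds a small single-feature GNN with non-amplifying polynomial filters and ReLU, applies an additive perturbation of size $\varepsilon$, and verifies that the output drift is bounded by the sum of the per-layer filter perturbations times $\|x\|$, as the theorem predicts.

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps = 20, 1e-2

# Symmetric shift operator (e.g. an adjacency matrix), normalized so ||S|| = 1.
S = rng.random((n, n))
S = (S + S.T) / 2
S /= np.linalg.norm(S, 2)

# Additive perturbation S_hat = S + E with ||E|| <= eps.
E = rng.standard_normal((n, n))
E = (E + E.T) / 2
E *= eps / np.linalg.norm(E, 2)
S_hat = S + E

# Hypothetical filter taps, one filter per layer (illustrative values).
taps = [np.array([0.5, 0.3, 0.2]),
        np.array([0.4, 0.4, 0.2]),
        np.array([0.6, 0.2, 0.2])]

def graph_filter(S, h):
    """Polynomial graph filter H(S) = sum_k h_k S^k, rescaled to be non-amplifying."""
    H = sum(hk * np.linalg.matrix_power(S, k) for k, hk in enumerate(h))
    return H / max(1.0, np.linalg.norm(H, 2))   # enforce ||H(S)|| <= 1

def gnn(S, x):
    """Single-feature GNN: x_l = relu(H_l(S) x_{l-1})."""
    for h in taps:
        x = np.maximum(graph_filter(S, h) @ x, 0.0)
    return x

x = rng.standard_normal(n)
drift = np.linalg.norm(gnn(S, x) - gnn(S_hat, x))

# Bound from the theorem: sum of per-layer filter perturbations, times ||x||.
bound = sum(np.linalg.norm(graph_filter(S, h) - graph_filter(S_hat, h), 2)
            for h in taps) * np.linalg.norm(x)
assert drift <= bound + 1e-12
```

With non-amplifying filters and a 1-Lipschitz nonlinearity the assertion must hold for any graph and input, which is exactly the content of the proof below; shrinking `eps` shrinks `drift` proportionally.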
Proof
We begin with some non-restrictive additional assumptions:
- the input is normalized at all layers (easy to achieve with non-amplifying filters, ie $\|H_\ell(S)\| \le 1$, so that $\|x_\ell\| \le \|x\|$)
- the activation function/nonlinearity is normalized Lipschitz, ie it has a Lipschitz constant of 1: $\|\sigma(a) - \sigma(b)\| \le \|a - b\|$, with $\sigma(0) = 0$
Let
$$x_\ell = \sigma\big(H_\ell(S)\, x_{\ell-1}\big), \qquad \hat{x}_\ell = \sigma\big(H_\ell(\hat{S})\, \hat{x}_{\ell-1}\big)$$
with $x_0 = \hat{x}_0 = x$, and assume each filter satisfies the stability bound $\|H_\ell(S) - H_\ell(\hat{S})\| \le C\varepsilon$.
For each layer $\ell$, using that $\sigma$ is normalized Lipschitz and then adding and subtracting $H_\ell(\hat{S})\, x_{\ell-1}$,
$$\|x_\ell - \hat{x}_\ell\| \le \|H_\ell(S)\, x_{\ell-1} - H_\ell(\hat{S})\, \hat{x}_{\ell-1}\| \le \|H_\ell(S) - H_\ell(\hat{S})\|\, \|x_{\ell-1}\| + \|H_\ell(\hat{S})\|\, \|x_{\ell-1} - \hat{x}_{\ell-1}\| \le C\varepsilon\, \|x\| + \|x_{\ell-1} - \hat{x}_{\ell-1}\|$$
We can apply the same reasoning to get a similar expression for $\|x_{\ell-1} - \hat{x}_{\ell-1}\|$; unrolling the recursion down to $x_0 = \hat{x}_0 = x$ gives
$$\|\Phi(x; S) - \Phi(x; \hat{S})\| = \|x_L - \hat{x}_L\| \le L\, C\varepsilon\, \|x\| \qquad \blacksquare$$
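The per-layer recursion can also be checked numerically, layer by layer. A minimal sketch (assuming numpy; graph, taps, and input are again illustrative): at each layer it verifies $\|x_\ell - \hat{x}_\ell\| \le \|H_\ell(S) - H_\ell(\hat{S})\|\,\|x\| + \|x_{\ell-1} - \hat{x}_{\ell-1}\|$, which is the inequality being unrolled above.

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps = 15, 1e-2

# Normalized symmetric shift operator and an additive perturbation of size eps.
S = rng.random((n, n)); S = (S + S.T) / 2; S /= np.linalg.norm(S, 2)
E = rng.standard_normal((n, n)); E = (E + E.T) / 2
E *= eps / np.linalg.norm(E, 2)
S_hat = S + E

taps = [np.array([0.5, 0.5]), np.array([0.7, 0.3])]  # hypothetical per-layer taps

def filt(S, h):
    """Polynomial filter, rescaled to be non-amplifying: ||H(S)|| <= 1."""
    H = sum(hk * np.linalg.matrix_power(S, k) for k, hk in enumerate(h))
    return H / max(1.0, np.linalg.norm(H, 2))

x = x_hat = rng.standard_normal(n)
x_norm = np.linalg.norm(x)
prev_diff = 0.0  # ||x_0 - x_hat_0|| = 0 since both start from the same input
for h in taps:
    H, H_hat = filt(S, h), filt(S_hat, h)
    x, x_hat = np.maximum(H @ x, 0.0), np.maximum(H_hat @ x_hat, 0.0)
    diff = np.linalg.norm(x - x_hat)
    # Recursion: ||x_l - x_hat_l|| <= ||H_l(S) - H_l(S_hat)|| ||x|| + ||x_{l-1} - x_hat_{l-1}||
    assert diff <= np.linalg.norm(H - H_hat, 2) * x_norm + prev_diff + 1e-12
    prev_diff = diff
```

Each per-layer inequality holds by the Lipschitz and triangle-inequality steps above, so the final `prev_diff` is bounded by the sum of the per-layer filter perturbations times $\|x\|$, matching the theorem's $L\,C\varepsilon\,\|x\|$ bound.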