[[lecture-data]] 2. GNN Analyses
Last time, we left off defining a metric for graphs with potentially different node counts.
Let $G_{n}$ and $G_{m}$ be graphs with $n$ and $m$ nodes respectively, with $n \neq m$. We bring them into a common reference frame by comparing their {==induced graphons==} instead of attempting to compare the graphs directly.
induced graphon
Let $G_{n}$ be a graph with $n$ nodes and adjacency matrix $A$. The graphon induced by $G_{n}$ is $$W_{n}(u,v) = \sum_{i=1}^n \sum_{j=1}^n A_{ij}\, \mathbb{1}(u \in I_{i})\cdot \mathbb{1}(v \in I_{j})$$ where $\mathbb{1}$ is the indicator function and the intervals $I_{k}$ partition $[0,1]$: $$I_{k} = \begin{cases} \left[ \frac{k-1}{n}, \frac{k}{n} \right) & 1 \leq k \leq n-1 \\ \left[ \frac{n-1}{n}, 1 \right] & k=n \end{cases}$$
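As a quick illustration (not from the lecture), the induced graphon of a small graph can be evaluated as a step function on $[0,1]^2$. The helper name `induced_graphon` and the triangle example are my own:

```python
import numpy as np

def induced_graphon(A):
    """Induced graphon W_n of a graph with adjacency matrix A: a step
    function on [0,1]^2 that is constant on each cell I_i x I_j of the
    regular partition of the unit interval."""
    n = A.shape[0]

    def W_n(u, v):
        # Locate the partition intervals containing u and v.
        # min(..., n-1) makes the last interval closed: I_n = [(n-1)/n, 1].
        i = min(int(np.floor(u * n)), n - 1)
        j = min(int(np.floor(v * n)), n - 1)
        return A[i, j]

    return W_n

# Triangle (complete graph on 3 nodes).
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])
W = induced_graphon(A)
W(0.1, 0.5)   # u in I_1, v in I_2 -> A[0,1] = 1
W(0.1, 0.2)   # both in I_1       -> A[0,0] = 0
```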
cut norm (kernels)
Let $W$ be a kernel on $[0,1]^2$. Its cut norm is defined as $$\lvert \lvert W \rvert \rvert_{\square} = \sup_{S,T \subseteq [0,1]} \left|\int \int _{S\times T} W(u,v)\, du \, dv \right|$$
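For the graphon induced by a small graph, the supremum in the cut norm is attained on unions of partition cells, so it can be computed exactly by brute force over node subsets. A minimal sketch (exponential in $n$, so only for tiny graphs; function name is mine):

```python
import itertools
import numpy as np

def cut_norm_step(M):
    """Exact cut norm of the step kernel induced by matrix M (entries may
    be signed). The bilinear form is linear in the measure of each cell,
    so the supremum over measurable S, T is attained on unions of
    partition cells, i.e. node subsets, scaled by 1/n^2."""
    n = M.shape[0]
    subsets = [s for r in range(n + 1)
               for s in itertools.combinations(range(n), r)]
    return max(abs(M[np.ix_(S, T)].sum()) / n**2
               for S in subsets for T in subsets)

# Single edge on 2 nodes: S = T = {both nodes} gives |2| / 2^2 = 0.5.
A = np.array([[0, 1],
              [1, 0]])
cut_norm_step(A)   # -> 0.5
```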
NOTE
We want to use the kernel cut norm to be able to compute a distance between the two induced graphons (recall we do this by dividing the interval into subintervals each corresponding to a node).
However, the cut norm does not take into account permutations of the intervals (a limitation), so we cannot use this norm to define our metric directly. To take these into account, we use measure-preserving bijections on the sample space/support of the graphon (typically the unit interval), which we can think of as “permutations”.
Cut Metric
The cut metric for two graphons (kernels) is given by $$\delta_{\square}(W,W') = \inf_{\varphi} \lvert \lvert W^{\varphi}-W' \rvert \rvert_{\square}$$ where $\lvert \lvert \cdot \rvert \rvert_{\square}$ is the cut norm, $W^{\varphi}(u,v) = W(\varphi(u), \varphi(v))$, and the $\varphi$ are measure-preserving bijections (on the unit interval).
Thus, for two arbitrary graphs, we can define the cut distance:
Cut distance (arbitrary graphs)
Let $G_{n}$ and $G_{m}$ be two graphs with possibly different node counts $n$ and $m$ respectively. The cut distance is given by $$\delta_{\square}(G_{n}, G_{m}) = \delta_{\square}(W_{n}, W_{m})$$ where $W_{n}$ and $W_{m}$ are the induced graphons of the graphs.
see cut distance
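To make the cut distance concrete for two same-size graphs, one can minimize the cut norm of the difference of induced graphons over node relabelings — a finite subset of the measure-preserving bijections, so this gives an upper bound on the infimum. A rough sketch with made-up helper names:

```python
import itertools
import numpy as np

def cut_norm_step(M):
    """Cut norm of the step kernel induced by matrix M: exact maximum
    over node subsets S, T (valid because the kernel is piecewise
    constant on the regular partition)."""
    n = M.shape[0]
    subsets = [s for r in range(n + 1)
               for s in itertools.combinations(range(n), r)]
    return max(abs(M[np.ix_(S, T)].sum()) / n**2
               for S in subsets for T in subsets)

def cut_distance_upper(A, B):
    """Upper bound on the cut distance delta_square between two graphs
    of equal size: minimize over node relabelings, which form a finite
    subset of all measure-preserving bijections on [0, 1]."""
    n = A.shape[0]
    perms = itertools.permutations(range(n))
    return min(cut_norm_step(A[np.ix_(p, p)] - B) for p in perms)

# Two isomorphic 3-node paths: a relabeling aligns them exactly,
# so the minimized cut norm is 0.
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])  # path centered at node 0
B = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])  # path centered at node 1
cut_distance_upper(A, B)   # -> 0.0
```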
NOTE
We can show equivalence of right- and left-homomorphism density convergence with convergence in the cut distance for dense graphs.
Theorem
A sequence of graphs $\{G_{n}\}$ converging to a graphon $W$ also converges in the cut metric: $$\delta_{\square}(W_{n}, W) \to 0$$ where $W_{n}$ is the induced graphon of $G_{n}$.
see graph sequence converges if and only if the induced graphon sequence converges
In fact, the way to show that left and right convergence of the homomorphism density are equivalent is via convergence with respect to the cut metric/cut norm, as above.
Proofs are in the book *Large Networks and Graph Limits* by Lovász (available online). For the equivalence in homomorphism density, see the section on counting and inverse counting lemmas.
Summary
This answers the question that we had last time
Question
What happens when the graph grows?
- Can we measure how “close” two large graphs are? ie, can we define a metric for two large graphs?
- The limiting object of a convergent graph sequence is a graphon. We use the (left) homomorphism density to show this convergence.
- The homomorphism density is difficult to use as a distance. Instead, we define the cut distance to compare arbitrary graphs through their induced graphons.
Now, we address
Question
- Can we prove the GNN is continuous with respect to this metric?
Relationship between the cut norm and L-p norm for graphons
(and graphon distances, which have codomain $[-1,1]$)
Note
Inequality results comparing the cut norm and various $L_{p}$ norms hold for kernels that are more general than our graphons. We want the codomain to be $[-1,1]$ because taking the cut norm of a difference of graphons gives values in this interval.
Let $W$ be a kernel with values in $[-1,1]$.
Trivially, $\lvert \lvert W \rvert \rvert_{\square} \leq \lvert \lvert W \rvert \rvert_{1}$: easy to see from the definition of the cut norm, where we are essentially computing the $L_{1}$ norm restricted to a subset of the domain. The rest of the inequalities we already know.
- convergence in $L_{p}$ implies cut norm convergence
Theorem
Let $W$ be a kernel with values in $[-1,1]$. Then $$\lvert \lvert W \rvert \rvert_{\square} \overset{(1)}{\leq} \lvert \lvert W \rvert \rvert_{1} \overset{(2)}{\leq} \lvert \lvert W \rvert \rvert_{2} \overset{(3)}{\leq} \lvert \lvert W \rvert \rvert_{\infty} \overset{(4)}{\leq} 1$$ and convergence in $L_{p}$ for $p \in \{1, 2, \infty\}$ implies convergence in the cut norm.
Proof
Easy to see (1) from the definition of the cut norm:
cut norm (kernels)
Let $W$ be a kernel on $[0,1]^2$. Its cut norm is defined as $$\lvert \lvert W \rvert \rvert_{\square} = \sup_{S,T \subseteq [0,1]} \left|\int \int _{S\times T} W(u,v)\, du \, dv \right|$$
This is computing the $L_{1}$ norm restricted to a subset $S \times T$ of the domain, so $\lvert \lvert W \rvert \rvert_{\square} \leq \lvert \lvert W \rvert \rvert_{1}$.
(2) and (3) are a common result from functional analysis (the $L_{p}$ norms are nested on a probability space), and (4) holds because of the selected codomain $[-1,1]$ for $W$.
Convergence in the cut norm follows immediately from this hierarchy.
see convergence in L-p implies convergence in cut norm
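We can sanity-check this hierarchy numerically on a random step kernel with values in $[-1,1]$: for step kernels the $L_p$ integrals are cell averages and the cut norm reduces to a maximum over node subsets. A sketch, with names of my own:

```python
import itertools
import numpy as np

def step_kernel_norms(M):
    """Cut, L1, L2 and Linf norms of the step kernel induced by M
    (entries in [-1, 1]). Lp integrals over [0,1]^2 are averages over
    the n x n cells; the cut norm is a max over node subsets."""
    n = M.shape[0]
    subsets = [s for r in range(n + 1)
               for s in itertools.combinations(range(n), r)]
    cut = max(abs(M[np.ix_(S, T)].sum()) / n**2
              for S in subsets for T in subsets)
    return cut, np.abs(M).mean(), np.sqrt((M**2).mean()), np.abs(M).max()

rng = np.random.default_rng(0)
M = rng.uniform(-1, 1, size=(4, 4))
M = (M + M.T) / 2                    # symmetric kernel, values in [-1, 1]
cut, L1, L2, Linf = step_kernel_norms(M)
# cut <= L1 <= L2 <= Linf <= 1, matching the norm hierarchy
```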
In the other direction: cut norm convergence implies convergence in the induced $2$-norm
Theorem
Let $W$ be a kernel with $\lvert \lvert W \rvert \rvert_{\infty} \leq 1$. Then $$\lvert \lvert W \rvert \rvert_{2 \to 2} \overset{(1)}{\leq} \sqrt{ 2 \lvert \lvert W \rvert \rvert_{\infty \to 1} } \overset{(2)}{\leq} \sqrt{ 8 \lvert \lvert W \rvert \rvert_{\square} }$$ ie, cut norm convergence implies convergence in the induced $2$-norm
see Convergence in the cut norm implies convergence in L2
Proof of (2)
We first show inequality (2). Using the definition of the cut norm, we have
$$\begin{aligned} \lvert \lvert W \rvert \rvert_{\square} &= \sup_{S,T \subseteq [0,1]} \left\lvert \int \int _{S \times T} W(u,v) \, du \, dv \right\rvert \\ &= \sup_{f,g: [0,1]\to[0,1]} \left\lvert \int_{0}^1 \int_{0}^1 W(u,v) f(u) g(v) \, du \, dv \right\rvert \\ (*) \quad &= \sup_{f,g: [0,1] \to [0,1]}\lvert \langle T_{W}f, g \rangle \rvert \end{aligned}$$ Since the form is linear in $f$ and $g$ and we are taking the supremum, taking it over indicator functions of $S$ and $T$ is equivalent to taking it over all functions $f,g:[0,1]\to[0,1]$. To get $(*)$, we notice that $\int _{0}^1 W(u,v) f(u) \, du$ is an [[integral linear operator]] with kernel $W$. We denote this with $T_{W}$ so that $$T_{W}f\,\,(v) =\int _{0}^1 W(u,v) f(u) \, du$$
- or equivalently, $T_{W}f = \int _{0}^1 W(u,\cdot) f(u) \, du$

We can then see that the definition becomes the absolute value of an inner product. Note that $\lvert \lvert W \rvert \rvert_{\infty \to 1}$ is an induced [[operator norm]] of the [[graphon]] mapping functions from $L_{\infty}$ to $L_{1}$: $$\begin{aligned} \lvert \lvert W \rvert \rvert_{\infty \to 1} &= \sup_{-1 \leq g \leq 1} \lvert \lvert T_{W}g \rvert \rvert_{1} \\ &= \sup_{-1 \leq g \leq 1} \int \lvert T_{W}g(x) \rvert \, dx \\ &= \sup_{-1 \leq f,g \leq 1} \langle T_{W}g, f \rangle \end{aligned}$$
- we omit the denominator in our usual definition of the [[operator norm]] since $g$ is bounded by $-1$ and $1$ and $g \in L_{\infty}$
- We can rewrite the $L_{1}$ norm as an inner product with a second function $f$ since we are taking the supremum
- For more about $L_{p}$ spaces see [wikipedia](https://en.wikipedia.org/wiki/Lp_space#Lp_spaces_and_Lebesgue_integrals) (hand wavy/cursory for this class on functional analysis details)

Writing $g = g_{1} - g_{2}$ and $f = f_{1} - f_{2}$ with $f_{1}, f_{2}, g_{1}, g_{2}: [0,1] \to [0,1]$, we can rewrite this as $$\begin{aligned}\lvert \lvert W \rvert \rvert_{\infty \to 1} &= \sup_{0 \leq f_{1}, f_{2} \leq 1,\ 0 \leq g_{1}, g_{2} \leq 1} \langle T_{W}(g_{1}-g_{2}), f_{1}-f_{2} \rangle \\ (**) &\leq \sup_{0 \leq g_{1}, f_{1} \leq 1} \lvert \langle T_{W}g_{1}, f_{1} \rangle \rvert + \sup_{0 \leq g_{1},f_{2} \leq 1} \lvert \langle T_{W}g_{1}, f_{2} \rangle \rvert + \sup_{0 \leq g_{2},f_{1}\leq 1} \lvert \langle T_{W}g_{2}, f_{1} \rangle \rvert + \sup_{0 \leq g_{2}, f_{2} \leq 1} \lvert \langle T_{W}g_{2}, f_{2} \rangle \rvert \\ ({\dagger}) &\leq 4 \lvert \lvert W \rvert \rvert_{\square} \end{aligned}$$ Where we get $(**)$ by the [[triangle inequality]] and $({\dagger})$ from the definition of $\lvert \lvert \cdot \rvert \rvert_{\square}$ in $(*)$.
- Note that this is a loose bound!

And this gives us the desired result for (2).
Proof of (1)
Proving (1) is more involved and uses more functional analysis.
Use the Riesz-Thorin interpolation theorem for complex spaces: $$\lvert \lvert T \rvert \rvert_{p_{\theta} \to q_{\theta}} \leq \lvert \lvert T \rvert \rvert_{p_{0} \to q_{0}}^{1-\theta}\, \lvert \lvert T \rvert \rvert_{p_{1} \to q_{1}}^{\theta}$$ Where
- $\theta \in (0,1)$
- $\frac{1}{p_{\theta}} = \frac{1-\theta}{p_{0}} + \frac{\theta}{p_{1}}$ and $\frac{1}{q_{\theta}} = \frac{1-\theta}{q_{0}} + \frac{\theta}{q_{1}}$
- we take the endpoints $(p_{0}, q_{0}) = (\infty, 1)$ and $(p_{1}, q_{1}) = (1, \infty)$
- with $\theta = \frac{1}{2}$, this gives $p_{\theta} = q_{\theta} = 2$.
Define the cut norm over complex-valued test functions, $\lvert \lvert W \rvert \rvert_{\square, \mathbb{C}}$. For complex functions, we can see that $$\begin{aligned} \text{for complex functions } \lvert \lvert W \rvert \rvert_{\infty \to 1} &= \lvert \lvert W \rvert \rvert_{\square, \mathbb{C}} \\ &\leq 2\lvert \lvert W \rvert \rvert_{\infty \to 1} \text{ for real functions} \end{aligned}$$ We then have $$\lvert \lvert W \rvert \rvert _{p_{1} \to q_{1}} = \lvert \lvert W \rvert \rvert _{1 \to \infty} \leq \lvert \lvert W \rvert \rvert_{\infty} \leq 1$$ since $\lvert W \rvert \leq 1$. Interpolating with $\theta = \frac{1}{2}$ between these two endpoint bounds gives $$\lvert \lvert W \rvert \rvert_{2 \to 2} \leq \left( 2\lvert \lvert W \rvert \rvert_{\infty \to 1} \right)^{1/2} \lvert \lvert W \rvert \rvert_{1 \to \infty}^{1/2} \leq \sqrt{ 2\lvert \lvert W \rvert \rvert_{\infty \to 1} }$$ and this gives the desired result.
convergence in the cut norm iff convergence in the 2 norm
idea/preview: If we have a convergent sequence of graphs that converges to a graphon, then the graph shift operator (typically the adjacency matrix, suitably normalized) converges in an “operator sense” to the integral linear operator $T_{W}$ that we defined above.
see graph shift operators converge to graphon shift operators
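As a small numerical preview (my own construction, with the illustrative graphon $W(u,v) = uv$): applying the normalized adjacency of a template graph to a sampled signal approximates the integral operator $T_W$.

```python
import numpy as np

# Illustrative graphon and test signal (assumed, not from the lecture).
W = lambda u, v: u * v
f = lambda u: np.ones_like(u)          # constant signal f(u) = 1
T_W_f = lambda v: v / 2                # closed form: int_0^1 u*v*1 du = v/2

n = 1000
u = np.arange(n) / n                   # template-graph node labels
A = np.outer(u, u)                     # [A_n]_ij = W(u_i, u_j)
x = f(u)                               # sampled graph signal

approx = (A / n) @ x                   # normalized graph shift applied to x
err = np.max(np.abs(approx - T_W_f(u)))
# err is O(1/n): the graph shift approaches the graphon shift operator
```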
Takeaways
- We defined convergent graph sequence converging to graphon
- If two graphs belong to the same sequence, we know they are similar using the homomorphism density
- Defined a metric with the cut metric to compute distances between graphs
- If we have convergence in some $L_{p}$ spaces, we also have convergence in the cut norm, and vice versa.
- We can look at graphons as operators
can we use graphons to sample subgraphs?
Recall the question we had
Question
What if the graph is too large and I don’t have enough resources to train on it? (recall the cost of a GNN forward pass scales with $L$, the number of layers, and $K$, the order of the filter polynomial)
we can think of graphons as generative models as well
Example
Graphon as a limiting object for some sequence of graphs
Graphon as generating object for finite graphs
options for sampling nodes and edges from graphons
see ways to sample graphs from graphons
template graph
A template graph is a way to sample a graph from a graphon. The nodes are defined by partitioning $[0,1]$ in a grid (regular partition) in the same way we partitioned for induced graphons: $$I_{k} = \begin{cases} \left[ \frac{k-1}{n}, \frac{k}{n} \right) & 1 \leq k \leq n-1 \\ \left[ \frac{n-1}{n}, 1 \right] & k=n \end{cases}$$ So that $I_{1} \cup I_{2} \cup\dots \cup I_{n} = [0,1]$ and the node labels are $u_{j}=\frac{j-1}{n}$ for each $j$. The [[adjacency matrix]] is then given as $$[A_{n}]_{ij} = W(u_{i},u_{j})$$ Where $W$ is the [[graphon]] that we sample from. This is a complete, [[unweighted graph|weighted graph]] with edge weights coming from the [[graphon]] evaluated at each node pair $(u_{i},u_{j}) \in [0,1]^2$. This is the simplest way to sample a graph. We can think of it as the graph sampling counterpart to inducing a graphon.
see template graph
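A minimal sketch of template-graph sampling; the graphon $W(u,v) = uv$ and the helper name are illustrative assumptions:

```python
import numpy as np

def template_graph(W, n):
    """Deterministic template graph from graphon W: node labels are the
    grid points u_j = (j-1)/n, and [A_n]_ij = W(u_i, u_j) gives a
    complete weighted graph (no randomness involved)."""
    u = np.arange(n) / n
    U, V = np.meshgrid(u, u, indexing="ij")
    return u, W(U, V)

W = lambda u, v: u * v        # illustrative graphon, not from the lecture
u, A = template_graph(W, 4)
A[2, 3]                       # W(2/4, 3/4) = 0.5 * 0.75 = 0.375
```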
weighted graphs
Weighted graphs are a way to sample a graph from a graphon. Each node label is sampled uniformly at random from the unit interval: $u_{i} \sim \text{Uniform}[0,1]$.
The edges are then defined in the same way as a template graph
Def
Fully random graphs are a way to sample a graph from a graphon. Like a weighted graph (sample), the node labels are sampled uniformly from the unit interval ($u_{i} \sim \text{Uniform}[0,1]$).
The edges are then sampled as Bernoulli random variables $$[A_{n}]_{ij} \sim \text{Bernoulli}(W(u_{i}, u_{j})), \qquad [A_{n}]_{ij} = [A_{n}]_{ji}$$ So the resulting graph is undirected and unweighted ($[A_{n}]_{ij} \in \{0,1\}$).
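A sketch of fully random sampling (the graphon $W(u,v) = uv$ and the function name are my own choices for illustration):

```python
import numpy as np

def sample_fully_random_graph(W, n, rng):
    """Fully random graph from graphon W: node labels u_i ~ Uniform[0,1],
    edges A_ij ~ Bernoulli(W(u_i, u_j)) drawn independently for i < j,
    then symmetrized with a zero diagonal (undirected, unweighted)."""
    u = np.sort(rng.uniform(size=n))   # sorting is just a convenient labeling
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            A[i, j] = A[j, i] = rng.uniform() < W(u[i], u[j])
    return u, A

W = lambda u, v: u * v                 # illustrative graphon
rng = np.random.default_rng(0)
u, A = sample_fully_random_graph(W, 8, rng)
# A is symmetric and {0,1}-valued with zero diagonal.
```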
NOTE
Generally we are interested in the fully random graphs.
- [?] could we sample the nodes randomly, then sample the edge weights from the 2D interval?
All of the graphs above converge to $W$ in some sense.
Template graphs: trivial
Theorem
Let $\{G_{n}\}$ be a sequence of template graphs sampled from graphon $W$, with induced graphons $W_{n}$. Then $\delta_{\square}(W_{n}, W) \to 0$ as $n \to \infty$.
Proof
Trivial
- The definition of template graphs is fully deterministic, so there is no randomness here. Since this is the reverse of the idea of thinking of graphons as graphs with an uncountable number of nodes, it is easy to see that the induced graphons converge to $W$ in an $L_{p}$ sense, and convergence in L-p implies convergence in cut norm
see template graphs converge to the graphon
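To see this deterministic convergence numerically, we can measure the $L_1$ gap between an illustrative graphon and the graphon induced by its $n$-node template graph; since the cut norm is bounded by the $L_1$ norm, this gap also bounds the cut-norm gap. A sketch with the assumed graphon $W(u,v) = uv$:

```python
import numpy as np

def l1_gap(W, n, grid=400):
    """Numerical L1 distance between graphon W and the step graphon
    induced by its n-node template graph, evaluated at midpoints of a
    fine grid over [0,1]^2."""
    t = (np.arange(grid) + 0.5) / grid
    U, V = np.meshgrid(t, t, indexing="ij")
    I = np.minimum((U * n).astype(int), n - 1)   # cell index of each point
    J = np.minimum((V * n).astype(int), n - 1)
    Wn = W(I / n, J / n)                         # step value on each cell
    return np.abs(Wn - W(U, V)).mean()

W = lambda u, v: u * v                           # illustrative graphon
gaps = [l1_gap(W, n) for n in (2, 4, 8, 16)]
# gaps shrink roughly like 1/n as the partition refines
```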
Theorem
Let $\{G_{n}\}$ be a sequence of weighted graphs sampled from graphon $W$, with induced graphons $W_{n}$. Then, with probability approaching $1$, $\delta_{\square}(W_{n}, W) \to 0$, where $\delta_{\square}$ is the cut metric; ie, $W_{n} \to W$ in probability.
see weighted sampled graphs converge to the graphon in probability
fully random graphs
Theorem
Let $\{G_{n}\}$ be a sequence of “fully random” graphs sampled from graphon $W$, with induced graphons $W_{n}$. Then, with probability approaching $1$, $\delta_{\square}(W_{n}, W) \to 0$, where $\delta_{\square}$ is the cut metric; ie, $W_{n} \to W$ in probability.
see fully random graphs converge to the graphon in probability
NOTE
We’ll look at the proofs for these later.
Tip
If a graph is too big to train on, we can look at the induced graphon and subsample new, smaller graphs from it to train on instead.
GNN continuity
Question
How do we determine whether a GNN is continuous with respect to the cut metric?
In order to transfer from small graphs to large graphs, we need continuity. In particular, we need Lipschitz continuity with a not-very-large constant.
Question
Do GNNs converge?
This question has a long answer. To start to answer it, we need to introduce graphon signals and graphon signal processing.
- These are extensions of Graph Signals and Graph Signal Processing
road map
- generalize up from graphs to graphons
- graph convolution to graphon convolution
- intuition behind why GNNs generalize well with scale
- then look at bounds
finite sampling bounds
minimum sample sizes
error bounds
NOTE
we focus a lot on information processing pipelines and GNNs, but there are also many other applications for graphons. Many people are interested in graphon estimation / the underlying distribution for some graphs.
Graphon signal
A graphon signal is defined as a function $X: [0,1] \to \mathbb{R}$.
NOTE
contrast this with graph signals, which we defined as vectors $\mathbf{x} \in \mathbb{R}^n$.
NOTE
We focus on signals in $L_{2}([0,1])$, or “finite energy signals”: $\int_{0}^{1} \lvert X(u) \rvert^2 \, du < \infty$.
see graphon signal
Like a graphon, a graphon signal is the limit of a convergent sequence of graph signals. As when defining our cut distance for differently sized graphs, the graph signals may have different sizes, since the dimension changes with the number of nodes $n$. Similarly, we solve this with the induced graphon signal:
Induced Graphon Signal
Let $\mathbf{x} \in \mathbb{R}^n$ be a graph signal. The induced graphon signal is defined as the pair $(W_{n}, X_{n})$, where $W_{n}$ is the induced graphon and $$X_{n}(u) = \sum_{i=1}^n x_{i}\, \mathbb{1}(u \in I_{i})$$ Where $\mathbb{1}$ is the indicator function and $$I_{k} = \begin{cases} \left[ \frac{k-1}{n}, \frac{k}{n} \right) & 1 \leq k \leq n-1 \\ \left[ \frac{n-1}{n}, 1 \right] & k=n \end{cases}$$
Example
Both representations of 3-node graphs (induced graphon above and induced graphon signal below)
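The induced graphon signal is the analogous step-function construction for signals; a minimal sketch (helper name mine):

```python
import numpy as np

def induced_signal(x):
    """Induced graphon signal of a graph signal x in R^n: the step
    function X_n(u) = sum_i x_i * 1(u in I_i) on [0, 1]."""
    n = len(x)

    def X_n(u):
        i = min(int(np.floor(u * n)), n - 1)   # last interval I_n is closed
        return x[i]

    return X_n

x = np.array([1.0, -2.0, 3.0])   # a signal on a 3-node graph
X = induced_signal(x)
X(0.1)    # u in I_1 -> 1.0
X(0.5)    # u in I_2 -> -2.0
X(1.0)    # u in I_3 -> 3.0
```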