2025-03-26 lecture 15

[[lecture-data]]

2. [[GNN Analyses]]

Last time, we left off defining a metric for graphs with potentially different node counts.

Let $G_n, G_m$ be graphs with $n$ and $m$ nodes respectively, with $n \neq m$ in general. We bring them into a common reference frame by comparing their {induced graphons} instead of attempting to compare the graphs directly.

induced graphon

Let $G_n$ be a graph with $n$ nodes. The graphon induced by $G_n$ is
$$W_n(u,v) = \sum_{i=1}^{n} \sum_{j=1}^{n} A_{ij}\, \mathbb{1}(u \in I_i)\, \mathbb{1}(v \in I_j)$$
where $\mathbb{1}$ is the indicator function and $I_k$ is defined as
$$I_k = \begin{cases} \left[\frac{k-1}{n}, \frac{k}{n}\right) & 1 \leq k \leq n-1 \\ \left[\frac{n-1}{n}, 1\right] & k = n \end{cases}$$
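As a sanity check, the induced graphon is just a step function built from the adjacency matrix. A minimal sketch (the function name and the triangle-graph example are my own, not from the lecture):

```python
import numpy as np

def induced_graphon(A):
    """Step-function graphon W_n induced by adjacency matrix A.

    Node i owns the interval I_i = [(i-1)/n, i/n); the last interval
    is closed at 1, matching the definition above.
    """
    n = A.shape[0]
    def W(u, v):
        i = min(int(np.floor(u * n)), n - 1)  # index of the interval containing u
        j = min(int(np.floor(v * n)), n - 1)
        return A[i, j]
    return W

# Example: a triangle graph on 3 nodes
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
W = induced_graphon(A)
print(W(0.1, 0.5))  # u in I_1, v in I_2 -> A[0, 1] = 1
print(W(0.9, 1.0))  # u, v both in I_3 -> A[2, 2] = 0
```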

cut norm (kernels)

Let $W$ be a kernel on $[0,1]^2$. Its cut norm is defined as
$$\lVert W \rVert_{\square} = \sup_{S,T \subseteq [0,1]} \left\lvert \int_{S \times T} W(u,v)\, du\, dv \right\rvert$$
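For a step kernel coming from an $n \times n$ matrix, the supremum over measurable $S, T$ is attained on unions of the intervals $I_i$, so the cut norm can be computed by brute force over node subsets (exponential in $n$, toy use only). A sketch with function names of my own choosing:

```python
import numpy as np
from itertools import product

def cut_norm_step(M):
    """Cut norm of the step kernel induced by matrix M.

    Scans all subsets S of rows; given S, the best T keeps either all
    columns with positive sum or all with negative sum. Each cell of
    the step kernel has area 1/n^2. Exponential in n -- toy use only.
    """
    n = M.shape[0]
    best = 0.0
    for S in product([0, 1], repeat=n):
        s = np.array(S, dtype=bool)
        if not s.any():
            continue
        col = M[s].sum(axis=0)  # column sums over the rows in S
        best = max(best, col[col > 0].sum(), -col[col < 0].sum())
    return best / n**2

M = np.array([[0.0, 1.0], [1.0, 0.0]])  # induced graphon of a single edge
print(cut_norm_step(M))                 # -> 0.5 (attained at S = T = [0, 1])
```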

Note

We want to use the induced graphons (recall we do this by dividing the interval [0,1] into n subintervals each corresponding to a node).

However, the cut norm does not take into account {ass||permutations of the intervals $I_k$||limitation}, so {has||we cannot use this norm to define our metric directly||consequence}. To take these into account, we use {hha||measure-preserving bijections on the sample space/support of the graphon $W$ (typically the unit interval), which we can think of as "permutations"||workaround}.

Cut Metric

The cut metric for two graphons (kernels) is given by
$$\delta_{\square}(W,W') = \inf_{\varphi} \lVert W^{\varphi} - W' \rVert_{\square}$$
Where $\lVert \cdot \rVert_{\square}$ is the [[cut norm]], $W^{\varphi}(u,v) = W(\varphi(u), \varphi(v))$, and the infimum is taken over measure-preserving bijections $\varphi$ (on the unit interval).

Thus, for two arbitrary graphs, we can define the cut distance:

Cut distance (arbitrary graphs)

Let Gn and Gm be two graphs with possibly different node counts n and m respectively. The cut distance is given by

$$\delta_{\square}(G_n, G_m) = \delta_{\square}(W_n, W_m)$$

Where $W_n, W_m$ are the induced graphons of the graphs.

see [[cut distance]]
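For two equal-size graphs, restricting the measure-preserving bijections to permutations of the intervals $I_k$ gives a brute-force upper bound on $\delta_{\square}$ (the true infimum ranges over all measure-preserving bijections, so this only upper-bounds the cut distance, and it is exponential in $n$). A toy sketch with hypothetical helper names:

```python
import numpy as np
from itertools import permutations, product

def cut_norm_step(M):
    """Brute-force cut norm of the step kernel induced by matrix M."""
    n = M.shape[0]
    best = 0.0
    for S in product([0, 1], repeat=n):
        s = np.array(S, dtype=bool)
        if not s.any():
            continue
        col = M[s].sum(axis=0)
        best = max(best, col[col > 0].sum(), -col[col < 0].sum())
    return best / n**2

def cut_distance_upper(A, B):
    """min over node permutations of the cut norm of the difference;
    upper-bounds delta_square for equal-size graphs."""
    n = A.shape[0]
    return min(cut_norm_step(A[np.ix_(p, p)] - B)
               for p in map(list, permutations(range(n))))

path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])  # path graph P_3
star = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])  # same graph, relabeled
print(cut_distance_upper(path, star))               # -> 0.0 (isomorphic)
```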

Note

We can show equivalence of right- and left-convergence (in [[homomorphism density]]) with convergence in the [[cut distance]] for dense graphs.

Theorem

Sequences of graphs $\{G_n\}$ converging to [[graphon]] $W$ also converge in the [[cut norm]]:

$$G_n \to W \iff \lVert W_n - W \rVert_{\square} \to 0 \text{ as } n \to \infty$$

Where $W_n$ is the [[induced graphon]] of $G_n$.

see [[graph sequence converges if and only if the induced graphon sequence converges]]

In fact, one can show that left and right convergence are both equivalent to convergence in the cut metric/[[cut norm]] as above.

Proofs are in the book *Large Networks and Graph Limits* by Lovász (available online). For the equivalence in [[homomorphism density]], see the sections on the counting and inverse counting lemmas.

Summary

This answers the question that we had last time

Question

What happens when the graph grows?

  1. Can we measure how “close” two large graphs are? ie, can we define a metric for two large graphs?
Answer

The limiting object of a [[convergent graph sequence]] is a [[graphon]]. We use the (left) [[homomorphism density]] to show this convergence.

Answer

Now, we address

Question

2. Can we prove the GNN is continuous WRT this metric?

Relationship between the graphon norms

(and [[graphon distances]], which have codomain $[-1,1]$)

Note

Inequality results comparing the $L^p$ norms are for kernels that are {ass||more general than our graphons||comparison to graphon}. We want {has||the codomain to be $[-1,1]$||modification} because {hha||taking the [[cut norm]] for a difference of graphons $W - W'$ gives values in this interval.||modification reason}

Let $W : [0,1]^2 \to [-1,1]$.

Trivially:

$$\lVert W \rVert_{\square} \leq \lVert W \rVert_{1} \leq \lVert W \rVert_{2} \leq \lVert W \rVert_{\infty} \leq 1$$

Easy to see from the definition of the [[cut norm]], where we are essentially computing the $L^1$ norm over a restricted domain; the remaining inequalities are standard.

Theorem

Let $W : [0,1]^2 \to [-1,1]$. Then

$$\lVert W \rVert_{\square} \overset{(1)}{\leq} \lVert W \rVert_{1} \overset{(2)}{\leq} \lVert W \rVert_{2} \overset{(3)}{\leq} \lVert W \rVert_{\infty} \overset{(4)}{\leq} 1$$

And convergence in $L^p$ for $p \geq 1$ implies convergence in the [[cut norm]].

Proof

Easy to see (1) from the definition of the cut norm:

cut norm (kernels)

Let $W$ be a kernel on $[0,1]^2$. Its cut norm is defined as
$$\lVert W \rVert_{\square} = \sup_{S,T \subseteq [0,1]} \left\lvert \int_{S \times T} W(u,v)\, du\, dv \right\rvert$$

This is computing the L-1 norm restricted to a subset of the nodes, so ||W||||W||1.

(2) and (3) are a common result from functional analysis, and (4) is because of the selected codomain for W.

Convergence in the [[cut norm]] follows immediately from this hierarchy.

see [[convergence in L-p implies convergence in cut norm]]
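The chain $\lVert W \rVert_1 \leq \lVert W \rVert_2 \leq \lVert W \rVert_\infty \leq 1$ is easy to check numerically for a step kernel with values in $[-1,1]$ (the cut norm is omitted here since computing it exactly is exponential). A quick sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
W = rng.uniform(-1, 1, size=(n, n))   # step kernel with values in [-1, 1]

# L^p norms of the step kernel on [0,1]^2: each cell has area 1/n^2
L1 = np.abs(W).sum() / n**2
L2 = np.sqrt((W**2).sum() / n**2)
Linf = np.abs(W).max()

assert L1 <= L2 <= Linf <= 1          # inequalities (2)-(4) of the theorem
print(L1, L2, Linf)
```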

In the other direction:

$$\lVert W \rVert_{2,2} \leq \sqrt{4\, \lVert W \rVert_{\infty,1}} \leq \sqrt{16\, \lVert W \rVert_{\square}}$$

[[cut norm]] convergence implies convergence in $L^2$

Theorem

Let $W : [0,1]^2 \to [-1,1]$. Then

$$\lVert W \rVert_{2,2} \overset{(1)}{\leq} \sqrt{4\, \lVert W \rVert_{\infty,1}} \overset{(2)}{\leq} \sqrt{16\, \lVert W \rVert_{\square}}$$

ie, [[cut norm]] convergence implies convergence in L2

see [[Convergence in the cut norm implies convergence in L2]]

Proof of (2)

We first show inequality (2). Using the definition of the [[cut norm]], we have

$$\begin{aligned} \lVert W \rVert_{\square} &= \sup_{S,T \subseteq [0,1]} \left\lvert \int_{S \times T} W(u,v)\, du\, dv \right\rvert \\ &= \sup_{f,g : [0,1] \to [0,1]} \left\lvert \int_0^1 \int_0^1 W(u,v) f(u) g(v)\, du\, dv \right\rvert \\ &\overset{(\ast)}{=} \sup_{f,g : [0,1] \to [0,1]} \lvert \langle T_W f, g \rangle \rvert \end{aligned}$$

Since we are taking the supremum, it is equivalent to taking the supremum over all $[0,1]$-valued ($L^\infty$) functions $f$ and $g$.

To get $(\ast)$, we notice that $\int_0^1 W(u,v) f(u)\, du$ is an [[integral linear operator]] with kernel $W$. We denote this by $T_W$, so that

$$T_W f(v) = \int_0^1 W(u,v) f(u)\, du$$
  • or equivalently, $T_W f = \int_0^1 W(u, \cdot) f(u)\, du$
    We can then see that the definition becomes the absolute value of the inner product of two $L^\infty$ functions.

Note that $\lVert W \rVert_{\infty,1}$ is an induced [[operator norm]] of the [[graphon]], mapping functions from $L^\infty$ to $L^1$.

$$\lVert W \rVert_{\infty,1} = \sup_{-1 \leq g \leq 1} \lVert T_W g \rVert_{1} = \sup_{-1 \leq g \leq 1} \int \lvert T_W g(x) \rvert\, dx = \sup_{-1 \leq f,\, g \leq 1} \langle T_W g, f \rangle$$
  • we omit the denominator in our usual definition of the [[operator norm]] since $g$ is bounded between $-1$ and $1$, so $g \in L^\infty$
  • We can rewrite the integral using $f$ to match the definition of the $L^1$ norm, since we are taking the supremum
  • For more about $L^p$ spaces see wikipedia (we are hand-wavy/cursory about the functional analysis details in this class)

We can then rewrite this as

$$\begin{aligned} \lVert W \rVert_{\infty,1} &= \sup_{\substack{0 \leq f^+,\, f^- \leq 1 \\ 0 \leq g^+,\, g^- \leq 1}} \langle T_W(g^+ - g^-),\, f^+ - f^- \rangle \\ &\overset{(\ast)}{\leq} \sup_{0 \leq g^+, f^+ \leq 1} \langle T_W g^+, f^+ \rangle + \sup_{0 \leq g^+, f^- \leq 1} \langle T_W g^+, f^- \rangle + \sup_{0 \leq g^-, f^+ \leq 1} \langle T_W g^-, f^+ \rangle + \sup_{0 \leq g^-, f^- \leq 1} \langle T_W g^-, f^- \rangle \\ &\overset{(\ast\ast)}{\leq} 4\, \lVert W \rVert_{\square} \end{aligned}$$

Where we get $(\ast)$ by the [[triangle inequality]] and $(\ast\ast)$ from the definition of $\lVert \cdot \rVert_{\square}$.

  • Note that this is a loose bound!

And this gives us the desired result for (2).

Proof of (1)

Proving (1) is more involved and uses more functional analysis.

Use the Riesz-Thorin interpolation theorem for complex $L^p$ spaces:

$$\lVert W \rVert_{p,q} \leq \lVert W \rVert_{p_0,q_0}^{1-\theta}\, \lVert W \rVert_{p_1,q_1}^{\theta}$$

Where

  • $\theta = \min\left(1 - \frac{1}{p}, \frac{1}{q}\right)$ and
  • $p_0, q_0 \in [1, \infty)$
  • with $\frac{1}{p} = \frac{1-\theta}{p_0}$ and
  • $1 - \frac{1}{q} = (1-\theta)\left(1 - \frac{1}{q_0}\right)$ and
  • $p_1 = \infty$, $q_1 = 1$.

Define the [[operator norm]]
$$\lVert W \rVert_{\square, \mathbb{C}} = \sup_{\substack{f,g : [0,1] \to \mathbb{C} \\ \lVert f \rVert_{\infty},\, \lVert g \rVert_{\infty} \leq 1}} \left\lvert \int_0^1 \int_0^1 W(u,v)\, f(u)\, g(v)\, du\, dv \right\rvert$$
So for complex functions, we can see that

for complex functions: $\lVert W \rVert_{\infty,1} = \lVert W \rVert_{\square,\mathbb{C}} \leq 2\, \lVert W \rVert_{\infty,1}$ for real functions

We then have

$$\lVert W \rVert_{p_0,q_0} \leq \lVert W \rVert_{\infty} \leq 1$$

Since $\lVert W \rVert_{\infty} \leq 1$, this gives the desired result.

convergence in the 2 norm

idea/preview
If we have a sequence of graphs that converges to a [[graphon]], then the [[graph shift operator]] (typically $A$) converges in an "operator sense" to the [[integral linear operator]] that we defined above, $T_W f(v) = \int_0^1 W(u,v) f(u)\, du$.

see [[graph shift operators converge to graphon shift operators]]
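A quick numeric illustration of this "operator sense" convergence, under my own assumed example graphon $W(u,v) = uv$ (so $T_W f(v) = v \int_0^1 u f(u)\, du$; for $f \equiv 1$, $T_W f(v) = v/2$): the normalized adjacency of a sampled graph applied to a sampled signal approximates $T_W f$.

```python
import numpy as np

n = 1000
u = (np.arange(n) + 0.5) / n      # midpoint samples of [0, 1]
A = np.outer(u, u)                # A_ij = W(u_i, u_j) for W(u, v) = u*v
f = np.ones(n)                    # samples of the signal f(u) = 1

Tf_discrete = (A @ f) / n         # (1/n) A f approximates T_W f at the samples
Tf_exact = u / 2                  # closed form of T_W f for this W and f

print(np.max(np.abs(Tf_discrete - Tf_exact)))   # tiny discretization error
```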

Takeaways

  • We defined [[convergent graph sequence]] converging to [[graphon]]
  • If two graphs belong to the same sequence, we know they are similar using the [[homomorphism density]]
  • Defined a metric with the cut metric to compute distances between graphs
  • If we have convergence in some $L^p$ spaces, we also have convergence in the [[cut norm]] and vice versa.
  • We can look at graphons as operators

can we use graphons to sample subgraphs?

Recall the question we had

Question

What if the graph is too large and I don’t have enough resources to train on it? (recall GNN forward pass is O(LK|E|) where L is the number of layers and K is the order of the filter polynomial)

we can think of graphons as generative models as well

Example

Graphon as a limiting object for some sequence of graphs
20250324-2025-03-24-graphs-2.png
Graphon as generating object for finite graphs
20250326-graphs.png

options for sampling nodes and edges from graphons

see [[ways to sample graphs from graphons]]

template graph

A template graph $G_n$ is a way to sample a graph from a graphon. The nodes are defined by partitioning $[0,1]$ into a regular grid (regular partition), in the same way we partitioned for induced graphons:

$$I_k = \begin{cases} \left[\frac{k-1}{n}, \frac{k}{n}\right) & 1 \leq k \leq n-1 \\ \left[\frac{n-1}{n}, 1\right] & k = n \end{cases}$$

So that $I_1 \cup I_2 \cup \dots \cup I_n = [0,1]$, and the node labels are $u_j = \frac{j-1}{n}$ for each $j$.

The [[adjacency matrix]] is then given as

$$[A_n]_{ij} = W(u_i, u_j)$$

Where W is the [[graphon]] that we sample from.

This is a complete, weighted graph with edge weights coming from the [[graphon]] evaluated at each node pair $(u_i, u_j) \in [0,1]^2$.

This is the simplest way to sample a graph. We can think of it as the graph sampling counterpart to inducing a graphon.

see [[template graph]]
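A minimal sketch of template-graph sampling (the graphon $W(u,v) = uv$ is an assumed example, not one from the lecture):

```python
import numpy as np

def template_graph(W, n):
    """Template graph with n nodes: labels u_j = (j-1)/n on the regular
    partition, edge weights A_ij = W(u_i, u_j). Fully deterministic."""
    u = np.arange(n) / n
    return W(u[:, None], u[None, :])

W = lambda u, v: u * v            # assumed example graphon
A = template_graph(W, 4)
print(A[1, 2])                    # -> W(0.25, 0.5) = 0.125
```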

weighted graphs

Weighted graphs are a way to sample a graph from a graphon. Each node label $u_j$ is sampled uniformly at random from the unit interval.

The edges are then defined in the same way as a [[template graph]]

$$[A_n]_{ij} = W(u_i, u_j)$$

see [[weighted graph (sample)]]

Def

Fully random graphs are a way to sample a graph from a graphon. Like a [[weighted graph (sample)]], the nodes are sampled uniformly from the unit interval ($u_i \sim U[0,1]$).

The edges are sampled as

$$[A_n]_{ij} = [A_n]_{ji} \sim \mathrm{Bernoulli}(W(u_i, u_j))$$

So the resulting graph is undirected and unweighted.

see [[fully random graph]]
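A sketch of the fully random sample; the intermediate matrix `P` is exactly the weighted sample built from the same random labels, and $W(u,v) = uv$ is again an assumed example:

```python
import numpy as np

def fully_random_graph(W, n, rng):
    """u_i ~ U[0,1] i.i.d.; each edge {i, j} with i < j is an independent
    Bernoulli(W(u_i, u_j)) draw. Returns a symmetric 0/1 adjacency matrix."""
    u = rng.uniform(0, 1, size=n)
    P = W(u[:, None], u[None, :])          # the weighted sample (edge probabilities)
    coins = rng.random((n, n)) < P         # Bernoulli coin flips
    A = np.triu(coins, k=1)                # keep i < j only (no self-loops)
    return (A | A.T).astype(int)           # symmetrize: undirected, unweighted

rng = np.random.default_rng(0)
W = lambda u, v: u * v                     # assumed example graphon
A = fully_random_graph(W, 5, rng)
print(A)
```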

Note

Generally we are interested in the fully random graphs.

All of the graphs above converge to W in some sense.

Template graphs: trivial

Theorem

Let $\{G_n\}$ be a sequence of template graphs sampled from [[graphon]] $W$. Then $G_n \to W$ as $n \to \infty$.

Proof

Trivial

  • The definition of template graphs is fully deterministic, so there is no randomness here. Since this is the reverse of thinking of graphons as graphs with an uncountable number of nodes, it is easy to see that the induced graphons $W_n$ converge to $W$ in $L^2$, and [[convergence in L-p implies convergence in cut norm]]

see [[template graphs converge to the graphon]]

Theorem

Let $\{G_n\}$ be a sequence of weighted graphs sampled from [[graphon]] $W$. With probability at least $1 - \exp\left(-\frac{n}{2 \log n}\right)$,

$$\delta_{\square}(G_n, W) \leq \frac{20}{\sqrt{\log n}}$$

Where $\delta_{\square}$ is the cut metric. ie, $G_n \to W$ in probability.

see [[weighted sampled graphs converge to the graphon in probability]]

random

Theorem

Let $\{G_n\}$ be a sequence of "fully random" graphs sampled from [[graphon]] $W$. With probability at least $1 - \exp\left(-\frac{n}{2 \log n}\right)$,

$$\delta_{\square}(G_n, W) \leq \frac{22}{\sqrt{\log n}}$$

Where $\delta_{\square}$ is the cut metric. ie, $G_n \to W$ in probability.

see [[fully random graphs converge to the graphon in probability]]

Note

We'll look at the proofs for these later.

Tip

If a graph is too big to train on it, we can look at the [[induced graphon]] and subsample new graphs from it to train on instead.


GNN continuity

Question

How do we determine whether a GNN is continuous with respect to the cut metric?

In order to transfer from small graphs to large graphs, we need continuity. In particular, we need Lipschitz continuity with a not-very-large constant.

Question

Do GNNs converge?

This question has a long answer. To start to answer it, we need to introduce graphon signals and graphon signal processing.

road map

  • generalize up from graphs to graphons
  • graph convolution to [[graphon convolution]]
  • intuition behind why GNNs generalize well with scale
  • then look at bounds
    • finite sampling bounds
    • minimum sample sizes
    • error bounds

Note

We focus a lot on information-processing pipelines and GNNs, but there are also many other applications for graphons. Many people are interested in graphon estimation, ie, estimating the underlying distribution for some graphs.

Graphon signal

A graphon signal is defined as a function

$$X : [0,1] \to \mathbb{R}$$
Note

contrast this with [[graph signals]], which we defined as $x \in \mathbb{R}^n$.

Note

We focus on signals in $L^2$, or "finite energy signals" $X \in L^2([0,1])$:

$$\int_0^1 \lvert X(u) \rvert^2\, du \leq B < \infty$$

see [[graphon signal]]

Like a [[graphon]], a [[graphon signal]] is the limit of a [[convergent sequence of graph signals]]. Like when defining our [[cut distance]] for differently sized graphs, the [[graph signals]] may have different sizes since the dimension changes with the number of nodes n. Similarly, we solve this with [[induced graphon signal]]

Induced Graphon Signal

Let $(G_n, x_n)$ be a graph signal. The induced graphon signal is defined as the pair $(W_n, X_n)$ where $W_n$ is the [[induced graphon]] and

$$X_n(u) = \sum_{i=1}^{n} [x_n]_i\, \mathbb{1}(u \in I_i)$$

Where $\mathbb{1}$ is the indicator function and

$$I_k = \begin{cases} \left[\frac{k-1}{n}, \frac{k}{n}\right) & 1 \leq k \leq n-1 \\ \left[\frac{n-1}{n}, 1\right] & k = n \end{cases}$$
Example

20250324-induced-graphon.png
Both representations of 3-node graphs ([[induced graphon]] above and induced graphon signal below)
20250326-graph.png

see [[induced graphon signal]]
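The induced graphon signal is the same step-function trick applied to the signal vector. A minimal sketch (names are my own):

```python
import numpy as np

def induced_signal(x):
    """Step-function graphon signal X_n induced by graph signal x in R^n:
    X_n(u) = [x]_i for u in I_i = [(i-1)/n, i/n); last interval closed."""
    n = len(x)
    def X(u):
        i = min(int(np.floor(u * n)), n - 1)
        return x[i]
    return X

x = np.array([2.0, -1.0, 3.0])
X = induced_signal(x)
print(X(0.0), X(0.5), X(0.99))   # -> 2.0 -1.0 3.0
```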


Review

#flashcards/math/dsg
