Speakers: Beatrice Bevilacqua, PhD student at Purdue University (USA), and Fabrizio Frasca, PhD student at Imperial College London (UK)
Where: Zoom
When: November 11th, 12:00 - 13:00

While considered the leading architectures for Deep Learning on graph-structured data, Message Passing Neural Networks (MPNNs) are intrinsically bounded in their expressive power by the Weisfeiler-Leman (WL) heuristic for graph isomorphism. This limitation has recently ignited prolific research aimed at breaking this fundamental bottleneck. Yet, the most prominent approaches either suffer from high computational complexity and weak inductive biases, or require some form of domain knowledge for their effective application. In this talk, which we structure in two parts, we argue that these limitations can be mitigated by a principled modelling of subgraphs.

We observe that graphs not distinguishable by MPNNs often contain distinguishable subgraphs. In the first part of the talk we build upon this intuition to design a novel framework dubbed Equivariant Subgraph Aggregation Networks (ESAN) [1], which prescribes representing a graph as a bag of subgraphs returned by a pre-specified policy and processing it with a suitable equivariant architecture. We show that this approach effectively increases the expressive power of both MPNNs and more expressive architectures, and study the impact of its design choices.
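To make the intuition concrete, here is a minimal, self-contained sketch of the bag-of-subgraphs idea. It is illustrative only, not the authors' implementation: the subgraph policy is node deletion, and a toy 1-WL color refinement stands in for a shared MPNN encoder. The example shows two classic graphs that 1-WL cannot tell apart (a 6-cycle versus two disjoint triangles) becoming distinguishable once each is represented as the multiset of its subgraphs' signatures.

```python
# Illustrative sketch of ESAN's "bag of subgraphs" representation.
# Policy: node deletion -- each subgraph is the graph with one node removed.
# Encoder: a toy 1-WL color refinement, standing in for a shared MPNN.

def node_deletion_policy(adj):
    """Return the bag of subgraphs obtained by deleting each node in turn."""
    bag = []
    for v in adj:
        sub = {u: {w for w in nbrs if w != v}
               for u, nbrs in adj.items() if u != v}
        bag.append(sub)
    return bag

def wl_colors(adj, rounds=3):
    """Toy 1-WL refinement: returns the sorted multiset of final node colors."""
    colors = {u: len(nbrs) for u, nbrs in adj.items()}  # initialize with degrees
    for _ in range(rounds):
        colors = {u: hash((colors[u], tuple(sorted(colors[w] for w in adj[u]))))
                  for u in adj}
    return sorted(colors.values())

def esan_signature(adj):
    """Represent a graph as the sorted multiset of its subgraphs' WL signatures."""
    return sorted(tuple(wl_colors(sub)) for sub in node_deletion_policy(adj))

# Two graphs 1-WL cannot separate: a 6-cycle vs. two disjoint triangles.
C6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
two_triangles = {0: {1, 2}, 1: {0, 2}, 2: {0, 1},
                 3: {4, 5}, 4: {3, 5}, 5: {3, 4}}

assert wl_colors(C6) == wl_colors(two_triangles)            # 1-WL fails here
assert esan_signature(C6) != esan_signature(two_triangles)  # the bag succeeds
```

Deleting a node from the 6-cycle yields a path, while deleting a node from the two triangles yields a triangle plus an edge; these subgraphs have different WL signatures, which is exactly the extra discriminative power the bag provides.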

Interestingly, we notice a surge of concurrent approaches which, sometimes unwittingly, make use of subgraphs to obtain powerful graph representations. In the second part of the talk we focus on the most prominent form of these methods: those where subgraphs are directly "generated" by nodes in the original graph (node-based methods). A novel symmetry analysis allows us to unify and better characterise this class of approaches: we prove an upper bound on their expressive power and conceive a framework that serves as a design space for equivariant node-based subgraph architectures. Finally, we introduce a novel instantiation of this framework: a new method dubbed SUN [2], which captures previous architectures while providing better empirical performance on multiple benchmarks.
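A short, hypothetical sketch of what "subgraphs generated by nodes" can mean in practice: one common node-based policy is node marking, where each node produces a copy of the graph in which only that node carries a distinguished initial feature, and a shared WL-style encoder processes every marked copy. The code below is an assumption-laden illustration of this policy, not the SUN architecture itself.

```python
# Illustrative node-based subgraph policy: node marking.
# Each node v "generates" a subgraph (here: the full graph with v marked),
# and a toy 1-WL encoder processes each marked copy; the graph is then
# represented by the multiset of the per-node results.

def wl_colors_marked(adj, marked, rounds=3):
    """1-WL refinement in which one node starts from a distinguished color."""
    colors = {u: (len(adj[u]), u == marked) for u in adj}
    for _ in range(rounds):
        colors = {u: hash((colors[u], tuple(sorted(colors[w] for w in adj[u]))))
                  for u in adj}
    return sorted(colors.values())

def node_marking_signature(adj):
    """Bag of marked-WL signatures: one generated subgraph per node."""
    return sorted(tuple(wl_colors_marked(adj, v)) for v in adj)

# The same 1-WL-indistinguishable pair: a 6-cycle vs. two disjoint triangles.
C6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
two_triangles = {0: {1, 2}, 1: {0, 2}, 2: {0, 1},
                 3: {4, 5}, 4: {3, 5}, 5: {3, 4}}

assert node_marking_signature(C6) != node_marking_signature(two_triangles)
```

Marking breaks the symmetry that defeats plain 1-WL: in the 6-cycle the mark propagates to every node, while in the two-triangles graph the unmarked triangle never sees it, so the resulting signatures differ.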

References