Model Description

Multi-stage Message Passing

In order to efficiently define GNN models, we propose a novel high-level abstraction called the Multi-Stage Message Passing graph (hereafter MSMP graph). This abstraction mainly addresses the principles of simplicity and versatility: it shields users from the mathematical formulation behind GNNs and from the programming obstacles imposed by traditional Deep Learning languages. Additionally, it addresses the principle of reliability by providing a full picture of the message passing within GNNs, clearly identifying the different message-passing phases and the relationships between the entities involved.

The MSMP graph abstraction provides an interface with a flexible modular design, providing support for any variant of state-of-the-art GNN architectures as well as custom combinations of individual components present in existing GNNs (e.g., messages, aggregations, updates, loss, normalization functions). In the networking field, GNN models are usually non-standard. They often need to be highly customized in order to adapt to complex network scenarios, usually addressing challenging modeling or optimization problems. Thus, proposed solutions typically require tailor-made GNN architectures including different element types in graphs (e.g., forwarding devices, links) and message-passing schemes divided into multiple phases arranged sequentially. In this context, and in line with the focus on network applications of IGNNITION, one main novelty of the proposed MSMP graph abstraction is that it provides support to define message passings divided into multiple stages and including different types of elements (also called entities). To the best of our knowledge, this makes it possible to implement all the existing GNN architectures applied to networking to date.

General MSMP definition

In particular, with the MSMP graph abstraction, a GNN design can be intuitively defined by a set of graph entities and how they relate to each other in a sequential order, which eventually describes a message-passing iteration of the GNN. The figure below illustrates an example of a GNN with three different entity types (e1, e2 and e3). In this MSMP graph, we can observe two differentiated stages in the message passing. In the first stage, entities of type e1 and e2 send their hidden states to their neighbors of type e3 according to the connections of the input graph. Then, in the second stage, e3 entities send their states to the linked entities of type e1 and e2. This process is repeated for a number of iterations T to make the states converge to some fixed values.

Thus, IGNNITION supports any GNN that can be represented as an MSMP graph. This broadly includes the main state-of-the-art GNN architectures and all their variants, such as Graph Attention Networks, Graph Convolutional Networks, Gated Neural Networks, Graph LSTM, Typed Graph Networks, Hypergraph Neural Networks, and many others.

In order to further illustrate this abstraction, we shall focus on RouteNet, which is a representative GNN model applied to networking. RouteNet was proposed as a solution to efficiently model performance in networks. To do this, in the original paper the authors formulate a complex mathematical model with a hypergraph that includes two types of entities: (i) the links of the input topology, and (ii) the end-to-end paths formed by the routing configuration. However, with MSMP graphs RouteNet can be easily defined by a two-stage message passing scheme including the two entities involved and how they exchange their states (as shown in the figure below).

RouteNet MSMP definition

In particular, in this GNN each link first shares its state with all its related paths (i.e., the paths that traverse the link). Afterward, each path sends its state to its related links (i.e., the links that form the path). Note that in this two-stage message passing, the input graph of the GNN does not have a direct mapping to the network topology itself, but instead graph nodes are the different entities included in the MSMP graph (i.e., links and paths), and edges are the relationships between these elements. Thus, messages are not necessarily sent physically over the network. They are just logical objects that represent the exchange of hidden states between links and paths.

In the end, this abstraction enables the creation of a simple interface for users, who can easily define their own GNN models adapted to specific problems, just by defining the entities involved and their relationships. Lastly, IGNNITION produces an efficient implementation of the designed GNN in TensorFlow.
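To give a flavour of how RouteNet's two-stage message passing could translate into IGNNITION's model description (the YAML syntax is introduced step by step in the next section), a rough sketch might look as follows. The entity names (link, path), the neural-network names and the aggregation choices are illustrative simplifications rather than the exact RouteNet definition; in particular, the original model aggregates link states as an ordered sequence rather than a plain sum.

message_passing:
    num_iterations: 8
    stages:
        # Stage 1: each link sends its state to all the paths that traverse it
        - stage_message_passings:
            - destination_entity: path
              source_entities:
                  - name: link
                    message:
                        type: direct_assignment
              aggregation:
                  - type: sum
              update:
                  type: recurrent_neural_network
                  nn_name: path_update
        # Stage 2: each path sends its state back to the links that form it
        - stage_message_passings:
            - destination_entity: link
              source_entities:
                  - name: path
                    message:
                        type: direct_assignment
              aggregation:
                  - type: sum
              update:
                  type: recurrent_neural_network
                  nn_name: link_update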

Generate your GNN

In order to define the architecture of the GNN we aim to create, the user is asked to define a model_description.yml file. This file will contain several sections that define different aspects of our GNN. More specifically, the sections we must fill in are: the entity definitions, the message-passing phase, the readout, and the internal neural networks.
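As a quick orientation, the overall layout of the file could be sketched as follows; the contents of each section are explained in the steps below, and the exact name of the top-level key holding the entity list is an assumption here:

entities:           # Step 1: entity (node type) definitions (top-level key name assumed)
- entity: ...
message_passing:    # Step 2: the message-passing phase
    ...
readout:            # Step 3: the readout function
- ...
neural_networks:    # Step 4: the internal neural networks
- ...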

Let us now go into a little more detail on each of these sections.

Step 1: Entity definition

When designing a GNN, we might find situations in which not all the nodes in the graph behave like or represent the same object. In such cases, we might need to consider different behaviours depending on the type of node in question. To capture this, we shall refer to an entity as a type of node in the graph.

MSMP definition

From this example, we can observe that two entities must be created. Consequently, our model_description file must include a definition for each of them. Let us briefly describe how this can be done.

- entity: entity1
  state_dimension: 32
  initial_state:
  - type: build_state
    input: [feature1, feature2]

- entity: entity2
  state_dimension: 32
  initial_state:
  - type: build_state
    input: [feature3]

In the code above, we can see that we simply have to create a list of two entities (this will depend on the problem). Then, for each of the entities we first indicate its name, which we will use throughout the rest of the definition of the GNN to refer to this type of node. Additionally, we provide the dimension of the states that each of these nodes will have. Finally, we must indicate how the initial state is computed. For this definition, we must provide a list of “operations” which incrementally define the resulting initial state. For simplicity, in this example, we simply define an initial state with feature1 and feature2, and the remaining dimensions are padded with 0s. Note that we proceed similarly with entity2.

Step 2: Message passing definition

At this point, we must define the core part of the GNN algorithm, which is the neural message-passing phase. In this phase, we define how the different nodes in the graph exchange messages with each other, in order to produce node-embeddings that properly consider the structural information of the graph.

To do so, let us define some terminology that will help us easily describe potentially very complex GNNs.

What is a single message-passing?

The message-passing phase is the process of nodes from the graph sending messages to other nodes of the graph. Note, however, from the previous sections that in a complex setting we might have numerous different types of nodes in the graph which we want to consider independently. Hence, we must further generalize the idea of message passing to make the appropriate distinctions.

In this context, thus, we shall refer to a single message-passing as the process of the nodes of the source entity types (a, b, …, k) sending messages to a destination entity dest_entity.

In the simplest scenario, we might want to define a single message-passing as the process of nodes of type a sending messages to the nodes of type b. In other scenarios, however, entities a and b might be simultaneously sending messages to the nodes of another entity c.

How to define a single message-passing?

At this point, in order to illustrate this idea, let us suppose we are considering a single message-passing in which nodes from entities a and b simultaneously send messages to the corresponding nodes of entity c. For this, we must define the following functions:

Message function

A message function is defined from each of the source entities to the given destination entity. The message function defines how the source nodes form the messages that they send to their corresponding destination nodes. Below we provide a visualization of this process on an arbitrary graph with 3 different nodes.

MSMP definition
Aggregation function

Once we have defined the message function for each of the source entities (in this case, for source entities a and b respectively), we need to define the aggregation function. The aggregation function defines how each of the destination nodes takes all the messages received from entities a and b and produces one single input. For this, IGNNITION, as seen before, allows a pipeline of operations which incrementally lets users define potentially very complex strategies for this aggregation function. Below we illustrate this process, for simplicity, with an aggregation function consisting of a single operation that sums all the messages into a single final input.

MSMP definition
Update function

Finally, we reach the point in which each of the destination nodes has produced an aggregated input of all the messages received. It just remains to create the corresponding update function of the destination entity, which describes how it will use this information to update its current hidden state. Following the same schema used before, the illustration below graphically exemplifies this process.

MSMP definition

Using stages to define chronological orderings

So far, we have talked about how we can create a single message-passing. One must note, however, that a complex GNN may contain many of these single message-passings. Hence, we need to be able to properly order them chronologically.

In order to simplify this ordering, we create what we call a stage. A stage symbolizes a given time-step of the algorithm. Then, to create our GNN, we can define several stages and assign single message-passings to a given stage.

To illustrate this, let us suppose we have created three single message-passings from the entities we have in the graph. Then, for instance, we might want to perform the first two single message-passings simultaneously and, once they are done, execute the third one.

This can be done by creating two different stages. We then assign the first two single message-passings to the first stage (first time-step) and then the third single message-passing to the second stage (second time-step).

stages definition
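In terms of the YAML keywords introduced in the next subsection, this chronological ordering could be sketched roughly as follows, where each ... stands for the definition of one single message-passing:

stages:
    # First time-step: the first two single message-passings, executed simultaneously
    - stage_message_passings:
        - ...   # single message-passing 1
        - ...   # single message-passing 2
    # Second time-step: the third single message-passing
    - stage_message_passings:
        - ...   # single message-passing 3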

Defining the message-passing phase

First of all, we must define the number of iterations (num_iterations). This indicates the number of times that all the given stages will perform all their single message-passings. Afterwards, we can proceed to define the list of stages. For the sake of simplicity, let us define only one; to define more, we simply include more elements in the list of stages.

To define a stage, the user must define all the stage_message_passings, these being all the single message-passings that must be executed during this time step (all of them simultaneously). Note that for each of them we define the three functions mentioned before (message function, aggregation function and update function). Visit keywords to get more information about the exact keywords that you can use in these sections.

message_passing:
    num_iterations: 8
    stages:
        - stage_message_passings:
            - destination_entity: c
              source_entities:
                  - name: a
                    message:
                        type: direct_assignment
                  - name: b
                    message:
                        type: direct_assignment
              aggregation:
                  - type: sum
              update:
                  type: recurrent_neural_network
                  nn_name: recurrent1

Step 3: Readout definition

Once we have defined the message passing, it remains to define the readout. The readout function is in charge of taking some or all of the final states computed during the message passing and using them appropriately to predict the final label. For this, again, we allow full flexibility for this definition in the form of a pipeline of operations (as seen before). For the sake of simplicity, let us suppose we aim to make a prediction over a global property of the graph. For this, we want to sum together all the final states of the nodes of type a, and then pass the result to a neural network that computes the output_label. In this case, we would need to define two operations: one that sums all the states together, and another that passes this output to the neural network. Below we show how this would be done.

readout:
- type: pooling
  type_pooling: sum
  input: [a]
  output_name: pooled_a
- type: feed_forward
  input: [pooled_a]
  nn_name: readout_model
  output_label: my_label

As you can see, we make use of the field output_name to define a name for the output of the first operation, which we can then use as input for the second operation.
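For completeness, if instead of a global property we wanted to predict a label for every node of type a, a readout with a single feed_forward operation applied directly over the states of a could be sketched as follows. This assumes that feed_forward can take entity states directly as input, in the same way pooling does above; the network and label names are placeholders:

readout:
- type: feed_forward
  input: [a]
  nn_name: node_readout_model
  output_label: my_node_label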

Step 4: Internal neural networks definition

Finally, it only remains to define the neural networks. Notice that in all the previous sections we have not explicitly defined the actual architecture of the neural networks, but rather only referenced them by name. In this section, we must indicate the actual architecture of each of them.

For instance, we show below how to create the readout_model Neural Network that we referenced in the readout. For this, we must define each of its layers.

neural_networks:
- nn_name: readout_model
  nn_architecture:
  - type_layer: Dense
    units: 256
    activation: sigmoid
  - type_layer: Dropout
    rate: 0.5
  - type_layer: Dense
    units: 1

In this example, we are linking the name readout_model to a neural network with three layers of types Dense, Dropout and Dense. This definition is done through a list of layers (which can be arbitrarily long). An important consideration is that IGNNITION allows the use of all the layer types present in the Keras library. Moreover, each of these layers can have numerous parameters that tune its properties. Again, we support all the parameters accepted by Keras for each respective layer. This is done by simply adding them to the properties of each layer (e.g., the activation function in the first Dense layer). If a parameter is not defined (and it is stated to be optional in the Keras documentation), then IGNNITION will use the default value used by Keras.
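The recurrent1 network referenced in the update function of the message-passing example would also be declared in this same list. A minimal sketch could look as follows, assuming a Keras recurrent cell such as GRU and a number of units matching the state dimension of the destination entity (both choices are illustrative):

- nn_name: recurrent1
  nn_architecture:
  - type_layer: GRU
    units: 32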

Putting it into practice

So far, this section has covered in a very general way how to define a GNN. To fully get your hands on this topic, we recommend you check our quick tutorial, where we put all these concepts into practice to solve the specific problem of finding the shortest path in a graph.