This is a library containing PyTorch code for creating graph neural network (GNN) models. The library provides several sample implementations.

If you are interested in using this library, please read about its architecture and how to define GNN models or follow this tutorial.

Note that ptgnn takes care of defining the whole pipeline, including data-wrangling tasks such as data loading and tensorization. It also defines PyTorch `nn.Module`s for the neural network operations. These are independent of the `AbstractNeuralModel` classes and can be used like any other PyTorch `nn.Module`, if one wishes to do so.
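To illustrate what a standalone GNN `nn.Module` looks like, here is a minimal sketch of one round of sparse message passing over an edge list. This is an illustrative example only; the class name, constructor, and forward signature are assumptions for this sketch and are not ptgnn's actual API.

```python
import torch
import torch.nn as nn


class SimpleMessagePassingLayer(nn.Module):
    """One round of sparse message passing over an edge list.

    Illustrative sketch only -- not ptgnn's actual API.
    """

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.message_fn = nn.Linear(hidden_dim, hidden_dim)
        self.update_fn = nn.GRUCell(hidden_dim, hidden_dim)

    def forward(self, node_states: torch.Tensor, edges: torch.Tensor) -> torch.Tensor:
        # edges: [num_edges, 2] tensor of (source, target) node indices.
        messages = self.message_fn(node_states[edges[:, 0]])
        # Sum incoming messages per target node (sparse aggregation).
        aggregated = torch.zeros_like(node_states)
        aggregated.index_add_(0, edges[:, 1], messages)
        # Update each node's state from its aggregated messages.
        return self.update_fn(aggregated, node_states)


# Usage: 4 nodes with hidden size 8, and a small cyclic edge list.
layer = SimpleMessagePassingLayer(hidden_dim=8)
states = torch.randn(4, 8)
edges = torch.tensor([[0, 1], [1, 2], [2, 3], [3, 0]])
new_states = layer(states, edges)  # shape: [4, 8]
```

Because such a layer only depends on `torch`, it can be dropped into any PyTorch model, which is the design point the paragraph above makes.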

The library is mainly engineered to be fast for sparse graphs. For example, for the Graph2Class task (discussed below) on a V100 with the default hyperparameters and architecture, ptgnn can process about 82 graphs/sec (209k nodes/sec and 1,129k edges/sec) during training and about 200 graphs/sec (470k nodes/sec and 2,527k edges/sec) during testing.

#### Implemented Tasks

All task implementations can be found in the ptgnn.implementations package. Detailed instructions on the data and the training steps can be found here. We welcome external contributions. The following GNN-based tasks are implemented:

- **PPI** The PPI (protein-protein interaction) task as described by Zitnik and Leskovec, 2017.
- **VarMisuse** A re-implementation of the variable-misuse task of Allamanis *et al.*, 2018.
- **Graph2Sequence** A re-implementation of the GNN→GRU model of Fernandes *et al.*, 2019.
- **Graph2Class** Classify (label) a subset of the input nodes into classes, similar to Graph2Class in Typilus.

The tutorial gives a step-by-step example for coding the Graph2Class model.
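The core idea of Graph2Class, classifying only a labeled subset of the graph's nodes, can be sketched as follows. The variable names and the stand-in for the GNN output are hypothetical; this is not ptgnn's implementation.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of node-subset classification (the Graph2Class idea):
# a GNN produces per-node representations, and only the annotated subset
# of nodes is classified and contributes to the loss.
num_nodes, hidden_dim, num_classes = 10, 16, 5

# Stand-in for the GNN output; in practice this comes from message passing.
node_representations = torch.randn(num_nodes, hidden_dim)

classifier = nn.Linear(hidden_dim, num_classes)

# Only a subset of nodes (e.g. variable-usage sites) carries a label.
labeled_node_idxs = torch.tensor([1, 4, 7])
logits = classifier(node_representations[labeled_node_idxs])  # [3, num_classes]

# Cross-entropy over just the labeled subset.
targets = torch.tensor([0, 2, 4])
loss = nn.functional.cross_entropy(logits, targets)
```

Indexing the representation matrix before the classifier keeps the loss restricted to the labeled nodes, while all nodes still participate in message passing.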

GitHub repository: https://github.com/microsoft/ptgnn