models
BaseModel

class cogdl.models.base_model.BaseModel
Bases: torch.nn.modules.module.Module

forward(*args)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.
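To make the hook note concrete, here is a minimal sketch of subclassing BaseModel; the layer sizes and the assumption that BaseModel.__init__ takes no arguments are illustrative, not part of the documented API.

    import torch
    from cogdl.models.base_model import BaseModel

    class TwoLayerNet(BaseModel):
        """Minimal BaseModel subclass (illustrative only)."""
        def __init__(self, in_feats, hidden_size, out_feats):
            super().__init__()  # assumed: BaseModel.__init__ takes no arguments
            self.fc1 = torch.nn.Linear(in_feats, hidden_size)
            self.fc2 = torch.nn.Linear(hidden_size, out_feats)

        def forward(self, x):
            return self.fc2(torch.relu(self.fc1(x)))

    model = TwoLayerNet(16, 32, 4)
    x = torch.randn(8, 16)
    y = model(x)  # call the instance, not model.forward(x), so registered hooks run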
Supervised Model

class cogdl.models.supervised_model.SupervisedHeterogeneousNodeClassificationModel
Embedding Model

class cogdl.models.emb.hope.HOPE(dimension, beta)
Bases: cogdl.models.base_model.BaseModel
The HOPE model from the "Asymmetric Transitivity Preserving Graph Embedding" paper.
Args:
    hidden_size (int): The dimension of node representation.
    beta (float): Parameter in the Katz decomposition.

model_name = 'hope'
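A hypothetical usage sketch for the embedding models in this section, using HOPE; the example graph, and the assumption that train(G) (documented for the models below) returns the embedding matrix, are not taken from the API text.

    import networkx as nx
    from cogdl.models.emb.hope import HOPE

    G = nx.karate_club_graph()
    model = HOPE(dimension=16, beta=0.01)
    emb = model.train(G)  # assumed: returns a (num_nodes, dimension) embedding matrix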
class cogdl.models.emb.spectral.Spectral(dimension)
Bases: cogdl.models.base_model.BaseModel
The Spectral clustering model from the "Leveraging social media networks for classification" paper.
Args:
    hidden_size (int): The dimension of node representation.

model_name = 'spectral'
train(G)
Trains the model on the input graph G and returns the learned node embeddings. Note that this overrides torch.nn.Module.train(mode), which by default only switches the module between training and evaluation mode.
class cogdl.models.emb.hin2vec.Hin2vec(hidden_dim, walk_length, walk_num, batch_size, hop, negative, epochs, lr, cpu=True)
Bases: cogdl.models.base_model.BaseModel
The Hin2vec model from the "HIN2Vec: Explore Meta-paths in Heterogeneous Information Networks for Representation Learning" paper.
Args:
    hidden_size (int): The dimension of node representation.
    walk_length (int): The walk length.
    walk_num (int): The number of walks to sample for each node.
    batch_size (int): The batch size used when training Hin2vec.
    hop (int): The number of hops used to construct training samples in Hin2vec.
    negative (int): The number of negative samples for each meta-path pair.
    epochs (int): The number of training iterations.
    lr (float): The initial learning rate of SGD.
    cpu (bool): Whether to train Hin2vec on CPU rather than GPU.

model_name = 'hin2vec'
train(G, node_type)
Trains the model on graph G with the given node types and returns the learned node embeddings.
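A sketch of the heterogeneous case; the random graph and the assumption that node_type is supplied as one type id per node are illustrative, not confirmed by the API text.

    import networkx as nx
    from cogdl.models.emb.hin2vec import Hin2vec

    G = nx.fast_gnp_random_graph(100, 0.05)
    node_type = [i % 3 for i in G.nodes()]  # assumed format: one type id per node

    model = Hin2vec(hidden_dim=64, walk_length=10, walk_num=5, batch_size=32,
                    hop=2, negative=5, epochs=1, lr=0.025, cpu=True)
    emb = model.train(G, node_type)  # assumed: returns node embeddings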
class cogdl.models.emb.netmf.NetMF(dimension, window_size, rank, negative, is_large=False)
Bases: cogdl.models.base_model.BaseModel
The NetMF model from the "Network Embedding as Matrix Factorization: Unifying DeepWalk, LINE, PTE, and node2vec" paper.
Args:
    hidden_size (int): The dimension of node representation.
    window_size (int): The actual context size which is considered in the language model.
    rank (int): The rank used in approximating the normalized Laplacian.
    negative (int): The number of negative samples in negative sampling.
    is_large (bool): When the window size is large, use the approximated DeepWalk matrix for decomposition.

model_name = 'netmf'
train(G)
Trains the model on the input graph G and returns the learned node embeddings.
class cogdl.models.emb.distmult.DistMult(nentity, nrelation, hidden_dim, gamma, double_entity_embedding=False, double_relation_embedding=False)
Bases: cogdl.models.emb.knowledge_base.KGEModel
The DistMult model from the ICLR 2015 paper "Embedding Entities and Relations for Learning and Inference in Knowledge Bases" <https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/ICLR2015_updated.pdf>, adapted from KnowledgeGraphEmbedding <https://github.com/DeepGraphLearning/KnowledgeGraphEmbedding>.

model_name = 'distmult'
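The knowledge-graph embedding models in this section (DistMult, TransE, RotatE, ComplEx) share the constructor shown above. A minimal instantiation sketch with illustrative sizes; the scoring interface of KGEModel is not shown in this reference, so no forward call is attempted.

    from cogdl.models.emb.distmult import DistMult

    model = DistMult(nentity=1000, nrelation=50, hidden_dim=200, gamma=12.0)
    # double_entity_embedding / double_relation_embedding widen the embeddings;
    # e.g. RotatE is usually run with double_entity_embedding=True (assumption).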
class cogdl.models.emb.transe.TransE(nentity, nrelation, hidden_dim, gamma, double_entity_embedding=False, double_relation_embedding=False)
Bases: cogdl.models.emb.knowledge_base.KGEModel
The TransE model from the paper "Translating Embeddings for Modeling Multi-relational Data" <http://papers.nips.cc/paper/5071-translating-embeddings-for-modeling-multi-relational-data.pdf>, adapted from KnowledgeGraphEmbedding <https://github.com/DeepGraphLearning/KnowledgeGraphEmbedding>.

model_name = 'transe'
class cogdl.models.emb.deepwalk.DeepWalk(dimension, walk_length, walk_num, window_size, worker, iteration)
Bases: cogdl.models.base_model.BaseModel
The DeepWalk model from the "DeepWalk: Online Learning of Social Representations" paper.
Args:
    hidden_size (int): The dimension of node representation.
    walk_length (int): The walk length.
    walk_num (int): The number of walks to sample for each node.
    window_size (int): The actual context size which is considered in the language model.
    worker (int): The number of workers for word2vec.
    iteration (int): The number of training iterations in word2vec.

static add_args(parser: argparse.ArgumentParser)
Add model-specific arguments to the parser.

classmethod build_model_from_args(args) → cogdl.models.emb.deepwalk.DeepWalk
Build a new model instance.

model_name = 'deepwalk'
train(G: networkx.Graph, embedding_model_creator=gensim.models.word2vec.Word2Vec)
Samples random walks on the input graph G, fits the given word2vec-style embedding model on them, and returns the learned node embeddings.
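A sketch of the add_args / build_model_from_args pattern documented above; the exact flags that add_args registers, and whether they all carry defaults, are assumptions.

    import argparse
    from cogdl.models.emb.deepwalk import DeepWalk

    parser = argparse.ArgumentParser()
    DeepWalk.add_args(parser)      # registers the model-specific flags
    args = parser.parse_args([])   # rely on registered defaults (assumed to exist)
    model = DeepWalk.build_model_from_args(args)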
class cogdl.models.emb.rotate.RotatE(nentity, nrelation, hidden_dim, gamma, double_entity_embedding=False, double_relation_embedding=False)
Bases: cogdl.models.emb.knowledge_base.KGEModel
Implementation of the RotatE model from the paper "RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space" <https://openreview.net/forum?id=HkgEQnRqYQ>, adapted from KnowledgeGraphEmbedding <https://github.com/DeepGraphLearning/KnowledgeGraphEmbedding>.

model_name = 'rotate'
class cogdl.models.emb.gatne.GATNE(dimension, walk_length, walk_num, window_size, worker, epoch, batch_size, edge_dim, att_dim, negative_samples, neighbor_samples, schema)
Bases: cogdl.models.base_model.BaseModel
The GATNE model from the "Representation Learning for Attributed Multiplex Heterogeneous Network" paper.
Args:
    walk_length (int): The walk length.
    walk_num (int): The number of walks to sample for each node.
    window_size (int): The actual context size which is considered in the language model.
    worker (int): The number of workers for word2vec.
    epoch (int): The number of training epochs.
    batch_size (int): The size of each training batch.
    edge_dim (int): Number of edge embedding dimensions.
    att_dim (int): Number of attention dimensions.
    negative_samples (int): Negative samples for optimization.
    neighbor_samples (int): Neighbor samples for aggregation.
    schema (str): The metapath schema used in the model. Metapaths are split with ",", while node types within each metapath are joined with "-". For example: "0-1-0,0-1-2-1-0".

model_name = 'gatne'
train(network_data)
Trains the model on the given multiplex heterogeneous network data and returns the learned node embeddings.
class cogdl.models.emb.dgk.DeepGraphKernel(hidden_dim, min_count, window_size, sampling_rate, rounds, epoch, alpha, n_workers=4)
Bases: cogdl.models.base_model.BaseModel
The DeepGraphKernel model from the "Deep Graph Kernels" paper.
Args:
    hidden_size (int): The dimension of node representation.
    min_count (int): Parameter in word2vec.
    window (int): The actual context size which is considered in the language model.
    sampling_rate (float): Parameter in word2vec.
    iteration (int): The number of iterations in the WL method.
    epoch (int): The number of training iterations.
    alpha (float): The learning rate of word2vec.

forward(graphs, **kwargs)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'dgk'
class cogdl.models.emb.grarep.GraRep(dimension, step)
Bases: cogdl.models.base_model.BaseModel
The GraRep model from the "GraRep: Learning Graph Representations with Global Structural Information" paper.
Args:
    hidden_size (int): The dimension of node representation.
    step (int): The maximum order of transition probability.

model_name = 'grarep'
train(G)
Trains the model on the input graph G and returns the learned node embeddings.
class cogdl.models.emb.dngr.DNGR(hidden_size1, hidden_size2, noise, alpha, step, max_epoch, lr, cpu)
Bases: cogdl.models.base_model.BaseModel
The DNGR model from the "Deep Neural Networks for Learning Graph Representations" paper.
Args:
    hidden_size1 (int): The size of the first hidden layer.
    hidden_size2 (int): The size of the second hidden layer.
    noise (float): Denoising rate of the DAE.
    alpha (float): Parameter in DNGR.
    step (int): The max step in random surfing.
    max_epoch (int): The maximum number of epochs in the training step.
    lr (float): Learning rate in DNGR.

model_name = 'dngr'
train(G)
Trains the model on the input graph G and returns the learned node embeddings.
class cogdl.models.emb.pronepp.ProNEPP(filter_types, svd, search, max_evals=None, loss_type=None, n_workers=None)
Bases: cogdl.models.base_model.BaseModel

model_name = 'prone++'
class cogdl.models.emb.graph2vec.Graph2Vec(dimension, min_count, window_size, dm, sampling_rate, rounds, epoch, lr, worker=4)
Bases: cogdl.models.base_model.BaseModel
The Graph2Vec model from the "graph2vec: Learning Distributed Representations of Graphs" paper.
Args:
    hidden_size (int): The dimension of node representation.
    min_count (int): Parameter in doc2vec.
    window_size (int): The actual context size which is considered in the language model.
    sampling_rate (float): Parameter in doc2vec.
    dm (int): Parameter in doc2vec.
    iteration (int): The number of iterations in the WL method.
    epoch (int): The maximum number of epochs in the training step.
    lr (float): Learning rate in doc2vec.

forward(graphs, **kwargs)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'graph2vec'
class cogdl.models.emb.metapath2vec.Metapath2vec(dimension, walk_length, walk_num, window_size, worker, iteration, schema)
Bases: cogdl.models.base_model.BaseModel
The Metapath2vec model from the "metapath2vec: Scalable Representation Learning for Heterogeneous Networks" paper.
Args:
    hidden_size (int): The dimension of node representation.
    walk_length (int): The walk length.
    walk_num (int): The number of walks to sample for each node.
    window_size (int): The actual context size which is considered in the language model.
    worker (int): The number of workers for word2vec.
    iteration (int): The number of training iterations in word2vec.
    schema (str): The metapath schema used in the model. Metapaths are split with ",", while node types within each metapath are joined with "-". For example: "0-1-0,0-2-0,1-0-2-0-1".

model_name = 'metapath2vec'
train(G, node_type)
Trains the model on graph G with the given node types and returns the learned node embeddings. The schema format is illustrated below.
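To illustrate the schema format documented above: the string splits on "," into metapaths, and each metapath splits on "-" into node-type ids.

    schema = "0-1-0,0-2-0,1-0-2-0-1"
    metapaths = [path.split("-") for path in schema.split(",")]
    print(metapaths)
    # [['0', '1', '0'], ['0', '2', '0'], ['1', '0', '2', '0', '1']]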
class cogdl.models.emb.node2vec.Node2vec(dimension, walk_length, walk_num, window_size, worker, iteration, p, q)
Bases: cogdl.models.base_model.BaseModel
The node2vec model from the "node2vec: Scalable Feature Learning for Networks" paper.
Args:
    hidden_size (int): The dimension of node representation.
    walk_length (int): The walk length.
    walk_num (int): The number of walks to sample for each node.
    window_size (int): The actual context size which is considered in the language model.
    worker (int): The number of workers for word2vec.
    iteration (int): The number of training iterations in word2vec.
    p (float): The return parameter in node2vec.
    q (float): The in-out parameter in node2vec.

model_name = 'node2vec'
train(G)
Trains the model on the input graph G and returns the learned node embeddings.
class cogdl.models.emb.complex.ComplEx(nentity, nrelation, hidden_dim, gamma, double_entity_embedding=False, double_relation_embedding=False)
Bases: cogdl.models.emb.knowledge_base.KGEModel
Implementation of the ComplEx model from the paper "Complex Embeddings for Simple Link Prediction" <http://proceedings.mlr.press/v48/trouillon16.pdf>, adapted from KnowledgeGraphEmbedding <https://github.com/DeepGraphLearning/KnowledgeGraphEmbedding>.

model_name = 'complex'
class cogdl.models.emb.pte.PTE(dimension, walk_length, walk_num, negative, batch_size, alpha)
Bases: cogdl.models.base_model.BaseModel
The PTE model from the "PTE: Predictive Text Embedding through Large-scale Heterogeneous Text Networks" paper.
Args:
    hidden_size (int): The dimension of node representation.
    walk_length (int): The walk length.
    walk_num (int): The number of walks to sample for each node.
    negative (int): The number of negative samples for each edge.
    batch_size (int): The batch size used when training PTE.
    alpha (float): The initial learning rate of SGD.

model_name = 'pte'
train(G, node_type)
Trains the model on graph G with the given node types and returns the learned node embeddings.
class cogdl.models.emb.netsmf.NetSMF(dimension, window_size, negative, num_round, worker)
Bases: cogdl.models.base_model.BaseModel
The NetSMF model from the "NetSMF: Large-Scale Network Embedding as Sparse Matrix Factorization" paper.
Args:
    hidden_size (int): The dimension of node representation.
    window_size (int): The actual context size which is considered in the language model.
    negative (int): The number of negative samples in negative sampling.
    num_round (int): The number of rounds in NetSMF.
    worker (int): The number of workers for NetSMF.

model_name = 'netsmf'
train(G)
Trains the model on the input graph G and returns the learned node embeddings.
class cogdl.models.emb.line.LINE(dimension, walk_length, walk_num, negative, batch_size, alpha, order)
Bases: cogdl.models.base_model.BaseModel
The LINE model from the "LINE: Large-scale Information Network Embedding" paper.
Args:
    hidden_size (int): The dimension of node representation.
    walk_length (int): The walk length.
    walk_num (int): The number of walks to sample for each node.
    negative (int): The number of negative samples for each edge.
    batch_size (int): The batch size used when training LINE.
    alpha (float): The initial learning rate of SGD.
    order (int): 1 preserves first-order proximity, 2 preserves second-order proximity, and 3 uses both (each contributing dimension/2 of the node representation).

model_name = 'line'
train(G)
Trains the model on the input graph G and returns the learned node embeddings.
class cogdl.models.emb.sdne.SDNE(hidden_size1, hidden_size2, droput, alpha, beta, nu1, nu2, max_epoch, lr, cpu)
Bases: cogdl.models.base_model.BaseModel
The SDNE model from the "Structural Deep Network Embedding" paper.
Args:
    hidden_size1 (int): The size of the first hidden layer.
    hidden_size2 (int): The size of the second hidden layer.
    droput (float): Dropout rate.
    alpha (float): Trade-off parameter between the first-order and second-order objective functions in SDNE.
    beta (float): Parameter of the second-order objective function in SDNE.
    nu1 (float): Parameter of L1 normalization in SDNE.
    nu2 (float): Parameter of L2 normalization in SDNE.
    max_epoch (int): The maximum number of epochs in the training step.
    lr (float): Learning rate in SDNE.
    cpu (bool): Whether to train SDNE on CPU rather than GPU.

model_name = 'sdne'
train(G)
Trains the model on the input graph G and returns the learned node embeddings.
class cogdl.models.emb.prone.ProNE(dimension, step, mu, theta)
Bases: cogdl.models.base_model.BaseModel
The ProNE model from the "ProNE: Fast and Scalable Network Representation Learning" paper.
Args:
    hidden_size (int): The dimension of node representation.
    step (int): The number of terms in the Chebyshev expansion.
    mu (float): Parameter in ProNE.
    theta (float): Parameter in ProNE.

model_name = 'prone'
train(G)
Trains the model on the input graph G and returns the learned node embeddings.
GNN Model

class cogdl.models.nn.dgi.DGIModel(in_feats, hidden_size, activation)
Bases: cogdl.models.self_supervised_model.SelfSupervisedContrastiveModel

forward(graph)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'dgi'
class cogdl.models.nn.mvgrl.MVGRL(in_feats, hidden_size, sample_size=2000, batch_size=4, alpha=0.2, dataset='cora')
Bases: cogdl.models.self_supervised_model.SelfSupervisedContrastiveModel

forward(graph)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'mvgrl'
class cogdl.models.nn.patchy_san.PatchySAN(batch_size, num_features, num_classes, num_sample, stride, num_neighbor, iteration)
Bases: cogdl.models.base_model.BaseModel
The Patchy-SAN model from the "Learning Convolutional Neural Networks for Graphs" paper.
Args:
    batch_size (int): The batch size of training.
    sample (int): Number of chosen vertices.
    stride (int): Node selection stride.
    neighbor (int): The number of neighbors for each node.
    iteration (int): The number of training iterations.

forward(batch)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'patchy_san'
class cogdl.models.nn.pyg_cheb.Chebyshev(in_feats, hidden_size, out_feats, num_layers, dropout, filter_size)
Bases: cogdl.models.base_model.BaseModel

forward(graph)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'chebyshev'
class cogdl.models.nn.gcn.TKipfGCN(in_feats, hidden_size, out_feats, num_layers, dropout, activation='relu', residual=False, norm=None, actnn=False)
Bases: cogdl.models.base_model.BaseModel
The GCN model from the "Semi-Supervised Classification with Graph Convolutional Networks" paper.
Args:
    in_features (int): Number of input features.
    out_features (int): Number of classes.
    hidden_size (int): The dimension of node representation.
    dropout (float): Dropout rate for model training.

forward(graph)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'gcn'
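A minimal instantiation sketch grounded in the constructor above; the Cora-like sizes are illustrative, and passing a cogdl Graph at call time is an assumption about the forward interface.

    from cogdl.models.nn.gcn import TKipfGCN

    model = TKipfGCN(in_feats=1433, hidden_size=64, out_feats=7,
                     num_layers=2, dropout=0.5)
    # logits = model(graph)  # `graph` would be a cogdl.data.Graph (see forward above)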
class cogdl.models.nn.gdc_gcn.GDC_GCN(nfeat, nhid, nclass, dropout, alpha, t, k, eps, gdctype)
Bases: cogdl.models.base_model.BaseModel
The GDC model from the "Diffusion Improves Graph Learning" paper, with the PPR and heat-kernel variants combined with GCN.
Args:
    num_features (int): Number of input features in the PPR-preprocessed dataset.
    num_classes (int): Number of classes.
    hidden_size (int): The dimension of node representation.
    dropout (float): Dropout rate for model training.
    alpha (float): PPR polynomial filter parameter, between 0 and 1.
    t (float): Heat polynomial filter parameter.
    k (int): Top k nodes retained during sparsification.
    eps (float): Threshold for clipping.
    gdc_type (str): One of "none", "ppr", "heat".

forward(graph)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'gdc_gcn'
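For orientation, the two diffusion variants selected by gdctype are usually written as follows (a sketch following the GDC paper; the notation, with the normalized adjacency matrix written as \tilde{A}, is assumed, and alpha and t are the constructor parameters above):

    S_{\mathrm{PPR}} = \alpha \left( I - (1 - \alpha)\,\tilde{A} \right)^{-1},
    \qquad
    S_{\mathrm{heat}} = \exp\!\left( t\,\tilde{A} - t\,I \right)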
class cogdl.models.nn.pyg_hgpsl.HGPSL(num_features, num_classes, hidden_size, dropout, pooling, sample_neighbor, sparse_attention, structure_learning, lamb)
Bases: cogdl.models.base_model.BaseModel

forward(data)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'hgpsl'
class cogdl.models.nn.graphsage.Graphsage(num_features, num_classes, hidden_size, num_layers, sample_size, dropout, aggr)
Bases: cogdl.models.base_model.BaseModel

forward(*args)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'graphsage'
class cogdl.models.nn.compgcn.LinkPredictCompGCN(num_entities, num_rels, hidden_size, num_bases=0, layers=1, sampling_rate=0.01, score_func='conve', penalty=0.001, dropout=0.0, lbl_smooth=0.1, opn='sub')
Bases: cogdl.utils.link_prediction_utils.GNNLinkPredict, cogdl.models.base_model.BaseModel

forward(graph)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'compgcn'
class cogdl.models.nn.drgcn.DrGCN(num_features, num_classes, hidden_size, num_layers, dropout, norm=None, activation='relu')
Bases: cogdl.models.base_model.BaseModel

forward(graph)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'drgcn'
class cogdl.models.nn.pyg_gpt_gnn.GPT_GNN
Bases: cogdl.models.supervised_model.SupervisedHomogeneousNodeClassificationModel, cogdl.models.supervised_model.SupervisedHeterogeneousNodeClassificationModel

static get_trainer(args) → Optional[Type[Union[cogdl.trainers.gpt_gnn_trainer.GPT_GNNHomogeneousTrainer, cogdl.trainers.gpt_gnn_trainer.GPT_GNNHeterogeneousTrainer]]]

model_name = 'gpt_gnn'
class cogdl.models.nn.pyg_graph_unet.GraphUnet(in_feats: int, hidden_size: int, out_feats: int, pooling_layer: int, pooling_rates: List[float], n_dropout: float = 0.5, adj_dropout: float = 0.3, activation: str = 'elu', improved: bool = False, aug_adj: bool = False)
Bases: cogdl.models.base_model.BaseModel

forward(graph: cogdl.data.data.Graph) → torch.Tensor
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'unet'
class cogdl.models.nn.gcnmix.GCNMix(in_feat, hidden_size, num_classes, k, temperature, alpha, rampup_starts, rampup_ends, final_consistency_weight, ema_decay, dropout)
Bases: cogdl.models.base_model.BaseModel

forward(graph)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'gcnmix'
class cogdl.models.nn.diffpool.DiffPool(in_feats, hidden_dim, embed_dim, num_classes, num_layers, num_pool_layers, assign_dim, pooling_ratio, batch_size, dropout=0.5, no_link_pred=True, concat=False, use_bn=False)
Bases: cogdl.models.base_model.BaseModel
DIFFPOOL from the paper "Hierarchical Graph Representation Learning with Differentiable Pooling".
in_feats : int
    Size of each input sample.
hidden_dim : int
    Size of the hidden layer dimension of the GNN.
embed_dim : int
    Size of the embedded node feature, i.e. the output size of the GNN.
num_classes : int
    Number of target classes.
num_layers : int
    Number of GNN layers.
num_pool_layers : int
    Number of pooling layers.
assign_dim : int
    Embedding size after the first pooling.
pooling_ratio : float
    Ratio kept by each pooling layer.
batch_size : int
    Size of each mini-batch.
dropout : float, optional
    Dropout rate, default: 0.5.
no_link_pred : bool, optional
    If True, disable the auxiliary link prediction loss, default: True.

forward(batch)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'diffpool'
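As a reminder of the pooling mechanism (a sketch of the assignment step from the DiffPool paper; Z^{(l)} denotes the GNN node embeddings at level l):

    S^{(l)} = \operatorname{softmax}\!\left( \mathrm{GNN}^{(l)}_{\mathrm{pool}}\left( A^{(l)}, X^{(l)} \right) \right),
    \quad
    X^{(l+1)} = {S^{(l)}}^{\top} Z^{(l)},
    \quad
    A^{(l+1)} = {S^{(l)}}^{\top} A^{(l)} S^{(l)}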
class cogdl.models.nn.gcnii.GCNII(in_feats, hidden_size, out_feats, num_layers, dropout=0.5, alpha=0.1, lmbda=1, wd1=0.0, wd2=0.0, residual=False, actnn=False)
Bases: cogdl.models.base_model.BaseModel
Implementation of GCNII from the paper "Simple and Deep Graph Convolutional Networks" <https://arxiv.org/abs/2007.02133>.
in_feats : int
    Size of each input sample.
hidden_size : int
    Size of each hidden unit.
out_feats : int
    Size of each output sample.
num_layers : int
dropout : float
alpha : float
    Parameter of the initial residual connection.
lmbda : float
    Parameter of the identity mapping.
wd1 : float
    Weight decay for the fully-connected layers.
wd2 : float
    Weight decay for the convolutional layers.

forward(graph)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'gcnii'
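The roles of alpha and lmbda above correspond to the GCNII layer update (a sketch from the paper; \tilde{P} denotes the normalized adjacency matrix with self-loops):

    H^{(\ell+1)} = \sigma\left( \left( (1-\alpha)\,\tilde{P} H^{(\ell)} + \alpha\, H^{(0)} \right)\left( (1-\beta_\ell) I + \beta_\ell W^{(\ell)} \right) \right),
    \qquad
    \beta_\ell = \log\!\left( \frac{\lambda}{\ell} + 1 \right)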
class cogdl.models.nn.sign.MLP(in_feats, out_feats, hidden_size, num_layers, dropout=0.0, activation='relu', norm=None, act_first=False, bias=True)
Bases: cogdl.models.base_model.BaseModel

forward(x)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'mlp'
class cogdl.models.nn.pyg_gcn.GCN(num_features, num_classes, hidden_size, num_layers, dropout)
Bases: cogdl.models.base_model.BaseModel

forward(graph)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'pyg_gcn'
class cogdl.models.nn.mixhop.MixHop(num_features, num_classes, dropout, layer1_pows, layer2_pows)
Bases: cogdl.models.base_model.BaseModel

forward(graph)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'mixhop'
class cogdl.models.nn.gat.GAT(in_feats, hidden_size, out_features, num_layers, dropout, attn_drop, alpha, nhead, residual, last_nhead, norm=None)
Bases: cogdl.models.base_model.BaseModel
The GAT model from the "Graph Attention Networks" paper.
Args:
    num_features (int): Number of input features.
    num_classes (int): Number of classes.
    hidden_size (int): The dimension of node representation.
    dropout (float): Dropout rate for model training.
    alpha (float): Negative slope of the LeakyReLU in the attention computation.
    nheads (int): Number of attention heads.

forward(graph)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'gat'
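The alpha (LeakyReLU slope) and nhead parameters above enter the standard GAT attention computation (a sketch from the paper):

    e_{ij} = \mathrm{LeakyReLU}\!\left( \mathbf{a}^{\top} \left[ W\vec{h}_i \,\Vert\, W\vec{h}_j \right] \right),
    \qquad
    \alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k \in \mathcal{N}_i} \exp(e_{ik})}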
class cogdl.models.nn.han.HAN(num_edge, w_in, w_out, num_class, num_nodes, num_layers)
Bases: cogdl.models.base_model.BaseModel

forward(graph, target_x, target)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'han'
class cogdl.models.nn.ppnp.PPNP(nfeat, nhid, nclass, num_layers, dropout, propagation, alpha, niter, cache=True)
Bases: cogdl.models.base_model.BaseModel

forward(graph)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'ppnp'
class cogdl.models.nn.grace.GRACE(in_feats: int, hidden_size: int, proj_hidden_size: int, num_layers: int, drop_feature_rates: List[float], drop_edge_rates: List[float], tau: float = 0.5, activation: str = 'relu', batch_size: int = -1)
Bases: cogdl.models.self_supervised_model.SelfSupervisedContrastiveModel

forward(graph: cogdl.data.data.Graph, x: torch.Tensor)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'grace'
class cogdl.models.nn.dgl_jknet.JKNet(in_features, out_features, n_layers, n_units, node_aggregation, layer_aggregation)
Bases: cogdl.models.supervised_model.SupervisedHomogeneousNodeClassificationModel

forward(graph, x)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'jknet'
class cogdl.models.nn.pprgo.PPRGo(in_feats, hidden_size, out_feats, num_layers, alpha, dropout, activation='relu', nprop=2)
Bases: cogdl.models.base_model.BaseModel

forward(x, targets, ppr_scores)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'pprgo'
class cogdl.models.nn.gin.GIN(num_layers, in_feats, out_feats, hidden_dim, num_mlp_layers, eps=0, pooling='sum', train_eps=False, dropout=0.5)
Bases: cogdl.models.base_model.BaseModel
Graph Isomorphism Network from the paper "How Powerful are Graph Neural Networks?".
num_layers : int
    Number of GIN layers.
in_feats : int
    Size of each input sample.
out_feats : int
    Size of each output sample.
hidden_dim : int
    Size of each hidden layer dimension.
num_mlp_layers : int
    Number of MLP layers.
eps : float32, optional
    Initial epsilon value, default: 0.
pooling : str, optional
    Aggregator type to use, default: 'sum'.
train_eps : bool, optional
    If True, epsilon is a learnable parameter, default: False.

forward(batch)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'gin'
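The eps and num_mlp_layers parameters above correspond to the GIN update rule (a sketch from the paper):

    h_v^{(k)} = \mathrm{MLP}^{(k)}\!\left( \left( 1 + \epsilon^{(k)} \right) h_v^{(k-1)} + \sum_{u \in \mathcal{N}(v)} h_u^{(k-1)} \right)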
class cogdl.models.nn.pyg_dgcnn.DGCNN(in_feats, hidden_dim, out_feats, k=20, dropout=0.5)
Bases: cogdl.models.base_model.BaseModel
EdgeConv and DynamicGraph from the paper "Dynamic Graph CNN for Learning on Point Clouds" <https://arxiv.org/pdf/1801.07829.pdf>.
in_feats : int
    Size of each input sample.
out_feats : int
    Size of each output sample.
hidden_dim : int
    Dimension of the hidden layer embedding.
k : int
    Number of nearest neighbors.

forward(batch)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'dgcnn'
class cogdl.models.nn.grand.Grand(nfeat, nhid, nclass, input_droprate, hidden_droprate, use_bn, dropnode_rate, tem, lam, order, sample, alpha)
Bases: cogdl.models.base_model.BaseModel
Implementation of GRAND from the paper "Graph Random Neural Networks for Semi-Supervised Learning on Graphs" <https://arxiv.org/abs/2005.11079>.
nfeat : int
    Size of each input feature.
nhid : int
    Size of hidden features.
nclass : int
    Number of output classes.
input_droprate : float
    Dropout rate of input features.
hidden_droprate : float
    Dropout rate of hidden features.
use_bn : bool
    Whether to use batch normalization.
dropnode_rate : float
    Rate of dropping elements of input features.
tem : float
    Temperature used to sharpen predictions.
lam : float
    Weight of the consistency loss on unlabelled data.
order : int
    Order of the adjacency matrix.
sample : int
    Number of augmentations for the consistency loss.
alpha : float

forward(graph)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'grand'
class cogdl.models.nn.pyg_gtn.GTN(num_edge, num_channels, w_in, w_out, num_class, num_nodes, num_layers)
Bases: cogdl.models.base_model.BaseModel

forward(graph, target_x, target)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'gtn'
class cogdl.models.nn.rgcn.LinkPredictRGCN(num_entities, num_rels, hidden_size, num_layers, regularizer='basis', num_bases=None, self_loop=True, sampling_rate=0.01, penalty=0, dropout=0.0, self_dropout=0.0)
Bases: cogdl.utils.link_prediction_utils.GNNLinkPredict, cogdl.models.base_model.BaseModel

forward(graph)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'rgcn'
class cogdl.models.nn.deepergcn.DeeperGCN(in_feat, hidden_size, out_feat, num_layers, activation='relu', dropout=0.0, aggr='max', beta=1.0, p=1.0, learn_beta=False, learn_p=False, learn_msg_scale=True, use_msg_norm=False, edge_attr_size=None)
Bases: cogdl.models.base_model.BaseModel

forward(graph)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'deepergcn'
class cogdl.models.nn.drgat.DrGAT(num_features, num_classes, hidden_size, num_heads, dropout)
Bases: cogdl.models.base_model.BaseModel

forward(graph)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'drgat'
class cogdl.models.nn.infograph.InfoGraph(in_feats, hidden_dim, out_feats, num_layers=3, sup=False)
Bases: cogdl.models.base_model.BaseModel
Implementation of InfoGraph from the paper "InfoGraph: Unsupervised and Semi-supervised Graph-Level Representation Learning via Mutual Information Maximization" <https://openreview.net/forum?id=r1lfF2NYvH>.
in_feats : int
    Size of each input sample.
out_feats : int
    Size of each output sample.
num_layers : int, optional
    Number of MLP layers in the encoder, default: 3.
sup : bool, optional
    Use the supervised variant if True, default: False.

forward(batch)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'infograph'
class cogdl.models.nn.dropedge_gcn.DropEdge_GCN(nfeat, nhid, nclass, nhidlayer, dropout, baseblock, inputlayer, outputlayer, nbaselayer, activation, withbn, withloop, aggrmethod)
Bases: cogdl.models.base_model.BaseModel
DropEdge: Towards Deep Graph Convolutional Networks on Node Classification. Applies DropEdge to GCN; see https://arxiv.org/pdf/1907.10903.pdf. The model implements a single kind of DeepGCN block. The architecture is: inputlayer(nfeat) -- block(nbaselayer, nhid) -- ... -- outputlayer(nclass) -- softmax(nclass).
The total number of layers is nhidlayer * nbaselayer + 2. All options are configurable.
Args:
    nfeat: the input feature dimension.
    nhid: the hidden feature dimension.
    nclass: the output feature dimension.
    nhidlayer: the number of hidden blocks.
    dropout: the dropout ratio.
    baseblock: the base block type; one of "mutigcn", "resgcn", "densegcn", "inceptiongcn".
    inputlayer: the input layer type; one of "gcn", "dense", "none".
    outputlayer: the output layer type; one of "gcn", "dense".
    nbaselayer: the number of layers in one hidden block.
    activation: the activation function; default is ReLU.
    withbn: whether to use batch normalization in graph convolution.
    withloop: whether to use self-feature modeling in graph convolution.
    aggrmethod: the aggregation function for the base block; "concat" or "add". For "resgcn" the default is "add", for the others the default is "concat".

forward(graph)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'dropedge_gcn'
class cogdl.models.nn.disengcn.DisenGCN(in_feats, hidden_size, num_classes, K, iterations, tau, dropout, activation)
Bases: cogdl.models.base_model.BaseModel

forward(graph)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'disengcn'
class cogdl.models.nn.mlp.MLP(in_feats, out_feats, hidden_size, num_layers, dropout=0.0, activation='relu', norm=None, act_first=False, bias=True)
Bases: cogdl.models.base_model.BaseModel

forward(x)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'mlp'
class cogdl.models.nn.sgc.sgc(in_feats, out_feats)
Bases: cogdl.models.base_model.BaseModel

forward(graph)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'sgc'
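No docstring is shown for sgc above. Assuming, as the name suggests, that it implements SGC from "Simplifying Graph Convolutional Networks" (Wu et al., 2019), the model reduces to a linear classifier on K-step propagated features, with \tilde{S} the normalized adjacency matrix with self-loops:

    \hat{Y} = \operatorname{softmax}\!\left( \tilde{S}^{K} X\, \Theta \right)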
class cogdl.models.nn.stpgnn.stpgnn(args)
Bases: cogdl.models.base_model.BaseModel
Implementation of models from the paper "Strategies for Pre-training Graph Neural Networks" <https://arxiv.org/abs/1905.12265>.

model_name = 'stpgnn'
class cogdl.models.nn.sortpool.SortPool(in_feats, hidden_dim, num_classes, num_layers, out_channel, kernel_size, k=30, dropout=0.5)
Bases: cogdl.models.base_model.BaseModel
Implementation of SortPooling from the paper "An End-to-End Deep Learning Architecture for Graph Classification" <https://www.cse.wustl.edu/~muhan/papers/AAAI_2018_DGCNN.pdf>.
in_feats : int
    Size of each input sample.
out_feats : int
    Size of each output sample.
hidden_dim : int
    Dimension of the hidden layer embedding.
num_classes : int
    Number of target classes.
num_layers : int
    Number of graph neural network layers before pooling.
k : int, optional
    Number of selected features to sort, default: 30.
out_channel : int
    Number of output channels of the first convolution.
kernel_size : int
    Kernel size of the first convolution.
dropout : float, optional
    Dropout rate, default: 0.5.

forward(batch)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'sortpool'
class cogdl.models.nn.pyg_srgcn.SRGCN(in_feats, hidden_size, out_feats, attention, activation, nhop, normalization, dropout, node_dropout, alpha, nhead, subheads)
Bases: cogdl.models.base_model.BaseModel

forward(graph)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'srgcn'
class cogdl.models.nn.dgl_gcc.GCC(load_path)
Bases: cogdl.models.base_model.BaseModel

model_name = 'gcc'
train(data)
Runs the model on the given data and returns node representations. Note that this overrides torch.nn.Module.train(mode), which by default only switches the module between training and evaluation mode.
class cogdl.models.nn.unsup_graphsage.SAGE(num_features, hidden_size, num_layers, sample_size, dropout, walk_length, negative_samples)
Bases: cogdl.models.base_model.BaseModel
Implementation of unsupervised GraphSAGE from the paper "Inductive Representation Learning on Large Graphs" <https://cs.stanford.edu/people/jure/pubs/graphsage-nips17.pdf>.
num_features : int
    Size of each input sample.
hidden_size : int
num_layers : int
    The number of GNN layers.
sample_size : list
    The number of sampled neighbors of different orders.
dropout : float
walk_length : int
    The length of the random walk.
negative_samples : int

forward(graph)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'unsup_graphsage'
class cogdl.models.nn.pyg_sagpool.SAGPoolNetwork(nfeat, nhid, nclass, dropout, pooling_ratio, pooling_layer_type)
Bases: cogdl.models.base_model.BaseModel

forward(batch)
Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward(), since the instance runs the registered hooks while a direct forward() call silently ignores them.

model_name = 'sagpool'
AGC Model

Model Module

cogdl.models.register_model(name)
New model types can be added to cogdl with the register_model() function decorator. For example:

    @register_model('gat')
    class GAT(BaseModel):
        (...)

Args:
    name (str): the name of the model.
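A slightly fuller sketch of the registration pattern: the add_args / build_model_from_args hooks mirror the convention shown for DeepWalk above, and the model itself is hypothetical.

    from cogdl.models import register_model
    from cogdl.models.base_model import BaseModel

    @register_model("my_mlp")
    class MyMLP(BaseModel):
        """Hypothetical model used only to illustrate registration."""

        @staticmethod
        def add_args(parser):
            parser.add_argument("--hidden-size", type=int, default=64)

        @classmethod
        def build_model_from_args(cls, args):
            return cls(args.hidden_size)

        def __init__(self, hidden_size):
            super().__init__()
            self.hidden_size = hidden_size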