layers
GCC module
class cogdl.layers.gcc_module.ApplyNodeFunc(mlp, use_selayer)
    Bases: torch.nn.modules.module.Module
    Update the node feature hv with MLP, BN and ReLU.
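The update above can be sketched without torch. This is a hedged illustration, not the module's implementation: the real class wraps an arbitrary MLP and `torch.nn.BatchNorm1d`, while here `W` and `b` are hypothetical single-layer MLP parameters.

```python
import numpy as np

def apply_node_func(h, W, b, eps=1e-5):
    """Sketch of ApplyNodeFunc: MLP, then batch norm, then ReLU."""
    z = h @ W + b                          # "MLP" (one linear layer here)
    mu, var = z.mean(axis=0), z.var(axis=0)
    z = (z - mu) / np.sqrt(var + eps)      # batch normalization
    return np.maximum(z, 0.0)              # ReLU

h = np.array([[1.0, -1.0], [2.0, 0.0], [0.0, 1.0]])
out = apply_node_func(h, np.eye(2), np.zeros(2))
```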
class cogdl.layers.gcc_module.GATLayer(g, in_dim, out_dim)
    Bases: torch.nn.modules.module.Module
class cogdl.layers.gcc_module.GraphEncoder(positional_embedding_size=32, max_node_freq=8, max_edge_freq=8, max_degree=128, freq_embedding_size=32, degree_embedding_size=32, output_dim=32, node_hidden_dim=32, edge_hidden_dim=32, num_layers=6, num_heads=4, num_step_set2set=6, num_layer_set2set=3, norm=False, gnn_model='mpnn', degree_input=False, lstm_as_gate=False)
    Bases: torch.nn.modules.module.Module
    MPNN from Neural Message Passing for Quantum Chemistry.
    Parameters:
    - node_input_dim (int): Dimension of the input node features; defaults to 15.
    - edge_input_dim (int): Dimension of the input edge features; defaults to 15.
    - output_dim (int): Dimension of the prediction; defaults to 12.
    - node_hidden_dim (int): Dimension of the node features in hidden layers; defaults to 64.
    - edge_hidden_dim (int): Dimension of the edge features in hidden layers; defaults to 128.
    - num_step_message_passing (int): Number of message-passing steps; defaults to 6.
    - num_step_set2set (int): Number of Set2Set steps.
    - num_layer_set2set (int): Number of Set2Set layers.
forward(g, return_all_outputs=False)
    Predict molecule labels.
    Parameters:
    - g (DGLGraph): Input DGLGraph for molecule(s).
    - n_feat (float32 tensor of shape (B1, D1)): Node features; B1 is the number of nodes and D1 the node feature size.
    - e_feat (float32 tensor of shape (B2, D2)): Edge features; B2 is the number of edges and D2 the edge feature size.
    Returns:
    - res: Predicted labels.
class cogdl.layers.gcc_module.MLP(num_layers, input_dim, hidden_dim, output_dim, use_selayer)
    Bases: torch.nn.modules.module.Module
    MLP with linear output.
class cogdl.layers.gcc_module.SELayer(in_channels, se_channels)
    Bases: torch.nn.modules.module.Module
    Squeeze-and-excitation networks.
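The squeeze-and-excitation idea can be sketched in a few lines. This is a hedged numpy illustration of the general pattern (squeeze by global pooling, excite through two FC layers, rescale channels), not the module's torch implementation; `W1` and `W2` stand in for its learned weights.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_layer(h, W1, W2):
    """Squeeze-and-excitation gate over node features h of shape (N, D)."""
    s = h.mean(axis=0)                          # squeeze: per-channel summary
    g = sigmoid(np.maximum(s @ W1, 0.0) @ W2)   # excitation: gate in (0, 1)
    return h * g                                # rescale each channel

h = np.ones((4, 3))
out = se_layer(h, np.eye(3), np.eye(3))  # identity weights for illustration
```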
class cogdl.layers.gcc_module.UnsupervisedGAT(node_input_dim, node_hidden_dim, edge_input_dim, num_layers, num_heads)
    Bases: torch.nn.modules.module.Module
class cogdl.layers.gcc_module.UnsupervisedGIN(num_layers, num_mlp_layers, input_dim, hidden_dim, output_dim, final_dropout, learn_eps, graph_pooling_type, neighbor_pooling_type, use_selayer)
    Bases: torch.nn.modules.module.Module
    GIN model.
class cogdl.layers.gcc_module.UnsupervisedMPNN(output_dim=32, node_input_dim=32, node_hidden_dim=32, edge_input_dim=32, edge_hidden_dim=32, num_step_message_passing=6, lstm_as_gate=False)
    Bases: torch.nn.modules.module.Module
    MPNN from Neural Message Passing for Quantum Chemistry.
    Parameters:
    - node_input_dim (int): Dimension of the input node features; defaults to 15.
    - edge_input_dim (int): Dimension of the input edge features; defaults to 15.
    - output_dim (int): Dimension of the prediction; defaults to 12.
    - node_hidden_dim (int): Dimension of the node features in hidden layers; defaults to 64.
    - edge_hidden_dim (int): Dimension of the edge features in hidden layers; defaults to 128.
    - num_step_message_passing (int): Number of message-passing steps; defaults to 6.
    - num_step_set2set (int): Number of Set2Set steps.
    - num_layer_set2set (int): Number of Set2Set layers.
forward(g, n_feat, e_feat)
    Predict molecule labels.
    Parameters:
    - g (DGLGraph): Input DGLGraph for molecule(s).
    - n_feat (float32 tensor of shape (B1, D1)): Node features; B1 is the number of nodes and D1 the node feature size.
    - e_feat (float32 tensor of shape (B2, D2)): Edge features; B2 is the number of edges and D2 the edge feature size.
    Returns:
    - res: Predicted labels.
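One message-passing step of the kind MPNN performs can be sketched as follows. This is a hedged illustration only: in the real model the message matrix is produced per edge from `e_feat` by an edge network, and the aggregate feeds a GRU (or LSTM, when `lstm_as_gate=True`); here a single hypothetical matrix `edge_msg_W` is used for all edges.

```python
import numpy as np

def message_passing_step(h, edges, edge_msg_W):
    """Sum incoming messages edge_msg_W @ h[u] at each target node v."""
    out = np.zeros_like(h)
    for u, v in edges:
        out[v] += edge_msg_W @ h[u]   # aggregate messages over in-edges
    return out

h = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
edges = [(0, 2), (1, 2), (2, 0)]      # directed edges (u, v)
m = message_passing_step(h, edges, np.eye(2))
```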
GPT-GNN module
class cogdl.layers.gpt_gnn_module.Classifier(n_hid, n_out)
    Bases: torch.nn.modules.module.Module
class cogdl.layers.gpt_gnn_module.GNN(in_dim, n_hid, num_types, num_relations, n_heads, n_layers, dropout=0.2, conv_name='hgt', prev_norm=False, last_norm=False, use_RTE=True)
    Bases: torch.nn.modules.module.Module
class cogdl.layers.gpt_gnn_module.GPT_GNN(gnn, rem_edge_list, attr_decoder, types, neg_samp_num, device, neg_queue_size=0)
    Bases: torch.nn.modules.module.Module
class cogdl.layers.gpt_gnn_module.GeneralConv(conv_name, in_hid, out_hid, num_types, num_relations, n_heads, dropout, use_norm=True, use_RTE=True)
    Bases: torch.nn.modules.module.Module
class cogdl.layers.gpt_gnn_module.Graph
    Bases: object
    node_feature = None
        edge_list: indexes the adjacency matrix (time) by <target_type, source_type, relation_type, target_id, source_id>.
class
cogdl.layers.gpt_gnn_module.
HGTConv
(in_dim, out_dim, num_types, num_relations, n_heads, dropout=0.2, use_norm=True, use_RTE=True, **kwargs)[source]¶ Bases:
torch_geometric.nn.conv.message_passing.MessagePassing
class cogdl.layers.gpt_gnn_module.Matcher(n_hid, n_out, temperature=0.1)
    Bases: torch.nn.modules.module.Module
    Matches a pair of nodes to conduct link prediction, using multi-head attention as the matching model.
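The core scoring idea can be sketched without the attention machinery. This is a hedged simplification: the real Matcher learns multi-head attention weights, while this sketch keeps only the temperature-scaled similarity between the two embeddings.

```python
import numpy as np

def match_score(left, right, temperature=0.1):
    """Score node pairs by a temperature-scaled inner product.

    left, right: arrays of shape (num_pairs, dim); higher score means
    the pair is more likely to be linked.
    """
    return (left * right).sum(axis=-1) / temperature

left = np.array([[1.0, 0.0], [0.0, 1.0]])
right = np.array([[1.0, 0.0], [1.0, 0.0]])
scores = match_score(left, right)
```

A small temperature sharpens the contrast between matching and non-matching pairs before a softmax or sigmoid is applied.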
class cogdl.layers.gpt_gnn_module.RNNModel(n_word, ninp, nhid, nlayers, dropout=0.2)
    Bases: torch.nn.modules.module.Module
    Container module with an encoder, a recurrent module, and a decoder.
class cogdl.layers.gpt_gnn_module.RelTemporalEncoding(n_hid, max_len=240, dropout=0.2)
    Bases: torch.nn.modules.module.Module
    Implements the sinusoidal temporal encoding function.
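The sinusoidal construction is the same one used for Transformer positional encodings: even channels get a sine, odd channels a cosine, with geometrically spaced frequencies. A minimal numpy sketch (the real module additionally passes the encoding through a learned linear layer and dropout):

```python
import numpy as np

def rel_temporal_encoding(t, n_hid, base=10000.0):
    """Sinusoidal encoding of a (relative) time t into n_hid channels."""
    enc = np.zeros(n_hid)
    div = base ** (np.arange(0, n_hid, 2) / n_hid)  # frequency ladder
    enc[0::2] = np.sin(t / div)                     # even channels
    enc[1::2] = np.cos(t / div)                     # odd channels
    return enc

e = rel_temporal_encoding(t=5, n_hid=8)
```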
cogdl.layers.gpt_gnn_module.preprocess_dataset(dataset) → cogdl.layers.gpt_gnn_module.Graph
cogdl.layers.gpt_gnn_module.sample_subgraph(graph, time_range, sampled_depth=2, sampled_number=8, inp=None, feature_extractor=<function feature_OAG>)
    Samples a subgraph based on the connections between already-sampled nodes and the remaining nodes. A budget is maintained for each node type, indexed by <node_id, time>; currently sampled nodes are stored in layer_data. After the nodes are sampled, the sampled adjacency matrix is constructed.
Link Prediction module
class cogdl.layers.link_prediction_module.ConvELayer(dim, num_filter=20, kernel_size=7, k_w=10, dropout=0.3)
    Bases: torch.nn.modules.module.Module
class cogdl.layers.link_prediction_module.DistMultLayer
    Bases: torch.nn.modules.module.Module
class cogdl.layers.link_prediction_module.GNNLinkPredict(score_func, dim)
    Bases: torch.nn.modules.module.Module
cogdl.layers.link_prediction_module.cal_mrr(embedding, rel_embedding, edge_index, edge_type, scoring, protocol='raw', batch_size=1000, hits=[])
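The metrics this function reports are standard: given the (1-based) rank of each true triple among its corruptions, MRR is the mean of 1/rank and Hits@k is the fraction of triples ranked at or above k. A minimal sketch of the aggregation (the ranking itself, raw vs. filtered, is what `get_raw_rank` / `get_filtered_rank` compute):

```python
def mean_reciprocal_rank(ranks, hits=(1, 3, 10)):
    """Aggregate 1-based ranks into MRR and Hits@k."""
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    hits_at = {k: sum(r <= k for r in ranks) / len(ranks) for k in hits}
    return mrr, hits_at

mrr, hits_at = mean_reciprocal_rank([1, 2, 10])
```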
cogdl.layers.link_prediction_module.get_filtered_rank(heads, tails, rels, embedding, rel_embedding, batch_size, seen_data)
cogdl.layers.link_prediction_module.get_raw_rank(heads, tails, rels, embedding, rel_embedding, batch_size, scoring)
cogdl.layers.link_prediction_module.sampling_edge_uniform(edge_index, edge_types, edge_set, sampling_rate, num_rels, label_smoothing=0.0, num_entities=1)
    Args:
    - edge_index: edge index of the graph
    - edge_types: types of the edges
    - edge_set: set of all edges of the graph, (h, t, r)
    - sampling_rate:
    - num_rels:
    - label_smoothing (optional):
    - num_entities (optional):
    Returns:
    - sampled_edges: sampled existing edges
    - rels: types of the sampled existing edges
    - sampled_edges_all: existing edges together with corrupted edges
    - sampled_types_all: types of the existing and corrupted edges
    - labels: 0/1
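The sample-then-corrupt pattern can be sketched in plain Python. This is a hedged simplification: it samples a fraction of existing (h, t, r) triples and builds one tail-corrupted negative per positive, with labels 1/0. The real function also handles head corruption, checks corruptions against `edge_set`, and applies label smoothing.

```python
import random

def sample_edges_with_negatives(edges, sampling_rate, num_entities, seed=0):
    """Uniformly sample positives, then corrupt tails to make negatives."""
    rng = random.Random(seed)
    k = max(1, int(len(edges) * sampling_rate))
    pos = rng.sample(edges, k)                       # existing triples
    neg = [(h, rng.randrange(num_entities), r)       # corrupted tails
           for h, t, r in pos]
    edges_all = pos + neg
    labels = [1] * len(pos) + [0] * len(neg)
    return edges_all, labels

edges = [(0, 1, 0), (1, 2, 1), (2, 3, 0), (3, 0, 1)]
edges_all, labels = sample_edges_with_negatives(edges, 0.5, num_entities=4)
```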
Mean Aggregator module
MixHop module
PPRGo module
class cogdl.layers.pprgo_modules.PPRGoDataset(features: torch.Tensor, ppr_matrix: scipy.sparse.csr.csr_matrix, node_indices: torch.Tensor, labels_all: torch.Tensor = None)
    Bases: torch.utils.data.dataset.Dataset
ProNE module
class cogdl.layers.prone_module.Gaussian(mu=0.5, theta=1, rescale=False, k=3)
    Bases: object
class cogdl.layers.prone_module.NodeAdaptiveEncoder
    Bases: object
    - shrinks negative values in the signal/feature matrix
    - no learning
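The "shrink negatives, no learning" idea can be illustrated with a tiny numpy sketch. This is a hedged, hypothetical rule, not the encoder's actual formula: positive entries pass through unchanged, while negative entries are damped by a fixed factor (the real encoder derives its damping from the signal itself).

```python
import numpy as np

def shrink_negative(x, scale=0.1):
    """Keep positive entries; damp negative entries by a fixed factor."""
    return np.where(x > 0, x, scale * x)

out = shrink_negative(np.array([-1.0, 2.0, -0.5]))
```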
class cogdl.layers.prone_module.PPR(alpha=0.5, k=10)
    Bases: object
    Applies sparsification to accelerate computation.
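Truncated personalized PageRank can be sketched as a k-step power iteration. This is a hedged dense illustration; the class's point is precisely that it works on sparse matrices and drops small entries to speed things up, which this sketch omits.

```python
import numpy as np

def ppr_scores(adj, alpha=0.5, k=10):
    """k-step iteration pi <- alpha*I + (1 - alpha) * pi @ P,
    where P is the row-normalized adjacency matrix."""
    P = adj / adj.sum(axis=1, keepdims=True)   # row-stochastic transition
    pi = np.eye(len(adj))
    for _ in range(k):
        pi = alpha * np.eye(len(adj)) + (1.0 - alpha) * pi @ P
    return pi

adj = np.array([[0.0, 1.0], [1.0, 0.0]])       # a 2-node cycle
pi = ppr_scores(adj)
```

Each row of `pi` is a probability distribution: the restart term keeps most mass near the seed node, so `pi[i, i]` dominates.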
SELayer module
SRGCN module
class cogdl.layers.srgcn_module.EdgeAttention(in_feat)
    Bases: torch.nn.modules.module.Module
class cogdl.layers.srgcn_module.NodeAttention(in_feat)
    Bases: torch.nn.modules.module.Module
Strategies module
class cogdl.layers.strategies_layers.Discriminator(hidden_size)
    Bases: torch.nn.modules.module.Module
class cogdl.layers.strategies_layers.GINConv(hidden_size, input_layer=None, edge_emb=None, edge_encode=None, pooling='sum', feature_concat=False)
    Bases: torch.nn.modules.module.Module
    Implementation of the graph isomorphism network used in the paper “Strategies for Pre-training Graph Neural Networks” <https://arxiv.org/abs/1905.12265>.
    Parameters:
    - hidden_size (int): Size of each hidden unit.
    - input_layer (int, optional): Size of the input node features, if not None.
    - edge_emb (list, optional): Number of edge types, if not None.
    - edge_encode (int, optional): Size of each edge feature, if not None.
    - pooling (str): Pooling method.
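The GIN node update can be sketched in numpy. This is a hedged illustration of the update rule, not the module's code: `mlp` is a hypothetical callable standing in for the learned MLP, and the edge-feature handling that `edge_emb` / `edge_encode` enable is omitted.

```python
import numpy as np

def gin_update(h, adj, mlp, eps=0.0):
    """GIN update: h_v <- MLP((1 + eps) * h_v + sum of neighbor features)."""
    return mlp((1.0 + eps) * h + adj @ h)

h = np.array([[1.0], [2.0], [3.0]])          # scalar feature per node
adj = np.array([[0.0, 1.0, 0.0],             # path graph 0 - 1 - 2
                [1.0, 0.0, 1.0],
                [0.0, 1.0, 0.0]])
out = gin_update(h, adj, mlp=lambda x: np.maximum(x, 0.0))  # ReLU as "MLP"
```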
class cogdl.layers.strategies_layers.GNN(num_layers, hidden_size, JK='last', dropout=0.5, input_layer=None, edge_encode=None, edge_emb=None, num_atom_type=None, num_chirality_tag=None, concat=False)
    Bases: torch.nn.modules.module.Module
class cogdl.layers.strategies_layers.GNNPred(num_layers, hidden_size, num_tasks, JK='last', dropout=0, graph_pooling='mean', input_layer=None, edge_encode=None, edge_emb=None, num_atom_type=None, num_chirality_tag=None, concat=True)
    Bases: torch.nn.modules.module.Module
class cogdl.layers.strategies_layers.Pretrainer(args, transform=None)
    Bases: torch.nn.modules.module.Module
    Base class for the pre-training models of the paper “Strategies for Pre-training Graph Neural Networks” <https://arxiv.org/abs/1905.12265>.