utils
- cogdl.utils.utils.alias_draw(J, q)[source]
Draw a sample from a non-uniform discrete distribution using alias sampling.
- cogdl.utils.utils.alias_setup(probs)[source]
Compute utility lists for non-uniform sampling from discrete distributions. See https://hips.seas.harvard.edu/blog/2013/03/03/the-alias-method-efficient-sampling-with-many-discrete-outcomes/ for details.
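The alias method trades O(K) setup for O(1) draws. The following is a minimal standalone sketch of the two functions above (not the cogdl implementation): `alias_setup` builds the alias tables `J` and `q`, and `alias_draw` samples in constant time.

```python
import random

def alias_setup(probs):
    # Build the alias tables: q holds scaled probabilities, J holds alias indices.
    K = len(probs)
    q = [0.0] * K
    J = [0] * K
    smaller, larger = [], []
    for i, p in enumerate(probs):
        q[i] = K * p
        (smaller if q[i] < 1.0 else larger).append(i)
    # Pair each under-full bucket with an over-full one.
    while smaller and larger:
        small, large = smaller.pop(), larger.pop()
        J[small] = large
        q[large] = q[large] + q[small] - 1.0
        (smaller if q[large] < 1.0 else larger).append(large)
    return J, q

def alias_draw(J, q):
    # O(1) draw: pick a bucket uniformly, then keep it or jump to its alias.
    K = len(J)
    i = random.randrange(K)
    return i if random.random() < q[i] else J[i]
```

For example, `alias_setup([0.5, 0.3, 0.2])` yields tables from which `alias_draw` returns 0, 1, 2 with the original probabilities.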
- cogdl.utils.utils.download_url(url, folder, name=None, log=True)[source]
Downloads the content of a URL to a specific folder.
- cogdl.utils.utils.get_memory_usage(print_info=False)[source]
Get accurate GPU memory usage by querying the torch runtime.
- cogdl.utils.utils.get_norm_layer(norm: str, channels: int)[source]
- Parameters
norm – str, type of normalization: layernorm, batchnorm, or instancenorm
channels – int, size of features for normalization
- cogdl.utils.utils.untar(path, fname, deleteTar=True)[source]
Unpacks the given archive file to the same directory, then (by default) deletes the archive file.
- cogdl.utils.sampling.random_walk(start, length, indptr, indices, p=0.0)[source]
- Parameters
start – np.array(dtype=np.int32), starting nodes of the walks
length – int, length of each walk
indptr – np.array(dtype=np.int32), CSR row pointers of the adjacency matrix
indices – np.array(dtype=np.int32), CSR column indices of the adjacency matrix
p – float
- Returns
list(np.array(dtype=np.int32)), one walk per start node
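A pure-NumPy sketch of such a walk over CSR arrays follows. The role of `p` is not documented above; this sketch treats it as a restart probability (an assumption), and the `seed` parameter is added here for reproducibility only.

```python
import numpy as np

def random_walk(start, length, indptr, indices, p=0.0, seed=0):
    # Uniform random walks over a graph stored in CSR form.
    # With probability p (assumed restart semantics) the walk jumps
    # back to its start node instead of moving to a neighbor.
    rng = np.random.default_rng(seed)
    walks = []
    for s in start:
        walk = [int(s)]
        cur = int(s)
        for _ in range(length - 1):
            if p > 0 and rng.random() < p:
                cur = int(s)
            else:
                nbrs = indices[indptr[cur]:indptr[cur + 1]]
                if len(nbrs) == 0:
                    break  # dead end: stop the walk early
                cur = int(rng.choice(nbrs))
            walk.append(cur)
        walks.append(np.array(walk, dtype=np.int32))
    return walks
```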
- cogdl.utils.graph_utils.add_remaining_self_loops(edge_index, edge_weight=None, fill_value=1, num_nodes=None)[source]
- cogdl.utils.graph_utils.add_self_loops(edge_index, edge_weight=None, fill_value=1, num_nodes=None)[source]
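A standalone NumPy sketch of `add_remaining_self_loops` (not the cogdl implementation, which operates on tensors): it appends a `(i, i)` edge with weight `fill_value` for every node that does not already have one.

```python
import numpy as np

def add_remaining_self_loops(edge_index, edge_weight=None, fill_value=1, num_nodes=None):
    # edge_index is assumed here to be a (row, col) pair of integer arrays.
    row, col = edge_index
    N = num_nodes if num_nodes is not None else int(max(row.max(), col.max())) + 1
    if edge_weight is None:
        edge_weight = np.ones(len(row), dtype=np.float32)
    # Find nodes that do not yet have a (i, i) edge.
    has_loop = np.zeros(N, dtype=bool)
    has_loop[row[row == col]] = True
    missing = np.where(~has_loop)[0]
    # Append the missing self-loops with weight fill_value.
    new_row = np.concatenate([row, missing])
    new_col = np.concatenate([col, missing])
    new_weight = np.concatenate(
        [edge_weight, np.full(len(missing), fill_value, dtype=np.float32)]
    )
    return (new_row, new_col), new_weight
```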
- cogdl.utils.graph_utils.negative_edge_sampling(edge_index: Union[Tuple, torch.Tensor], num_nodes: Optional[int] = None, num_neg_samples: Optional[int] = None, undirected: bool = False)[source]
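Negative edge sampling can be sketched as rejection sampling over node pairs that are not existing edges. This is an assumption about the semantics, not the cogdl implementation; the `seed` parameter is added for reproducibility.

```python
import numpy as np

def negative_edge_sampling(edge_index, num_nodes, num_neg_samples, seed=0):
    # Rejection-sample (u, v) pairs that are neither self-loops
    # nor existing edges of the graph.
    rng = np.random.default_rng(seed)
    existing = set(zip(edge_index[0].tolist(), edge_index[1].tolist()))
    neg = []
    while len(neg) < num_neg_samples:
        u = int(rng.integers(num_nodes))
        v = int(rng.integers(num_nodes))
        if u != v and (u, v) not in existing:
            neg.append((u, v))
    return np.array(neg, dtype=np.int64).T  # shape (2, num_neg_samples)
```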
- cogdl.utils.graph_utils.to_undirected(edge_index, num_nodes=None)[source]
Converts the graph given by edge_index to an undirected graph, so that \((j,i) \in \mathcal{E}\) for every edge \((i,j) \in \mathcal{E}\).
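The operation amounts to adding each edge's reverse and dropping duplicates. A NumPy sketch (not the cogdl implementation, which returns tensors):

```python
import numpy as np

def to_undirected(edge_index, num_nodes=None):
    row, col = edge_index
    # Add each edge's reverse, then keep only unique (row, col) columns.
    all_row = np.concatenate([row, col])
    all_col = np.concatenate([col, row])
    pairs = np.unique(np.stack([all_row, all_col]), axis=1)
    return pairs[0], pairs[1]
```

Note that `np.unique(..., axis=1)` also sorts the edges lexicographically by source, then target.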
- class cogdl.utils.link_prediction_utils.ConvELayer(dim, num_filter=20, kernel_size=7, k_w=10, dropout=0.3)[source]
Bases: torch.nn.modules.module.Module
- forward(sub_emb, obj_emb, rel_emb)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class cogdl.utils.link_prediction_utils.DistMultLayer[source]
Bases: torch.nn.modules.module.Module
- forward(sub_emb, obj_emb, rel_emb)[source]
Defines the computation performed at every call; see torch.nn.Module.forward.
- class cogdl.utils.link_prediction_utils.GNNLinkPredict[source]
Bases: torch.nn.modules.module.Module
- forward(graph)[source]
Defines the computation performed at every call; see torch.nn.Module.forward.
- cogdl.utils.link_prediction_utils.cal_mrr(embedding, rel_embedding, edge_index, edge_type, scoring, protocol='raw', batch_size=1000, hits=[])[source]
- cogdl.utils.link_prediction_utils.get_filtered_rank(heads, tails, rels, embedding, rel_embedding, batch_size, seen_data)[source]
- cogdl.utils.link_prediction_utils.get_raw_rank(heads, tails, rels, embedding, rel_embedding, batch_size, scoring)[source]
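The "raw" protocol ranks the true entity among all candidates by score, counting every better-scored candidate (the "filtered" protocol would first discard candidates that form other true triples). A minimal sketch of the metric itself, independent of the embedding machinery above:

```python
import numpy as np

def mean_reciprocal_rank(scores, true_idx):
    # scores: (num_queries, num_candidates) array of candidate scores.
    # true_idx: index of the correct candidate for each query.
    # Raw protocol: rank = 1 + number of candidates scored strictly higher.
    ranks = []
    for s, t in zip(scores, true_idx):
        rank = int((s > s[t]).sum()) + 1
        ranks.append(rank)
    return float(np.mean(1.0 / np.array(ranks)))
```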
- cogdl.utils.link_prediction_utils.sampling_edge_uniform(edge_index, edge_types, edge_set, sampling_rate, num_rels, label_smoothing=0.0, num_entities=1)[source]
- Parameters
edge_index – edge index of the graph
edge_types – relation type of each edge
edge_set – set of all edges of the graph, as (h, t, r) triples
sampling_rate – fraction of edges to sample
num_rels – number of relation types
label_smoothing (Optional) – label smoothing factor applied to the 0/1 labels
num_entities (Optional) – number of entities, used when sampling corrupted edges
- Returns
sampled_edges: sampled existing edges
rels: types of the sampled existing edges
sampled_edges_all: existing edges together with corrupted edges
sampled_types_all: types of the existing and corrupted edges
labels: 0/1 labels indicating whether each edge exists
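The scheme above can be sketched as: sample a fraction of the (h, t, r) triples, then corrupt each sampled triple by replacing its head or tail with a random entity, labeling originals 1 and corruptions 0. This simplified version (an assumption about the semantics, omitting the edge_set membership check and label smoothing) illustrates the shape of the output:

```python
import numpy as np

def sampling_edge_uniform(edges, sampling_rate, num_entities, seed=0):
    # edges: array-like of (h, t, r) triples.
    rng = np.random.default_rng(seed)
    edges = np.asarray(edges)
    n = max(1, int(len(edges) * sampling_rate))
    idx = rng.choice(len(edges), size=n, replace=False)
    pos = edges[idx]
    # Corrupt each positive triple: replace head or tail with a random entity.
    neg = pos.copy()
    corrupt_head = rng.random(n) < 0.5
    rand_ent = rng.integers(num_entities, size=n)
    neg[corrupt_head, 0] = rand_ent[corrupt_head]
    neg[~corrupt_head, 1] = rand_ent[~corrupt_head]
    all_edges = np.concatenate([pos, neg])
    labels = np.concatenate([np.ones(n), np.zeros(n)])
    return pos, all_edges, labels
```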
- cogdl.utils.ppr_utils.calc_ppr_topk_parallel(indptr, indices, deg, alpha, epsilon, nodes, topk)[source]
- cogdl.utils.ppr_utils.ppr_topk(adj_matrix, alpha, epsilon, nodes, topk)[source]
Calculate the PPR matrix approximately using the Andersen local push algorithm.
- cogdl.utils.ppr_utils.topk_ppr_matrix(adj_matrix, alpha, eps, idx, topk, normalization='row')[source]
Create a sparse matrix where each node has up to the topk PPR neighbors and their weights.
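cogdl computes this with the Andersen push algorithm; for intuition, a dense closed-form sketch works on small graphs, using \(P = \alpha (I - (1-\alpha) A_{\text{rownorm}})^{-1}\) and then keeping the top-k entries per row (the normalization conventions here are an assumption, and this does not scale beyond toy graphs):

```python
import numpy as np

def topk_ppr_dense(adj, alpha, topk):
    # Row-normalize the adjacency matrix.
    deg = adj.sum(1, keepdims=True)
    A = adj / np.maximum(deg, 1)
    N = adj.shape[0]
    # Closed-form personalized PageRank; each row i is the PPR vector of node i.
    P = alpha * np.linalg.inv(np.eye(N) - (1 - alpha) * A)
    # Keep only the top-k entries per row, zeroing the rest.
    out = np.zeros_like(P)
    for i in range(N):
        keep = np.argsort(P[i])[-topk:]
        out[i, keep] = P[i, keep]
    return out
```

Each full PPR row sums to 1, so the truncated rows sum to at most 1.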
- class cogdl.utils.prone_utils.NodeAdaptiveEncoder[source]
Bases: object
Shrinks negative values in the signal/feature matrix; involves no learning.
- class cogdl.utils.prone_utils.PPR(alpha=0.5, k=10)[source]
Bases: object
Applies sparsification to accelerate computation.
- class cogdl.utils.prone_utils.SignalRescaling[source]
Bases: object
Rescales the signal of each node according to its degree, e.g. sigmoid(degree) or sigmoid(1/degree).
- class cogdl.utils.srgcn_utils.ColumnUniform[source]
Bases: torch.nn.modules.module.Module
- forward(edge_index, edge_attr, N)[source]
Defines the computation performed at every call; see torch.nn.Module.forward.
- class cogdl.utils.srgcn_utils.EdgeAttention(in_feat)[source]
Bases: torch.nn.modules.module.Module
- forward(x, edge_index, edge_attr)[source]
Defines the computation performed at every call; see torch.nn.Module.forward.
- class cogdl.utils.srgcn_utils.Gaussian(in_feat)[source]
Bases: torch.nn.modules.module.Module
- forward(x, edge_index, edge_attr)[source]
Defines the computation performed at every call; see torch.nn.Module.forward.
- class cogdl.utils.srgcn_utils.HeatKernel(in_feat)[source]
Bases: torch.nn.modules.module.Module
- forward(x, edge_index, edge_attr)[source]
Defines the computation performed at every call; see torch.nn.Module.forward.
- class cogdl.utils.srgcn_utils.Identity(in_feat)[source]
Bases: torch.nn.modules.module.Module
- forward(x, edge_index, edge_attr)[source]
Defines the computation performed at every call; see torch.nn.Module.forward.
- class cogdl.utils.srgcn_utils.NodeAttention(in_feat)[source]
Bases: torch.nn.modules.module.Module
- forward(x, edge_index, edge_attr)[source]
Defines the computation performed at every call; see torch.nn.Module.forward.
- class cogdl.utils.srgcn_utils.NormIdentity[source]
Bases: torch.nn.modules.module.Module
- forward(edge_index, edge_attr, N)[source]
Defines the computation performed at every call; see torch.nn.Module.forward.
- class cogdl.utils.srgcn_utils.PPR(in_feat)[source]
Bases: torch.nn.modules.module.Module
- forward(x, edge_index, edge_attr)[source]
Defines the computation performed at every call; see torch.nn.Module.forward.
- class cogdl.utils.srgcn_utils.RowSoftmax[source]
Bases: torch.nn.modules.module.Module
- forward(edge_index, edge_attr, N)[source]
Defines the computation performed at every call; see torch.nn.Module.forward.
- class cogdl.utils.srgcn_utils.RowUniform[source]
Bases: torch.nn.modules.module.Module
- forward(edge_index, edge_attr, N)[source]
Defines the computation performed at every call; see torch.nn.Module.forward.
- class cogdl.utils.srgcn_utils.SymmetryNorm[source]
Bases: torch.nn.modules.module.Module
- forward(edge_index, edge_attr, N)[source]
Defines the computation performed at every call; see torch.nn.Module.forward.