{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Centrality Tutorial" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This notebook covers some of the centrality algorithms implemented in NetworKit. [Centrality](http://en.wikipedia.org/wiki/Centrality) measures the relative importance of a node within a graph. Code for centrality analysis in NetworKit is grouped into the [centrality](https://networkit.github.io/dev-docs/python_api/centrality.html) module." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As always, the first step is to import NetworKit." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "import networkit as nk" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Every algorithm in the `centrality` module has a `run()` method that must be called after initialization in order to execute the algorithm." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Betweenness Centrality\n", "\n", "Betweenness centrality measures the extent to which a vertex lies on shortest paths between other vertices. Vertices with high betweenness may have considerable influence within a network." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Betweenness" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The fastest algorithm for the exact computation of betweenness centrality is Brandes' algorithm, and it is implemented in the [Betweenness](https://networkit.github.io/dev-docs/python_api/centrality.html?highlight=betweenness#networkit.centrality.Betweenness) class. 
The constructor [Betweenness(G, normalized=False, computeEdgeCentrality=False)](https://networkit.github.io/dev-docs/python_api/centrality.html?highlight=betweenness#networkit.centrality.Betweenness) expects a mandatory [networkit.Graph](https://networkit.github.io/dev-docs/python_api/graph.html?highlight=graph#module-networkit.graph), and two optional parameters `normalized` and `computeEdgeCentrality`. If `normalized` is set to true, the centrality scores will be normalized. The parameter `computeEdgeCentrality` should be set to true if edge betweenness should be computed as well. \n", "\n", "We start by reading a graph, and then initializing the algorithm with the parameters we want. Since we use the default parameters, i.e., centrality scores are not normalized and edge betweenness is not computed, we only need to pass a graph to the constructor." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Read graph\n", "G = nk.readGraph(\"../input/power.graph\", nk.Format.METIS)\n", "# Initialize algorithm\n", "btwn = nk.centrality.Betweenness(G)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Run\n", "btwn.run()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# The 10 most central nodes according to betweenness are then\n", "btwn.ranking()[:10]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### ApproxBetweenness" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since exact calculation of betweenness scores is often out of reach, NetworKit provides an approximation algorithm based on path sampling. 
This functionality is provided by the [ApproxBetweenness(G, epsilon=0.01, delta=0.1, universalConstant=1.0)](https://networkit.github.io/dev-docs/python_api/centrality.html?highlight=approx#networkit.centrality.ApproxBetweenness) class. It expects a mandatory undirected [networkit.Graph](https://networkit.github.io/dev-docs/python_api/graph.html?highlight=graph#module-networkit.graph). Here we estimate betweenness centrality in `PGPgiantcompo`, with a guarantee that the error is no larger than an additive constant `epsilon` with a probability of at least 1 - `delta`. The `universalConstant` is used to compute the sample size." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Read graph\n", "G = nk.readGraph(\"../input/PGPgiantcompo.graph\", nk.Format.METIS)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "ab = nk.centrality.ApproxBetweenness(G, epsilon=0.1)\n", "ab.run()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# The 10 most central nodes according to betweenness are then\n", "ab.ranking()[:10]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### EstimateBetweenness" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The [EstimateBetweenness](https://networkit.github.io/dev-docs/python_api/centrality.html?highlight=estimate#networkit.centrality.EstimateBetweenness) class estimates the betweenness centrality according to the algorithm described in `Sanders, Geisberger, Schultes: Better Approximation of Betweenness Centrality`.\n", "Despite the algorithm performing well in practice, no guarantee can be given. 
If a theoretical guarantee is required, use the [ApproxBetweenness](https://networkit.github.io/dev-docs/python_api/centrality.html?highlight=approx#networkit.centrality.ApproxBetweenness) algorithm.\n", "\n", "The constructor `EstimateBetweenness(G, nSamples, normalized=False, parallel_flag=False)` expects a [networkit.Graph](https://networkit.github.io/dev-docs/python_api/graph.html?highlight=graph#module-networkit.graph) and the number of samples as mandatory parameters. Further, the centrality values can be optionally normalized by setting `normalized` to `True`; by default the centrality values are not normalized. The algorithm can be run in parallel by setting `parallel_flag` to true. Running in parallel, however, comes with an additional cost in memory of z + 3z + t. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Read a graph\n", "G = nk.readGraph(\"../input/PGPgiantcompo.graph\", nk.Format.METIS)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Initialize algorithm\n", "est = nk.centrality.EstimateBetweenness(G, 50, True, False)\n", "est.run()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# The 10 most central nodes according to betweenness are then\n", "est.ranking()[:10]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### KadabraBetweenness" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[KadabraBetweenness](https://networkit.github.io/dev-docs/python_api/centrality.html?highlight=kadabra#networkit.centrality.KadabraBetweenness) is an ADaptive Algorithm for Betweenness via Random Approximation presented by [Borassi M., and Natale E.](https://arxiv.org/abs/1604.08553).\n", "NetworKit provides variants of the algorithm that either reduce its memory consumption to O(1) or ensure that deterministic 
results are obtained. For more details about the implementation, see this [paper](https://arxiv.org/abs/1903.09422) by `Van der Grinten A., Angriman E., and Meyerhenke H. (2019)`.\n", "\n", "The constructor [KadabraBetweenness(G, err=0.01, delta=0.1, deterministic=False, k=0, unionSample=0, startFactor=100)](https://networkit.github.io/dev-docs/python_api/centrality.html?highlight=kadabra#networkit.centrality.KadabraBetweenness) only expects a [networkit.Graph](https://networkit.github.io/dev-docs/python_api/graph.html?highlight=graph#module-networkit.graph) as a mandatory parameter. `err` is the maximum additive error guaranteed when approximating the betweenness centrality of all nodes, while `delta` is the probability that the values of the betweenness centrality are within the error guarantee. The parameter `k` expresses the number of top-k nodes to be computed; if set to zero, the approximate betweenness centrality of all nodes will be computed. `unionSample` and `startFactor` are algorithm parameters that are automatically chosen.\n", "\n", "Hence, computing the KadabraBetweenness for all nodes with a maximum additive error of 0.05, and a probability of 0.8 that the values of the betweenness centrality are within that range, can be done as follows:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Read a graph\n", "G = nk.readGraph(\"../input/PGPgiantcompo.graph\", nk.Format.METIS)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Initialize algorithm\n", "kadabra = nk.centrality.KadabraBetweenness(G, 0.05, 0.8)\n", "kadabra.run()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# The 10 most central nodes according to betweenness are then\n", "kadabra.ranking()[:10]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Closeness Centrality\n", "\n", 
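"Before turning to the algorithms, one common formulation may be useful as a reference: in a connected graph with $n$ nodes, the normalized closeness of a node $u$ can be written as\n", "\n", "$$c(u) = \\frac{n - 1}{\\sum_{v \\neq u} d(u, v)},$$\n", "\n", "where $d(u, v)$ denotes the shortest-path distance between $u$ and $v$; this is exactly the inverse of the average distance from $u$ to the other nodes.\n", "\n", 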
"Closeness centrality indicates how close a node is to all other nodes in the network. It is calculated as the inverse of the average shortest-path length from the node to every other node in the network." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Closeness" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The NetworKit Closeness class computes the exact closeness centrality of all the nodes of a graph. The constructor [Closeness(G, normalized=True, variant=networkit.centrality.ClosenessVariant.STANDARD)](https://networkit.github.io/dev-docs/python_api/centrality.html?highlight=closeness#networkit.centrality.Closeness) expects a mandatory [networkit.Graph](https://networkit.github.io/dev-docs/python_api/graph.html?highlight=graph#module-networkit.graph), and two optional parameters, `normalized` and `variant`. The centrality values can be optionally normalized for unweighted graphs by setting `normalized` to True; by default the centrality values are normalized. The parameter `variant` dictates the closeness variant to be used when computing the closeness. Set `variant` to `networkit.centrality.ClosenessVariant.STANDARD` to use the standard definition of closeness, which is defined for connected graphs only, or to `networkit.centrality.ClosenessVariant.GENERALIZED` to use the generalized definition of closeness, which is also defined for non-connected graphs. \n", "\n", "As the graph we will use is weighted, the values should not be normalized. 
Since our graph is also disconnected, we compute its closeness using the generalized closeness variant:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Read a graph\n", "G = nk.readGraph(\"../input/foodweb-baydry.konect\", nk.Format.KONECT)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Initialize algorithm\n", "close = nk.centrality.Closeness(G, False, nk.centrality.ClosenessVariant.GENERALIZED)\n", "close.run()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# The 10 most central nodes according to closeness are then\n", "close.ranking()[:10]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### ApproxCloseness" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since exact calculation of closeness scores is often out of reach, NetworKit provides an approximation of closeness centrality according to the algorithm described in `Cohen et al., Computing Classic Closeness Centrality, at Scale`. \n", "\n", "The constructor expects a mandatory connected [networkit.Graph](https://networkit.github.io/dev-docs/python_api/graph.html?highlight=graph#module-networkit.graph) among other optional values:\n", "[ApproxCloseness(G, nSamples, epsilon=0.1, normalized=False, type=OUTBOUND)](https://networkit.github.io/dev-docs/python_api/centrality.html?highlight=approx#networkit.centrality.ApproxCloseness). `nSamples` dictates the number of samples that should be used when computing the approximate closeness; this value depends on the size of the graph, and the more samples that are used, the better the results are likely to be. `epsilon` can be any value between 0 and $\\infty$-1 and is used for the error guarantee. 
The centrality values can be optionally normalized by setting `normalized` to True; by default the centrality values are not normalized. If G is undirected, `type` can be ignored. Otherwise, use `0` for inbound centrality, `1` for outbound centrality, and `2` for the sum of both." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Read a graph\n", "G = nk.readGraph(\"../input/foodweb-baydry.konect\", nk.Format.KONECT)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "ac = nk.centrality.ApproxCloseness(G, 100)\n", "ac.run()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# 10 most central nodes according to closeness are \n", "ac.ranking()[:10]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### HarmonicCloseness" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Harmonic centrality is a variant of closeness centrality that addresses the problem of disconnected graphs.\n", "The constructor, [HarmonicCloseness(G, normalized=True)](https://networkit.github.io/dev-docs/python_api/centrality.html?highlight=harmon#networkit.centrality.HarmonicCloseness), expects a [networkit.Graph](https://networkit.github.io/dev-docs/python_api/graph.html?highlight=graph#module-networkit.graph) and an optional parameter `normalized` that is set to true by default. If `normalized` is set to true, centrality scores are normalized to values between 0 and 1. 
\n", "\n", "Computing the harmonic closeness of a graph and normalizing the scores can be done as follows:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Read a graph\n", "G = nk.readGraph(\"../input/foodweb-baydry.konect\", nk.Format.KONECT)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "harmonic = nk.centrality.HarmonicCloseness(G, True)\n", "harmonic.run()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# 10 most central nodes according to closeness are\n", "harmonic.ranking()[:10]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### TopHarmonicCloseness" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The [TopHarmonicCloseness(G, k=1, useNBbound=False)](https://networkit.github.io/dev-docs/python_api/centrality.html?highlight=top#networkit.centrality.TopHarmonicCloseness) algorithm finds the exact top k nodes with the highest harmonic closeness centrality. It is faster than computing the harmonic closeness for all nodes. The implementation is based on [Computing Top-k Centrality Faster in Unweighted Graphs, Bergamini et al., ALENEX16](https://arxiv.org/abs/1710.01143). The parameter `k` expresses the number of nodes with the highest centrality scores that the algorithm must find. The algorithm is based on two heuristics; we recommend `useNBbound = False` for complex networks (or networks with small diameter) and `useNBbound = True` for street networks (or networks with large diameters). 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Read a graph\n", "G = nk.readGraph(\"../input/foodweb-baydry.konect\", nk.Format.KONECT)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Initialize algorithm\n", "topHarmClose = nk.centrality.TopHarmonicCloseness(G, 10, useNBbound=False)\n", "topHarmClose.run()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Unlike all the other algorithms we have seen before, `TopHarmonicCloseness` does not have a `ranking` method which stores a sorted list of all centrality scores. Instead, we have the [topkNodesList(includeTrail=False)](https://networkit.github.io/dev-docs/python_api/centrality.html?highlight=topknodes#networkit.centrality.TopHarmonicCloseness.topkNodesList) method. By calling this method, we can print the top `k` central nodes. If `includeTrail` is true, nodes beyond the top k whose closeness centrality is equal to that of the k-th node will also be printed. Note that the resulting vector may then be longer than `k`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# k most central nodes according to closeness are\n", "topHarmClose.topkNodesList()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Alternatively, the [topkScoresList(includeTrail=False)](https://networkit.github.io/dev-docs/python_api/centrality.html?highlight=topkscores#networkit.centrality.TopCloseness.topkScoresList) method may be used."
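, "\n", "To pair each node id with its score, the outputs of the two methods can be zipped together. This is a small sketch; the lists below are hypothetical stand-ins for the actual method output:\n", "\n", "```python\n", "# Hypothetical output of topkNodesList() and topkScoresList()\n", "nodes = [4, 7, 1]\n", "scores = [0.61, 0.58, 0.55]\n", "# Pair each node id with its harmonic closeness score\n", "pairs = list(zip(nodes, scores))  # [(4, 0.61), (7, 0.58), (1, 0.55)]\n", "```"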
] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "topHarmClose.topkScoresList()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### TopCloseness" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The [TopCloseness(G, k=1, first_heu=True, sec_heu=True)](https://networkit.github.io/dev-docs/python_api/centrality.html?highlight=topcloseness#networkit.centrality.TopCloseness) algorithm finds the exact top k nodes with the highest closeness centrality. It is faster than computing the closeness for all nodes. The implementation is based on [Computing Top-k Centrality Faster in Unweighted Graphs, Bergamini et al., ALENEX16](https://arxiv.org/abs/1710.01143). The algorithm is based on two independent heuristics described in the above paper. The parameter `k` expresses the number of nodes with the highest centrality scores that have to be found by the algorithm. If `first_heu` is true, the neighborhood-based lower bound is computed and nodes are sorted according to it. If false, nodes are simply sorted by degree. If `sec_heu` is true, the BFSbound is re-computed at each iteration. 
If false, BFScut is used.\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Read a graph\n", "G = nk.readGraph(\"../input/foodweb-baydry.konect\", nk.Format.KONECT)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Initialize algorithm\n", "topClose = nk.centrality.TopCloseness(G, 10, first_heu=True)\n", "topClose.run()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# k most central nodes according to closeness are\n", "topClose.topkNodesList()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Degree Centrality\n", "\n", "Degree is a centrality measure that counts how many neighbors a node has." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Degree Centrality" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Degree centrality is a centrality measure that counts how many neighbors a node has. Degree centrality can be computed in NetworKit using the [DegreeCentrality(G, normalized=False, outDeg=True, ignoreSelfLoops=True)](https://networkit.github.io/dev-docs/python_api/centrality.html?highlight=degreecentrality#networkit.centrality.DegreeCentrality) class. It expects a [networkit.Graph](https://networkit.github.io/dev-docs/python_api/graph.html?highlight=graph#module-networkit.graph). Set `normalized` to True if the centrality scores should be normalized to values between 0 and 1. If the network is directed, we have two versions of the measure: in-degree is the number of incoming links; out-degree is the number of outgoing links. By default, the out-degree is used. 
\n", "\n", "For an undirected graph, using the default parameters, the degree centrality can be computed as follows:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Read graph \n", "G = nk.readGraph(\"../input/wiki-Vote.txt\", nk.Format.SNAP)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Initialize algorithm\n", "deg = nk.centrality.DegreeCentrality(G)\n", "deg.run()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# 10 most central nodes according to degree centrality are\n", "deg.ranking()[:10]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Eigenvector Centrality and PageRank" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Eigenvector centrality measures the influence of a node in a network by assigning relative scores to each node in the network." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Eigenvector Centrality and PageRank" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Eigenvector centrality and its variant PageRank assign relative importance to nodes according to their connections, incorporating the idea that edges to high-scoring nodes contribute more. PageRank is a version of eigenvector centrality which introduces a damping factor, modeling a random web surfer which at some point stops following links and jumps to a random page. In PageRank theory, centrality is understood as the probability that such a web surfer arrives at a certain page. Our implementation of both measures is based on parallel power iteration, a relatively simple eigensolver.\n", "\n", "We demonstrate it here on the small Karate club graph. 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Read graph \n", "K = nk.readGraph(\"../input/karate.graph\", nk.Format.METIS)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Eigenvector centrality\n", "ec = nk.centrality.EigenvectorCentrality(K)\n", "ec.run()\n", "ec.ranking()[:10] # the 10 most central nodes" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# PageRank\n", "pr = nk.centrality.PageRank(K, damp=0.85, tol=1e-9)\n", "pr.run()\n", "pr.ranking()[:10] # the 10 most central nodes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "By default, PageRank uses the L2 norm as its convergence criterion. Alternatively, one can also use the L1 norm.\n", "Additionally, one can also set the maximum number of iterations." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# PageRank using the L1 norm and at most 100 iterations\n", "pr = nk.centrality.PageRank(K, damp=0.85, tol=1e-9)\n", "pr.norm = nk.centrality.Norm.L1_NORM\n", "pr.maxIterations = 100\n", "pr.run()\n", "pr.ranking()[:10] # the 10 most central nodes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Katz Centrality" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Katz centrality computes the relative influence of a node within a network by measuring the number of its immediate neighbors, and also all other nodes in the network that connect to the node through these immediate neighbors. Connections made with distant neighbors are, however, penalized by an attenuation factor $\\alpha$. 
Each path or connection between a pair of nodes is assigned a weight determined by $\\alpha$ and the distance $d$ between the nodes as $\\alpha^d$.\n", "\n", "The [KatzCentrality(G, alpha=5e-4, beta=0.1, tol=1e-8)](https://networkit.github.io/dev-docs/python_api/centrality.html?highlight=katz#networkit.centrality.KatzCentrality) constructor expects a [networkit.Graph](https://networkit.github.io/dev-docs/python_api/graph.html?highlight=graph#module-networkit.graph) as a mandatory parameter. The parameter `alpha` is the damping of the matrix vector result, while `beta` is a constant value added to the centrality of each vertex. The parameter `tol` dictates the tolerance for convergence. Computing the Katz centrality in NetworKit can be done as follows. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "G = nk.readGraph(\"../input/karate.graph\", nk.Format.METIS)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Initialize algorithm\n", "katz = nk.centrality.KatzCentrality(G, 1e-3)\n", "katz.run()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# 10 most central nodes\n", "katz.ranking()[:10]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Others" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Spanning Edge Centrality" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The spanning centrality of an edge `e` in an undirected graph is the fraction of the spanning trees of the graph that contain the edge `e`. 
The [SpanningEdgeCentrality(G, tol=0.1)](https://networkit.github.io/dev-docs/python_api/centrality.html?highlight=spanning#networkit.centrality.SpanningEdgeCentrality) constructor expects a [networkit.Graph](https://networkit.github.io/dev-docs/python_api/graph.html?highlight=graph#module-networkit.graph) as a mandatory parameter. The `tol` parameter determines the tolerance used for approximation: with probability at least 1-1/n, the approximated scores are within a factor 1+`tol` from the exact scores. \n", "\n", "Computing the spanning edge centrality in NetworKit is demonstrated below." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Read graph \n", "G = nk.readGraph(\"../input/karate.graph\", nk.Format.METIS)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Initialize algorithm\n", "G.indexEdges()\n", "span = nk.centrality.SpanningEdgeCentrality(G, 1e-6)\n", "span.run()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Using the [scores()](https://networkit.github.io/dev-docs/python_api/centrality.html?highlight=scores#networkit.centrality.SpanningEdgeCentrality.scores) method we can get a vector containing the SEC score for each edge in the graph." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "span.scores()[:10]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Approximate Spanning Edge Centrality" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "An algorithm from [Hayashi et al.](https://www.ijcai.org/Proceedings/16/Papers/525.pdf) based on sampling spanning trees uniformly at random approximates the spanning edge centrality of each edge of a graph up to a maximum absolute error. `ApproxSpanningEdge(G, eps=0.1)` expects as input an undirected graph, and the maximum absolute error `eps`. 
The algorithm requires the edges of the graph to be indexed." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": true } }, "outputs": [], "source": [ "# Read a graph\n", "G = nk.graphtools.toUndirected(nk.readGraph(\"../input/foodweb-baydry.konect\", nk.Format.KONECT))\n", "G.indexEdges()\n", "apx_span = nk.centrality.ApproxSpanningEdge(G, 0.05)\n", "_ = apx_span.run()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": true } }, "outputs": [], "source": [ "apx_span.scores()[:10]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Local Clustering Coefficient" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The local clustering coefficient of a vertex in a graph quantifies how close its neighbours are to being a clique (complete graph).\n", "\n", "The [LocalClusteringCoefficient(G, turbo=False)](https://networkit.github.io/dev-docs/python_api/centrality.html?highlight=coeff#networkit.centrality.LocalClusteringCoefficient) constructor expects a [networkit.Graph](https://networkit.github.io/dev-docs/python_api/graph.html?highlight=graph#module-networkit.graph) and a Boolean parameter `turbo` which is by default initialized to false. `turbo` activates a (sequential, but fast) pre-processing step using ideas from [this paper](https://dl.acm.org/citation.cfm?id=2790175). This reduces the running time significantly for most graphs. However, the turbo mode needs O(m) additional memory. 
The turbo mode is particularly effective for graphs with nodes of very high degree and a very skewed degree distribution.\n", "\n", "We demonstrate its use on the same graph as before." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Read graph \n", "G = nk.readGraph(\"../input/karate.graph\", nk.Format.METIS)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Initialize algorithm\n", "lcc = nk.centrality.LocalClusteringCoefficient(G)\n", "lcc.run()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "lcc.ranking()[:10]" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.3" } }, "nbformat": 4, "nbformat_minor": 2 }