Revisiting Graph Algorithms

Anshika Bhargava
5 min read · Jul 17, 2022

Let us revise the time complexities of some of the most important graph algorithms.

Image: A dense network (source: https://pixabay.com/illustrations/network-social-abstract-3139214/)

It has been some time since I started taking interviews, and I often see candidates mess up the time complexities of graph algorithms. Admittedly, they can be tricky to keep handy and confusing to remember, as they often look much of a muchness.

Here, I have tried to cover the most commonly used graph algorithms, along with their time and space complexities and how they are derived. Who knows, this might come in handy someday!

Note that this article will not cover the actual algorithms. I know there are thousands of awesome articles, books and videos that cover that! Consider this as a summary of various complexities.

Prerequisites: This article assumes that you already know these algorithms!

For the rest of the article, we will refer to a graph G, with V vertices and E edges. The graph can be directed or undirected depending on the algorithm, and this will be mentioned alongside each one. We will be considering the adjacency list representation. This is because, for most practical applications, graphs tend to be sparse, i.e., they have a huge number of vertices and a small average vertex degree.

Adjacency List Representation

For a graph G,

  1. The space complexity for adjacency list representation is O(V+E) as we have one list for each vertex and one link for each edge.
  2. An edge can be added in O(1) as the vertex just needs to be added to the list.
  3. To check if there exists an edge from v to w, all the adjacent vertices of v need to be traversed. Hence, the complexity is O(degree(v)). If the graph is a directed graph, it is O(outdegree(v)).
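The three points above can be sketched in a few lines of Python (the class and method names here are illustrative, not from any particular library):

```python
from collections import defaultdict

class Graph:
    """A minimal adjacency-list sketch for an undirected graph."""

    def __init__(self):
        # One list per vertex, one entry per edge endpoint: O(V + E) space.
        self.adj = defaultdict(list)

    def add_edge(self, v, w):
        # Appending to a list is O(1); undirected, so add both directions.
        self.adj[v].append(w)
        self.adj[w].append(v)

    def has_edge(self, v, w):
        # Scans the neighbours of v, hence O(degree(v)).
        return w in self.adj[v]
```

For example, after `g.add_edge(1, 2)`, the check `g.has_edge(2, 1)` walks the (one-element) neighbour list of vertex 2.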

Graph Traversals

Depth First Search

For any graph G,

  1. Time complexity : O(V+E), as each vertex is visited once and all the edges are examined.
  2. Space complexity : O(V), to keep track of the visited vertices, and the recursive stack.
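As a quick refresher, a recursive DFS over a dict-of-lists adjacency representation might look like this (a sketch, not tuned for deep graphs where recursion limits bite):

```python
def dfs(adj, start, visited=None):
    """Depth-first search; returns the set of vertices reachable from start."""
    if visited is None:
        visited = set()
    visited.add(start)            # each vertex is marked exactly once: O(V) space
    for w in adj[start]:          # every edge is examined once overall: O(V + E) time
        if w not in visited:
            dfs(adj, w, visited)
    return visited
```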

Breadth First Search

For any graph G,

  1. Time complexity : O(V+E), as each vertex is visited once and all the edges are examined.
  2. Space complexity : O(V), to keep track of the visited vertices, and the queue to which vertices are pushed.
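The same counting argument is visible in an iterative BFS sketch, where each vertex enters the queue at most once:

```python
from collections import deque

def bfs(adj, start):
    """Breadth-first search; returns vertices in the order they are visited."""
    visited = {start}
    order = []
    queue = deque([start])        # the queue holds at most V vertices: O(V) space
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in adj[v]:          # every edge is examined once: O(V + E) time
            if w not in visited:
                visited.add(w)    # mark on enqueue so nothing is queued twice
                queue.append(w)
    return order
```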

Topological Sorting

Unlike the algorithms mentioned above, topological sorting only makes sense for a directed acyclic graph, commonly known as a DAG. Hence, G here is a DAG.

  1. Time complexity : O(V+E), as DFS traversal is used for finding the topological sort.
  2. Space complexity : O(V), for the additional stack used for storing the order.
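A DFS-based sketch makes the extra O(V) stack explicit: each vertex is pushed only after all of its descendants are finished, so reversing the stack yields a topological order.

```python
def topological_sort(adj):
    """Topological sort of a DAG given as a dict of adjacency lists."""
    visited, stack = set(), []    # the extra stack is the O(V) space cost

    def dfs(v):
        visited.add(v)
        for w in adj.get(v, []):
            if w not in visited:
                dfs(w)
        stack.append(v)           # push v once all its descendants are done

    for v in adj:
        if v not in visited:
            dfs(v)
    return stack[::-1]            # reverse finish order = topological order
```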

Connected Components

Kosaraju — Sharir algorithm

The Kosaraju-Sharir algorithm is used for computing the strongly connected components of a directed graph. (Note that it only gives interesting answers on graphs with cycles: in a DAG, every strongly connected component is a single vertex.)

  1. Time complexity : O(V+E). This is because this algorithm requires running the DFS twice.
  2. Space complexity : O(V). Similar to the DFS and Topological sort algorithms.
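The "DFS twice" structure is easy to see in code. Below is a compact sketch of the variant that runs the first DFS on the reverse graph to get a finish order, then runs a second DFS on the original graph in reverse finish order, collecting one component per tree:

```python
from collections import defaultdict

def kosaraju_scc(adj):
    """Strongly connected components of a directed graph (dict of lists)."""
    rev = defaultdict(list)               # build the reverse graph: O(V + E)
    for v in adj:
        for w in adj[v]:
            rev[w].append(v)

    visited, order = set(), []
    def dfs_order(v):                     # pass 1: finish times on the reverse graph
        visited.add(v)
        for w in rev[v]:
            if w not in visited:
                dfs_order(w)
        order.append(v)
    for v in adj:
        if v not in visited:
            dfs_order(v)

    visited.clear()
    sccs = []
    def dfs_collect(v, comp):             # pass 2: each DFS tree is one component
        visited.add(v)
        comp.append(v)
        for w in adj.get(v, []):
            if w not in visited:
                dfs_collect(w, comp)
    for v in reversed(order):
        if v not in visited:
            comp = []
            dfs_collect(v, comp)
            sccs.append(comp)
    return sccs
```

Two full DFS passes over O(V + E) structures give the O(V + E) time and O(V) auxiliary space quoted above.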

Okay, gear up, because we are now going to add weights to our edges. From here on, our E edges are weighted.

Minimum Spanning Tree (MST) algorithms

Kruskal’s algorithm

Kruskal’s algorithm builds the MST (Minimum Spanning Tree) by repeatedly taking the lightest remaining edge that does not create a cycle.

  1. Time complexity : O(Elog(E)).
    Kruskal’s algorithm involves creating a priority queue to keep the edges sorted, which takes O(Elog(E)). The delete-min operation is then performed up to E times, taking log(E) each time, keeping the total at O(Elog(E)).
    Sometimes you will see the time complexity of Kruskal’s algorithm quoted as O(Elog(V)). This is equivalent to O(Elog(E)), because a graph with V vertices has at most V² edges, so
    log(E) ≤ log(V²) = 2log(V) ∈ O(log(V)).
  2. Space complexity : O(E+V)
    We need a priority queue holding the edges, which is O(E), and a union-find structure to keep track of the vertices, which is O(V).
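As a sketch, sorting the edge list up front plays the role of the priority queue (same O(Elog(E)) cost), and a small union-find with path compression detects cycles. Edge tuples and names here are illustrative:

```python
def kruskal_mst(num_vertices, edges):
    """Kruskal's MST. edges is a list of (weight, u, v) tuples;
    vertices are 0..num_vertices-1. Returns (total_weight, mst_edges)."""
    parent = list(range(num_vertices))    # union-find forest: O(V) space

    def find(x):                          # find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):         # the O(E log E) sorting step
        ru, rv = find(u), find(v)
        if ru != rv:                      # skip edges that would form a cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return total, mst
```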

Prim’s Algorithm

Prim’s algorithm also computes the MST. Unlike Kruskal’s algorithm, Prim’s algorithm grows an MST from a single vertex by repeatedly adding the closest vertex to the already formed tree.

  1. Time complexity : O(Elog(V))
    Prim’s algorithm can have different time complexities depending on how it is implemented. If we use a binary heap as the priority queue of minimum-weight edges, the time complexity can be shown to be O(Elog(V)), which works well for sparse graphs. The running time is dominated by the E decrease-key operations, each taking O(log(V)).
  2. Space complexity : O(V+E)
    O(V) for the priority queue of vertices, plus O(E) for the adjacency list itself.
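Python’s `heapq` has no decrease-key, so a common sketch is the "lazy" variant: push duplicate entries and discard stale ones on pop. This keeps the heap at O(E) entries (hence the same O(Elog(E)) = O(Elog(V)) bound). Here `adj` maps each vertex to a list of `(weight, neighbour)` pairs:

```python
import heapq

def prim_mst(adj, start):
    """Lazy Prim's MST over adj: v -> [(weight, w), ...]. Returns total weight."""
    visited = {start}
    heap = list(adj[start])               # candidate edges leaving the tree
    heapq.heapify(heap)
    total = 0
    while heap and len(visited) < len(adj):
        w, v = heapq.heappop(heap)        # delete-min on the binary heap
        if v in visited:
            continue                      # stale entry: lazy deletion
        visited.add(v)
        total += w
        for edge in adj[v]:
            if edge[1] not in visited:    # push new candidate edges
                heapq.heappush(heap, edge)
    return total
```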

Shortest Path Algorithms

Dijkstra’s algorithm

Dijkstra’s algorithm is used to calculate shortest paths when all edge weights are non-negative. The algorithm can fail if any negative-weight edge is present.

  1. Time Complexity : O(Elog(V))
    It is no coincidence that the time complexities of Dijkstra’s and Prim’s algorithms match. If we observe carefully, the two algorithms are quite similar: Prim’s algorithm repeatedly picks the edge closest to the tree, while Dijkstra’s algorithm repeatedly picks the vertex closest to the source. If we use a binary heap as our priority queue:
    — there are at most V insert operations, one for each vertex, each taking log(V) -> Vlog(V)
    — there are at most V delete-min operations, one for each vertex, each taking log(V) -> Vlog(V)
    — there are at most E decrease-key operations, each taking log(V) -> Elog(V)
    That brings us to O(Vlog(V) + Elog(V)), i.e. O(Elog(V)) for a connected graph.
  2. Space complexity : O(V)
    For the priority queue
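With `heapq`, the decrease-key is again simulated by pushing a fresh entry and skipping stale ones. A sketch, with `adj` mapping each vertex to `(weight, neighbour)` pairs as before:

```python
import heapq

def dijkstra(adj, source):
    """Shortest distances from source over adj: v -> [(weight, w), ...]."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, v = heapq.heappop(heap)            # delete-min: O(log V)
        if d > dist.get(v, float('inf')):
            continue                          # stale entry: v already settled
        for w, u in adj.get(v, []):
            nd = d + w
            if nd < dist.get(u, float('inf')):
                dist[u] = nd                  # "decrease-key" via a fresh push
                heapq.heappush(heap, (nd, u))
    return dist
```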

Bellman Ford Algorithm

This algorithm is used to calculate the shortest path if there are negative weights present in the graph, provided there are no negative weight cycles. Well, that makes sense, as we can just go on that cycle forever to bring the shortest path to -∞.

  1. Time Complexity : O(EV)
    In the Bellman-Ford algorithm, we relax every edge V−1 times, giving O(EV). This bound falls directly out of the DP formulation: after round i, all shortest paths using at most i edges are correct.
  2. Space complexity : O(V)
    For the DP array.
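The DP array and the V−1 relaxation rounds are explicit in a sketch; a V-th pass that still relaxes an edge signals a negative-weight cycle:

```python
def bellman_ford(num_vertices, edges, source):
    """Shortest distances from source; edges is a list of (u, v, weight).
    Raises ValueError if a negative-weight cycle is reachable."""
    INF = float('inf')
    dist = [INF] * num_vertices           # the O(V) DP array
    dist[source] = 0
    for _ in range(num_vertices - 1):     # V-1 relaxation rounds: O(VE) total
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                 # extra pass: negative-cycle check
        if dist[u] != INF and dist[u] + w < dist[v]:
            raise ValueError("negative-weight cycle detected")
    return dist
```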

It is interesting to see how most of these algorithms are just special cases of one another, and it is definitely amazing how they were developed. So, if you have managed to make it all the way here, thank you!

I would like to end with a quote from Edsger W. Dijkstra:

Progress is possible only if we train ourselves to think about programs without thinking of them as pieces of executable code.

