Research Highlights

Technical Perspective: Shortening the Path to Designing Efficient Graph Algorithms

"Negative-Weight Single-Source Shortest Paths in Near-Linear Time," by Aaron Bernstein et al., gives an algorithm for negative-length shortest paths (with poly-bounded edge weights) that runs in nearly linear time.


Graph theory is an integral component of algorithm design that underlies sparse matrices, relational databases, and networks. Improving the performance of graph algorithms has direct implications for applications ranging from operations research to computational biology. As a result, the design of faster graph algorithms has received extensive attention, leading to tools with wide-reaching implications.

One of the most well-studied problems in graph algorithms is the shortest path problem: given weights on the edges, compute a path of minimum total weight from vertex s to vertex t. When all weights are non-negative, the famous Dijkstra’s algorithm runs in time that is nearly linear in the number of edges. Here, nearly linear time refers to a running time of about m, the number of edges; we use n to denote the number of vertices. Perhaps just as importantly, Dijkstra’s algorithm is taught in almost every introductory data structures course, often as an introduction to greedy algorithms and priority queues. On the other hand, shortest paths also represent one of the biggest mysteries in algorithmic graph theory once negative weights are allowed. The O(mn) running time of the Bellman-Ford algorithm from the 1950s was improved to about m√n for graphs with poly-bounded edge weights in the mid-1980s, and further progress only started again in 2017.
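For readers who have not seen it recently, Dijkstra’s algorithm can be sketched in a few lines. This is a minimal Python version using a binary heap; the graph representation (a dict of adjacency lists) is an illustrative choice, not anything from the paper.

```python
import heapq

def dijkstra(graph, s):
    """Single-source shortest paths for non-negative edge weights.

    graph: dict mapping each vertex to a list of (neighbor, weight) pairs.
    Returns a dict of shortest distances from s (unreachable vertices omitted).
    """
    dist = {s: 0}
    heap = [(0, s)]  # entries are (distance estimate, vertex)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path to u was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

With a binary heap this runs in O(m log n) time; note that the correctness argument (each popped vertex is final) breaks down as soon as a single edge weight is negative, which is exactly the regime the accompanying paper addresses.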

The accompanying paper gives an algorithm for negative-length shortest paths (with poly-bounded edge weights) that runs in nearly linear time. This algorithm delves into some of the most important primitives in graph algorithms: iterative convergence to solutions, graph partitioning, and multi-source shortest path structures. On each of these topics, it offers alternative approaches that can significantly change the way graph algorithms are approached.

The starting point of this algorithm is a highly intuitive view of iterative convergence. Iterative methods play increasingly important roles in both theoretical and practical algorithm design: instead of directly solving a problem, more and more algorithms gradually converge to optimal solutions by solving a sequence of intermediate problems. In this paper, such intermediate problems are constructed by removing the lower-order bits of the weights. Each such problem can in turn be viewed as finding vertex-based weight adjustments that make all truncated weights non-negative. This view suggests that similar, more graph-oriented interpretations may exist for many other widely used numerical primitives.
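The "vertex-based weight adjustments" mentioned above are the classical idea of vertex potentials (also called price functions, as in Johnson’s algorithm): replacing each weight w(u,v) by w(u,v) + φ(u) − φ(v) changes every s-to-t path by the same amount φ(s) − φ(t), so shortest paths are preserved. A minimal sketch, with an illustrative hand-picked potential rather than anything computed by the paper’s method:

```python
def reweight(graph, phi):
    """Apply vertex potentials: w'(u,v) = w(u,v) + phi[u] - phi[v].

    The telescoping sum along any path changes its total weight by exactly
    phi[start] - phi[end], so the set of shortest paths is unchanged.
    """
    return {u: [(v, w + phi[u] - phi[v]) for v, w in edges]
            for u, edges in graph.items()}

# A graph with one negative edge, and a potential that makes all
# reweighted edges non-negative (values chosen by hand for illustration).
graph = {"s": [("a", 2), ("b", 4)], "a": [("b", -1)], "b": []}
phi = {"s": 0, "a": 2, "b": 1}
g2 = reweight(graph, phi)
```

Once such a potential is known, all reweighted edges are non-negative and Dijkstra’s algorithm applies; finding the potential efficiently is the hard part.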

Like many other highly efficient algorithms, this one derives its efficiency from a divide-and-conquer scheme. Such schemes have long been known to be difficult on graphs: the expressiveness of edge connections means any partitioning of the vertices will leave a substantial fraction of the edges between components. This is even more difficult on directed graphs: many natural characterizations of directed graph decompositions have well-known counterexamples. To tackle these challenges, this algorithm takes the novel approach of setting the divide-and-conquer base case to acyclic graphs. Negative-weight shortest paths on such graphs can be computed in linear time by processing vertices in topological order. So, a core primitive of this algorithm is to produce vertex decompositions along with a vertex ordering that works well for most inter-cluster edges. This decomposition and its variants have already been successfully applied to many other graph problems and will likely find many more applications.
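The acyclic base case is simple enough to sketch directly. Because a topological order finalizes each vertex before its out-edges are relaxed, a single linear pass suffices even with negative weights; this is a textbook routine, shown here assuming the topological order is given.

```python
def dag_shortest_paths(order, graph, s):
    """Shortest paths from s in a DAG.

    order: the vertices in topological order.
    graph: dict mapping each vertex to a list of (neighbor, weight) pairs;
           negative weights are fine, since each vertex's distance is final
           by the time its out-edges are relaxed.
    """
    INF = float("inf")
    dist = {v: INF for v in order}
    dist[s] = 0
    for u in order:
        if dist[u] == INF:
            continue  # u is unreachable from s
        for v, w in graph.get(u, []):
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist
```

The total work is O(n + m): one relaxation per edge, in contrast to the O(mn) relaxations of Bellman-Ford on general graphs.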

At a higher level, this paper raises the intriguing question of whether it’s possible to develop highly efficient algorithms for directed graphs without creating intermediate undirected problems. Even without taking weights into account, directed graphs are far more complex than undirected ones: the reachability structure of a directed graph can encode n² bits, far more than the n log n bits needed to represent the connected components of an undirected graph. From a numerical perspective, directed graphs lead to asymmetric matrices, which are far more difficult to handle than their symmetric counterparts. As a result, the key to many recent advances in directed graph algorithms is to first move to the undirected setting using iterative methods, and then apply the much richer families of undirected graph approximations to that intermediate problem. The approach taken in this paper, of working with directed graphs throughout the algorithm, represents a drastically different alternative. It shows that with the right graph decomposition tools, one can stay entirely within the scope of directed graphs.

The combination of ideas in this nearly linear time negative-weight shortest path algorithm represents the culmination of several lines of work on developing a deeper fundamental understanding of graph decompositions and shortest path structures. By making progress on one of the most intriguing and well-studied graph theoretic problems, the paper proposes a new way of designing efficient graph algorithms that will go significantly beyond shortest paths.
