 # Communications of the ACM

Research highlights

# Technical Perspective: Every Graph Is Essentially Sparse


The following paper by Batson, Spielman, Srivastava, and Teng surveys one of the most important recent intellectual achievements of theoretical computer science, demonstrating that every weighted graph is close to a sparse graph. As is often the case in key mathematical discoveries, a major part of the new contribution is a definition rather than just a theorem: here the authors describe an appropriate notion of "closeness" between graphs, called spectral similarity. This notion is fine enough so that graphs that are close to each other share certain properties which are crucial for a variety of algorithmic tasks, yet at the same time it can be argued that every graph is close to a graph which has few edges.

In the present (very general) context, an n-vertex weighted graph is nothing more than an n × n symmetric matrix G = (g_ij) consisting of nonnegative real entries, that is, g_ij = g_ji ≥ 0 for every i, j ∈ {1, ..., n}. The underlying combinatorial graph induced by G corresponds to its support: simply draw an edge joining a pair of vertices i, j ∈ {1, ..., n} if and only if the entry g_ij is strictly positive. This is a general template for modeling pairwise interactions between n items, where the quantity g_ij measures the intensity of the interaction between the ith item and the jth item.
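This template is easy to make concrete. The following minimal sketch (the matrix values are invented for illustration) represents a weighted graph as a symmetric nonnegative matrix and extracts its support, the underlying combinatorial graph:

```python
import numpy as np

# A 4-vertex weighted graph as a symmetric nonnegative matrix G = (g_ij).
# The weights here are arbitrary example values.
G = np.array([
    [0.0, 2.0, 0.0, 1.0],
    [2.0, 0.0, 3.0, 0.0],
    [0.0, 3.0, 0.0, 4.0],
    [1.0, 0.0, 4.0, 0.0],
])
assert np.allclose(G, G.T) and (G >= 0).all()

# The underlying combinatorial graph is the support of G:
# an edge {i, j} is present if and only if g_ij > 0.
edges = [(i, j) for i in range(len(G)) for j in range(i + 1, len(G)) if G[i, j] > 0]
print(edges)  # [(0, 1), (0, 3), (1, 2), (2, 3)]
```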

A new structural paradigm for graphs would potentially impact numerous areas of research, since graphs permeate computer science, mathematics, and the sciences in general. At this level of generality, then, can it possibly be true that not all of the important structural results for weighted graphs were discovered decades ago? The authors' remarkable work over the past years demonstrates that the answer to this question is a resounding "yes."

Consider a graph G as a template for computing a certain "energy": whenever one assigns a real value x_i to each vertex i, the graph outputs the quantity Σ_{i<j} g_ij (x_i − x_j)². As one ranges over all possible choices of real numbers x_1, ..., x_n, this quantity encodes a lot of useful information about the structure of the graph G. For example, by restricting to the special case when the x_i are either 0 or 1, one recovers the entire cut structure of the graph, that is, for every subset S of the vertices, this quantity is nothing more than the total weight of the edges joining S and its complement (the size of the boundary of S in the graph G).
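A small sketch makes the cut observation tangible: evaluating the energy at a 0/1 indicator vector of a subset S returns exactly the total weight crossing the cut (the example matrix below is an invented illustration, not from the paper):

```python
import numpy as np

def energy(G, x):
    """The quadratic form sum over i < j of g_ij * (x_i - x_j)^2."""
    n = len(G)
    return sum(G[i, j] * (x[i] - x[j]) ** 2 for i in range(n) for j in range(i + 1, n))

# Example weighted graph (arbitrary illustrative weights).
G = np.array([
    [0.0, 2.0, 0.0, 1.0],
    [2.0, 0.0, 3.0, 0.0],
    [0.0, 3.0, 0.0, 4.0],
    [1.0, 0.0, 4.0, 0.0],
])

# Indicator vector of S = {0, 1}: the energy is the total weight of the
# edges crossing between S and its complement, here g_03 + g_12 = 1 + 3.
x = np.array([1.0, 1.0, 0.0, 0.0])
print(energy(G, x))  # 4.0
```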

Given a small positive number ε, two graphs G = (g_ij) and H = (h_ij) on the same vertex set {1, ..., n} are said to be (1 + ε)-spectrally similar if the ratio between Σ_{i<j} g_ij (x_i − x_j)² and Σ_{i<j} h_ij (x_i − x_j)² is between 1 and 1 + ε for all choices of real numbers x_1, ..., x_n.
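One can probe this definition numerically. The sketch below (the helper names are my own) estimates the worst-case ratio of the two quadratic forms over random test vectors; scaling every weight of G by 1.1 produces a graph that is exactly (1.1)-spectrally similar to G, which the empirical check recovers:

```python
import numpy as np

def laplacian(G):
    """Graph Laplacian: x @ L @ x equals the energy sum_{i<j} g_ij (x_i - x_j)^2."""
    return np.diag(G.sum(axis=1)) - G

def similarity_ratio(G, H, trials=1000, seed=0):
    """Empirical lower bound on the spectral-similarity factor: the largest
    ratio (in either direction) of the two quadratic forms over random x."""
    LG, LH = laplacian(G), laplacian(H)
    rng = np.random.default_rng(seed)
    worst = 1.0
    for _ in range(trials):
        x = rng.standard_normal(len(G))
        qg, qh = x @ LG @ x, x @ LH @ x
        worst = max(worst, qg / qh, qh / qg)
    return worst

# Example graph (arbitrary weights); H scales every weight by 1.1.
G = np.array([[0, 2, 0, 1], [2, 0, 3, 0], [0, 3, 0, 4], [1, 0, 4, 0]], dtype=float)
H = 1.1 * G
print(round(similarity_ratio(G, H), 6))  # 1.1
```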

The multiyear effort of the authors yielded the following beautiful discovery: every graph G is (1 + ε)-spectrally similar to a graph H with at most 4n/ε² edges. Thus, the matrix H reproduces the quantities Σ_{i<j} g_ij (x_i − x_j)² with high accuracy, and at the same time the average number of nonzero entries per row of H is bounded by a quantity that does not grow with n. The proof of this fact is constructive, yielding a deterministic polynomial time algorithm that takes G as input and outputs its sparse approximation H. For certain applications, one requires faster algorithms whose running time is nearly linear, and the authors show how to obtain such algorithms if one uses randomization and allows for output graphs H that have average degree that grows polylogarithmically with n.
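To convey the flavor of randomized sparsification, here is a deliberately naive sketch: sample edges with probability proportional to weight and reweight so the Laplacian is preserved in expectation. This is emphatically not the authors' method (their deterministic construction uses a potential-function argument, and their randomized ones use more refined sampling probabilities); it only illustrates the sample-and-reweight idea:

```python
import numpy as np

def sample_sparsifier(G, q, seed=0):
    """Naive sketch: draw q edges with probability proportional to weight,
    reweighting each sample by total/q so that E[H] = G."""
    n = len(G)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n) if G[i, j] > 0]
    weights = np.array([G[i, j] for i, j in edges])
    total = weights.sum()
    rng = np.random.default_rng(seed)
    H = np.zeros_like(G)
    for k in rng.choice(len(edges), size=q, p=weights / total):
        i, j = edges[k]
        H[i, j] += total / q
        H[j, i] += total / q
    return H

# Dense example: the complete graph on 12 vertices has 66 edges.
n = 12
G = np.ones((n, n)) - np.eye(n)
H = sample_sparsifier(G, q=30)
print(int((np.triu(H) > 0).sum()))       # at most 30 distinct edges survive
print(bool(np.isclose(H.sum(), G.sum())))  # total edge weight is preserved: True
```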

Many applications of these results have already been discovered, and surely more will be found in years to come. The authors present some of these applications in the following article, including the design of a remarkable nearly linear time algorithm for approximate solution of diagonally dominant symmetric linear systems. Since the authors focus on applications to algorithm design, they do not describe the variety of mathematical applications of their work that have been discovered in recent years. For example, Barvinok recently used their work to approximate every centrally symmetric convex body by a polytope with few vertices.
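For context on the linear-systems application, the sketch below solves a small diagonally dominant symmetric system with plain conjugate gradient. This classical iterative method is not the nearly linear time solver described in the paper; it merely shows the kind of system (here, a path-graph Laplacian plus the identity, an invented example) that such solvers target:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Textbook conjugate gradient for a symmetric positive-definite system Ax = b."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# A diagonally dominant symmetric matrix: path-graph Laplacian plus identity.
n = 50
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A = L + np.eye(n)  # strictly diagonally dominant, hence positive definite
b = np.ones(n)
x = conjugate_gradient(A, b)
print(bool(np.allclose(A @ x, b)))  # True
```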

The ideas and results detailed by Batson, Spielman, Srivastava, and Teng are important due to their generality and wide applicability. But they are a major intellectual achievement mainly due to the new ideas that are introduced in the proofs of their results. While related questions have been studied by mathematicians and computer scientists before, the method that the authors use is highly original and manifestly different from previous approaches. Their strategy for solving such questions introduced a new paradigm: rather than using their results only as a "black box," striking applications have been found by delving into their proof technique and adapting it to other problems (for example, this leads to a very original new approach to the important restricted invertibility principle of Bourgain and Tzafriri, and yields new results on random matrices).

The work presented here exemplifies the best of modern research on theoretical computer science, as it reshapes our mathematical understanding and in tandem leads to the design of fast new algorithms.

### Author

Assaf Naor (naor@cims.nyu.edu) is a professor of mathematics in the Courant Institute of Mathematical Sciences at New York University.