It was a major surprise when, in 2010, Andreas Björklund discovered what many previously thought impossible: a significantly improved algorithm for the famous *Hamiltonian path* problem, also known as Hamiltonicity.^{4} Hamiltonicity asks if a given graph contains a path that goes through each vertex exactly once, as illustrated in Figure 1.

### Key Insights

- There has recently been impressive progress—after nearly 50 years of stagnation—in algorithms that find solutions for certain hard computational problems, including the famous Hamiltonian path problem.
- This progress is due to a few core ideas that have found several applications. A unifying theme is algebra: we “transform” the given problem into a more general algebraic format, then solve the corresponding algebraic problem that arises.
- This article walks the reader through some of these exciting developments and the underlying ideas. It also puts them in context with the discovery process that led to them, highlighting the role of parameterization.

Hamiltonicity was one of the first problems shown to be NP-complete by Karp.^{20} The only known algorithms for NP-complete problems require time scaling *exponentially* with the size of the input. It is believed they cannot be solved much faster in general, although the possibility has not been ruled out.^{17} Given an undirected graph on *n* vertices, Björklund’s algorithm can find a Hamiltonian path or report that no such path exists in *O**(1.657^{n}) time.^{a} The algorithm still runs in exponential time but it is much faster than the *O**(2^{n}) running time of the previously fastest algorithm, known since the 1960s.^{3,19}

Hamiltonicity is a prominent algorithmic problem. Many researchers before Björklund have tried their hand at it, without success. But some of the tools Björklund used did not become available until 2009 thanks to progress in the *k*-path problem, a related problem in the context of *parameterized algorithms.*

**A parameterized race and a conjecture.** The *k*-path problem is a natural parameterized analogue of Hamiltonicity. The goal now is to find in the given graph a path of length *k* for some specified value of *k*, rather than a path of length *n.* It could be argued that, from a practical standpoint, the *k*-path problem is better motivated than Hamiltonicity. Algorithms designed specifically for the *k*-path problem have actually been used to detect signaling pathways in large protein interaction networks.^{29}

Most NP-complete problems have similar parameterizations, where the parameter *k* measures the *solution size* of the given instance. This is natural because in practice, the solution to an NP-hard problem is often small relative to the size of the overall instance. Downey and Fellows^{12} observed that some parameterized problems appear to be “easier” than others. To capture that, they defined the class of *fixed parameter tractable* (FPT) problems. The definition of FPT is rather simple. The running time of a parameterized algorithm should clearly depend on the input size *n* and the parameter *k.* But the dependence should be somewhat special: a problem is defined to be FPT if it can be solved by an algorithm in *O**(*f*(*k*)) time, that is, in time *f*(*k*) times a polynomial in *n.*

The FPT notion has proven to be extremely insightful and has led to a more detailed understanding of problem hardness. It is believed that not all NP-hard problems are FPT; a prominent example is the *k*-clique problem, where the goal is to detect a *k*-subset of the network nodes that are all pairwise connected (see Figure 2). The best known algorithm for *k*-clique is not much better than exhaustive search, which takes time *O*(*n*^{k}). However, experience shows that when a problem is shown to be FPT, algorithmic progress has just begun as it is often the case that the function *f*(*k*) in the running time can be improved.
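Exhaustive search for a *k*-clique is short to state in code. A minimal sketch in Python (the graph, names, and helper are ours, for illustration): it tries all *O*(*n*^{k}) vertex subsets of size *k*.

```python
from itertools import combinations

def has_k_clique(n, edges, k):
    """Exhaustively test all k-subsets of an n-vertex graph: O(n^k) subsets,
    each checked in O(k^2) edge lookups."""
    adj = {frozenset(e) for e in edges}
    return any(
        all(frozenset((u, v)) in adj for u, v in combinations(subset, 2))
        for subset in combinations(range(n), k)
    )

# A 4-cycle contains an edge (a 2-clique) but no triangle (3-clique).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(has_k_clique(4, edges, 2))  # True
print(has_k_clique(4, edges, 3))  # False
```

The point of the sketch is the exponent: the outer loop ranges over all size-*k* subsets, so the running time grows as *n*^{k} rather than *f*(*k*) times a polynomial in *n.*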

The *k*-path problem is a prime example. The problem was shown to be FPT by Monien^{25} who described an algorithm running in *O**(*k*!) time. This was later improved to *O**((2*e*)^{k}) by Alon et al.^{2} and to *O**(4^{k}) by Chen et al.^{10} Each of these steps represents the introduction of a significant new idea. Similarly, for many FPT problems there are now *races* for faster parameterized algorithms,^{27} which have enriched the field of algorithm design with new techniques.

However, with each improvement, one is faced with the question: Is further progress possible? Known algorithms for the “parent” NP-complete problem can be a valuable guide to answering this question. This is because their running time sets a clear target for the parameterized algorithm. In the case of Hamiltonicity, we know of an algorithm that runs in *O**(2^{n}) time. It is then reasonable to conjecture that there is an algorithm for *k*-path that runs in *O**(2^{k}) time, “matching” the Hamiltonian path algorithms for the extreme value of the parameter.

**Roadmap.** The development of the algorithmic techniques we are about to discuss was largely driven by the *k*-path conjecture. But for the sake of simplicity we will illustrate them in the context of another important NP-complete problem known as the 3D-matching problem, which has also played a central role. In fact, the Hamiltonicity algorithm was preceded by another record-breaking algorithm for the 3D-matching problem, also due to Björklund.^{5}

In this article, our first step toward faster algorithms is what we call an *algebraization* of the combinatorial problem, that is, capturing the problem as a question about the existence of certain monomials in a polynomial. We discuss one algebraization of the 3D-matching problem later. Given the polynomial, the original combinatorial question becomes an issue of *algebraic detection* of monomials; that is, extracting information from the polynomial by assigning values to its variables. Looking for assignments that fit our purpose will lead us to calculations modulo 2, over an algebra in which sums of pairs cancel out. This in turn will present us with a problem of *unwanted cancellations* of monomials. We will solve it with the method of *fingerprints*, which augments the 3D-matching polynomial in order to ensure no unwanted cancellations occur.

The *k*-path conjecture was answered in the affirmative via a generalization of algebraic detection with fingerprints, which also yielded many other faster parameterized algorithms. These ideas are reviewed later, as is Björklund’s key twist that helped unlock the faster Hamiltonicity algorithm. While still employing the method of fingerprints in a relaxed way, Björklund treated computation modulo 2 not just as a nuisance, but also as a resource, by presenting algebraizations where many *unwanted monomials cancel out* because they come in pairs. In order to derive these sharper algebraizations that *exploit cancellations*, Björklund tapped into the power of algebraic combinatorics, which studies graphs via linear algebra. Faster algorithms for several other parameterized problems followed.

### Algebraization

Consider the following fictional airline scheduling problem: on any given day there is a set *X* of designated captains, a set *Y* of designated second officers, and a set *Z* of destinations. Each captain and second officer declare a number of preferred destinations. The airline would like to accommodate as many preferences as possible; but at the same time, they would like to match on the same flight a captain and a second officer who have flown together before. This creates a number of triples: if captain *x* has previously flown with second officer *y* and they would both like to fly to destination *z*, they define a triple {*x, y, z*}. Of course, any given captain, second officer or destination can appear in many triples. But to maximize utility, the airline would like to select a large matching, that is, a subset of triples that are pairwise disjoint; based on it they can then schedule the personnel. The NP-complete problem asks for a matching of maximum size, while the parameterized problem asks for a matching with at least *k* triples, which we also call a *k*-3D matching. An example of the problem is shown in Figure 3.

**A polynomial for 3D-matching.** It was probably understood by many before it was made explicit in Koutis^{21} that we can view the *k*-3D-matching problem through an algebraic lens. Let us explain how, by means of the example in Figure 3.

First, we view as a variable each element of the sets *X, Y, Z.* That is we have the variables *X* = {*x*_{1}, *x*_{2}, *x*_{3}}, *Y* = {*y*_{1}, *y*_{2}, *y*_{3}} and *Z* = {*z*_{1}, *z*_{2}, *z*_{3}, *z*_{4}}. Then for each triple we construct a monomial, by taking the product of the corresponding variables. In our example of Figure 3, we have the monomials.

We also define the *instance polynomial P*_{1} to be the sum of these monomials. Finally, we set *P*_{k} = (*P*_{1})^{k}; we will call *P*_{k} the *k*th *encoding polynomial.* For instance, for the 2nd encoding polynomial, we have

To see the motivation behind these definitions, consider the expansion of *P*_{2} into a sum-product form.

The monomials in the sum-product expansion fall into two basic classes. In the solution part of the expansion, monomials that correspond to solutions of the problem are linear; the product does not contain squares or other high powers of variables. As an example, the monomial *x*_{2}*x*_{3}*y*_{1}*y*_{3}*z*_{1}*z*_{4} in Equation (2.1) is linear, and corresponds to selecting the second and fourth triples. In the complementary non-solution part, each nonlinear monomial corresponds to a “non-solution,” that is, a choice of non-disjoint triples.

This construction can be generalized to any instance of the problem; the only difference would be that if we want to check for a *k*-matching, we would have to look at the sum-product expansion of the polynomial *P*_{k}. To summarize:

**Observation.** *An instance of the 3D-matching problem contains a matching of size k if and only if the sum-product expansion of the kth encoding polynomial P*_{k} *contains a linear monomial.*

Therefore, to solve the 3D-matching problem, it would suffice to expand the polynomial *P*_{k} into a sum of products and check if the sum contains a linear monomial. However, the number of possible monomials in *N* variables, for *N* = |*X*| + |*Y*| + |*Z*|, is much larger than *O**(2^{N}), so the simple idea of expanding *P*_{k} into a sum of products does not give a good algorithm; indeed, an *O**(2^{N}) *dynamic programming* algorithm is already known.

**The dynamic programming algebra.** Dynamic programming often involves inductive definitions that are tedious to state; the analogue in our algebraic setup is perhaps more intuitive. It can be naturally viewed as a “truncated” expansion of the polynomial *P*_{k} into a sum-product form. More concretely, imagine expanding *P*_{k} slowly, one multiplication at a time. We observe that monomials containing squared variables can be thrown away as soon as they are formed, because they do not affect the presence of linear monomials in the full sum-product expansion of *P*_{k}. There are 2^{N} linear monomials in *N* variables. This implies the “truncated” expansion can be carried out in *O**(2^{N}) time. If there is at least one monomial left in the final truncated sum-product form of *P*_{k}, we can conclude the given instance contains a 3D matching of size *k.* Otherwise, it does not.

More formally, we are computing the sum-product expansion of *P*_{k} in an extended *dynamic programming algebra of polynomials* that has all the usual rules for multiplication and addition of polynomials, plus the additional rule that *all squared variables evaluate to zero.* Notice this implies that all nonlinear monomials also evaluate to zero. We can thus recast in algebraic terms our observation regarding linear monomials of *P*_{k}:

*An instance of the 3D-matching problem contains a matching of size k if and only if the kth encoding polynomial P*_{k} *is not identical to zero in the dynamic programming algebra.*
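The truncated expansion is easy to sketch in code: represent each square-free monomial as a bitmask over the *N* variables, and discard any product whose factors share a variable. The triples below are hypothetical; this is our own illustration of the dynamic programming algebra, not a published implementation.

```python
def has_k_matching(triples, k):
    """Expand P_k = (P_1)^k one multiplication at a time, discarding any
    monomial with a squared variable (i.e., overlapping bitmasks).
    At most 2^N distinct masks can ever survive, giving O*(2^N) time."""
    masks = {0}  # the empty monomial
    for _ in range(k):
        masks = {m | t for m in masks for t in triples if m & t == 0}
    return bool(masks)  # any surviving monomial certifies a k-matching

# Elements x1, x2, y1, y2, z1, z2 numbered 0..5; triples as bitmasks.
t = lambda *elems: sum(1 << e for e in elems)
triples = [t(0, 2, 4), t(1, 3, 5), t(0, 3, 5)]
print(has_k_matching(triples, 2))  # True: the first two triples are disjoint
print(has_k_matching(triples, 3))  # False: 3 disjoint triples need 9 elements
```

Note that the set `masks` plays exactly the role of the dynamic programming algebra: a product with a repeated variable is never formed, which is the algebraic rule that squared variables evaluate to zero.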

Note that we can have *N* = 3*k*, since all elements of *X* ∪ *Y* ∪ *Z* could participate in a 3D matching. However, if we are merely looking for a 3D matching containing only *k* triples, where 3*k* may be much smaller than *N*, then we might set as our target an *O**(2^{3k}) time parameterized algorithm.

**Parameterizing via assignments.** The goal here is to describe a first parameterized algorithm for the 3D-matching problem, which will demonstrate the use of assignments of the variables of the polynomial in order to extract information from it.

The problem with “parameterizing” the *O**(2^{N}) algorithm lies clearly in the number of variables *N.* Trying to deal with this problem, Alon, Yuster and Zwick^{2} came up with a method known as color coding. Their solution can be understood as an assignment as follows. Going back to the earlier example, suppose we are interested in finding a matching of size 2. Then, as discussed, all linear monomials in the sum-product expansion of *P*_{2} have degree six; they are products of six distinct variables. Alon et al. proposed the introduction of a new set *W* containing exactly six variables (more generally, 3*k* variables for a *k*-matching). So, let *W* = {*w*_{1}, *w*_{2}, …, *w*_{6}}.

We then perform a random assignment of the variables in *W* to the variables in *X, Y, Z.* We can also think of this assignment as a random coloring of the *N* variables with 3*k* colors (six colors, in our example). Let us consider one such assignment:

By just substituting in Equation (2.1), *P*_{2}(*X, Y, Z*) becomes now a polynomial in the variables *W*:

It can be seen that monomials in *P*_{k}(*X, Y, Z*) get mapped to monomials in *P*_{k}(*W*). If a monomial in *P*_{k}(*X, Y, Z*) is not linear, the same holds for the corresponding monomial in *P*_{k}(*W*). In contrast, a linear monomial of *P*_{k}(*X, Y, Z*) may or may not survive as a linear monomial in *P*_{k}(*W*). In our example, all but one of the linear monomials are mapped to a linear monomial in *P*_{2}(*W*). So detecting a linear monomial in *P*_{k}(*W*) allows us to infer its presence in *P*_{k}(*X, Y, Z*) as well, and consequently the existence of a *k*-matching in our original problem. The converse is not necessarily true, as it may be the case that no linear monomial survives the assignment.

We can evaluate *P*_{k}(*W*) in the dynamic programming algebra as we did with *P*_{k}(*X, Y, Z*). Because now *W* contains only 3*k* variables, the evaluation takes *O**(2^{3k}) time. However, the probability that any given linear monomial survives through a random color coding assignment is very small; it can be calculated to be around *e*^{–3k} where *e* ≃ 2.72. This means one has to try *O*(*e*^{3k}) random assignments in order to have the monomial survive with an acceptable constant probability, independent of *k.* So, the overall running time is *O**((2*e*)^{3k}). This is still a factor of *e*^{3k} away from our *O**(2^{3k}) target.
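A sketch of the color-coding algorithm under these conventions (the instance and function names are ours, for illustration): each element gets one of 3*k* random colors, only “colorful” triples are kept, and the subset dynamic program now runs over color sets rather than over all *N* variables. The *e*^{–3k} survival probability is exactly why so many random colorings are attempted.

```python
import random

def colorful_dp(triple_colors, k):
    """Subset DP over the 3k colors: O*(2^{3k}) per coloring."""
    masks = {0}
    for _ in range(k):
        masks = {m | c for m in masks for c in triple_colors if m & c == 0}
    return bool(masks)

def has_k_matching_cc(triples, k, trials=1000):
    """Color coding: a fixed k-matching survives one random coloring with
    probability about (3k)!/(3k)^{3k} ~ e^{-3k}, hence the many trials."""
    elements = sorted(set().union(*triples))
    for _ in range(trials):
        color = {e: random.randrange(3 * k) for e in elements}
        triple_colors = []
        for tr in triples:
            cols = {color[e] for e in tr}
            if len(cols) == 3:  # keep only "colorful" triples
                triple_colors.append(sum(1 << c for c in cols))
        if colorful_dp(triple_colors, k):
            return True
    return False  # one-sided error: may miss a matching with small probability

triples = [frozenset(t) for t in
           [("x1", "y1", "z1"), ("x2", "y2", "z2"), ("x1", "y2", "z2")]]
print(has_k_matching_cc(triples, 2))  # True with overwhelming probability
```

Disjoint colorful triples are automatically element-disjoint, since a shared element would put the same color in both masks; the only error mode is a real matching failing to become colorful, which the repeated trials control.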

### Algebraic Detection

The methods mentioned previously for detecting a linear monomial are essentially *combinatorial*, but we presented them in algebraic guise. Can we find an even faster algorithm if we use a genuinely algebraic method?

**A matrix assignment.** Earlier, we considered the idea of an assignment to the variables of the encoding polynomial *P* and its subsequent evaluation in the dynamic programming algebra. The dynamic programming algebra and the color-coding assignment are only one possibility. There are numerous possible algebras and assignments. Matrices in particular seem to offer rich possibilities. An attractive feature of matrix algebra is that the square of a matrix can be 0, offering a genuinely algebraic way of “implementing” the dynamic programming algebra, and specifically its rule that squares of variables should be zero. As a simple example:

$$\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}^{2} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}.$$

This leads us to the following list of requirements for a set of random matrices to be used for detecting linear monomials of degree *k*, that is, products of *k* distinct variables.

- To simplify reasoning about monomials, the matrices must be pairwise commutative, that is, the order in which we multiply them should not affect the value of the product.
- The square of each matrix must be equal to zero. Along with commutativity, this implies that nonlinear monomials will evaluate to zero as well.
- Linear monomials of degree *k* must “survive” the assignment, that is, evaluate to a non-zero matrix, with constant probability, say at least 1/4. It would be convenient if this non-zero matrix contains only ones in its diagonal.
- Since we are looking for speed, the matrices must be as small as possible, in order to support fast evaluation. Ideally, matrix operations should take time *O**(2^{k}) or less.
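Before turning to the randomized construction, it may help to see that the first three requirements are satisfiable at all. The following toy construction, ours for illustration and not the construction discussed below, places the 2×2 nilpotent block in a different slot of a Kronecker product for each variable. Its matrices are 2^{k} × 2^{k}, so it fails the size requirement; that gap is precisely what the randomized construction addresses.

```python
def kron(A, B):
    """Kronecker product of two matrices given as lists of lists."""
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][q] * B[q][j] for q in range(m)) for j in range(p)]
            for i in range(n)]

E = [[0, 1], [0, 0]]   # nilpotent 2x2 block: E @ E = 0
I = [[1, 0], [0, 1]]

def variable_matrix(i, k):
    """M_i = I x ... x E (slot i) x ... x I, a 2^k-by-2^k matrix."""
    M = [[1]]
    for j in range(k):
        M = kron(M, E if j == i else I)
    return M

k = 3
M0, M1, M2 = (variable_matrix(i, k) for i in range(k))
zero = [[0] * 2**k for _ in range(2**k)]
assert matmul(M0, M0) == zero                     # squares vanish
assert matmul(M0, M1) == matmul(M1, M0)           # pairwise commuting
prod = matmul(matmul(M0, M1), M2)
assert any(any(row) for row in prod)              # E x E x E is non-zero
print("all checks passed")
```

Here the product of all *k* distinct matrices is the non-zero matrix *E* ⊗ *E* ⊗ … ⊗ *E*, while any repeated factor forces an *E*^{2} = 0 in some slot; commutativity holds because the factors act on different slots.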

In Koutis^{22} an efficient randomized construction of such matrices was given, showing it is possible to satisfy all these requirements if we are willing to relax our notion of zero:

*By “zero,” we now mean zero modulo 2.*

We will refer to this as the *Mod-2 Matrix Fact.*

We will not discuss the technical details of how to prove the Mod-2 Matrix Fact, since all we need here is to understand its algorithmic power and consequences.

Comparing with the color coding method, we see that evaluation of the polynomial takes the same time, *O**(2^{k}) for detecting a linear monomial with *k* variables. The gain is in the probability that a linear monomial survives the assignment; from the exponentially small probability *e*^{–k}, we went to a constant probability of 1/4. This appears to suggest we got rid of the undesired *e*^{k} factor and have reached our target of detecting degree-*k* linear monomials in *O**(2^{k}) time and therefore a size *k* 3D matching in *O**(2^{3k}) time.

**Unwanted cancellations.** However, a more careful look reveals we are not done yet due to our relaxed notion of what is zero. It is clear the polynomial *P*(*X*) evaluates to zero if it does not contain linear monomials. However, the Mod-2 Matrix Fact does not guarantee *P*(*X*) evaluates to non-zero, even if some of its linear monomials evaluate to non-zero. The most significant source of this problem is not in the evaluation, but rather in the polynomial *P* itself: by taking a look at the sum-product expansion in Equation (2.1) we see that the solution part of *P* that consists of the linear monomials is already equal to 0 *mod* 2, just because the monomials are multiplied by 2.

**The method of fingerprints.** We tackle this problem of multiple copies of linear monomials by ensuring each monomial is unique, through the use of a set of “fingerprint” variables *A* = {*a*_{1}, *a*_{2}, …}. We illustrate the idea on the *k*-3D-matching polynomial *P*_{2}, which we will now define as follows:

*P*_{2} = (*a*_{1}*m*_{1} + *a*_{2}*m*_{2} + *a*_{3}*m*_{3} + *a*_{4}*m*_{4}) · (*a*_{5}*m*_{1} + *a*_{6}*m*_{2} + *a*_{7}*m*_{3} + *a*_{8}*m*_{4}),

where *m*_{1}, …, *m*_{4} are the monomials of the four triples in our example.

All we are doing here is multiplying each occurrence of a triple in *P*_{2} with a “fresh” variable from the set *A.* Consider then what happens to the two copies of the monomial *x*_{2}*x*_{3}*y*_{1}*y*_{3}*z*_{1}*z*_{4}; they are now replaced by two separate linear monomials: *a*_{2}*a*_{8}*x*_{2}*x*_{3}*y*_{1}*y*_{3}*z*_{1}*z*_{4} and *a*_{6}*a*_{4}*x*_{2}*x*_{3}*y*_{1}*y*_{3}*z*_{1}*z*_{4}.

It appears the introduction of the auxiliary variables comes at the cost of getting linear monomials of higher degree, which would require larger matrices in order to survive through an evaluation, resulting in a slower algorithm.

However, an alternative idea is to use a different assignment for the *A* variables. For example, we can assign random {0, 1} values to the variables of *A.* In essence we wish to create an odd number of copies for the linear monomials, so that the solution part of the encoding polynomial is not trivially zero modulo 2. In Koutis^{22} it was shown this idea is enough for linear monomial detection to go through *in the 3D-matching case.* That is, by randomly assigning matrices to the variables *X, Y, Z*, and {0, 1} values to the fingerprint variables, the polynomial will evaluate to non-zero with constant probability if its sum-product expansion contains a linear monomial, and always to zero otherwise.
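The role of the fingerprints can be simulated without any matrices, by tracking square-free monomials directly as bitmasks with coefficients mod 2. This sketch is our own illustration, not the construction of Koutis^{22}: each occurrence of a triple in the *j*th copy of *P*_{1} is weighted by a fresh random bit.

```python
import random

def detect_mod2(triples, k, fingerprints=True):
    """Expand (P_1)^k keeping only square-free monomials (bitmasks), with
    coefficients mod 2. Without fingerprints, the k! orderings of every
    solution make each linear monomial appear an even number of times,
    so the whole solution part cancels mod 2."""
    coeff = {0: 1}
    for _ in range(k):
        # fresh random bit per occurrence of a triple in this copy of P_1
        a = {t: (random.randrange(2) if fingerprints else 1) for t in triples}
        new = {}
        for mask, c in coeff.items():
            for t in triples:
                if mask & t == 0 and a[t]:
                    new[mask | t] = new.get(mask | t, 0) ^ c
        coeff = {m: v for m, v in new.items() if v}
    return bool(coeff)

t = lambda *e: sum(1 << x for x in e)
triples = [t(0, 2, 4), t(1, 3, 5)]
print(detect_mod2(triples, 2, fingerprints=False))       # False: the two copies cancel
print(any(detect_mod2(triples, 2) for _ in range(200)))  # True with overwhelming probability
```

Without fingerprints the single 2-matching contributes its monomial twice, once per ordering, and vanishes mod 2; with random bits the coefficient is odd with constant probability per trial, so a modest number of repetitions detects it.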

**Target reached.** We have designed an *O**(2^{3k}) time algorithm for the *k*-3D-matching problem.

**A general algebraic framework.** We have come awfully close to a fast parameterized algorithm for a problem much more general than the *k*-3D-matching problem: that is the detection of degree-*k* linear monomials in the sum-product expansion of arbitrary polynomials *P*(*X*) that are given to us in some concise, “unexpanded” representation. The importance of this problem was established in Koutis^{22,24} where it was observed that a fast algorithm for it implies faster algorithms for several parameterized problems, including the *k*-path problem. So, what do we need to do in order to generalize the method?

All steps described here carry over to arbitrary polynomials, including generating unique linear monomials with an appropriate pre-processing of the polynomial and a placement of fingerprint variables in it. However, we are still stuck with the multiple copies problem because the random {0, 1} assignment to the fingerprint variables is not guaranteed to work for any polynomial. We will thus need to generalize to some other type of assignment that works in arbitrary situations, while still being compact and easy to handle computationally.

Here is the solution proposed in Williams.^{32} Imagine evaluating *P*(*X, A*) in two stages. First we assign random matrices from the Mod-2 Matrix Fact to the *X* variables, leaving the *A* variables unevaluated; the result is a matrix whose diagonal entries are a polynomial *Q*(*A*). The Mod-2 Matrix Fact implies that *Q*(*A*) is zero modulo 2 if *P*(*X*) does not contain a linear monomial. If *P*(*X*) does contain a linear monomial then *Q*(*A*) is non-zero with probability at least 1/4. This consideration crystallizes the problem: we wish to test whether *Q*(*A*) is identical to zero modulo 2, and we would like to do so by evaluating *Q*(*A*) on some “compact” assignment to its variables. This problem has been studied and solved earlier; it is known as *identity testing.*

A solution to identity testing, known as the Schwartz–Zippel Lemma,^{26} is simple and intuitive. The reason we picked the values {0, 1} in our assignment for the 3D-matching polynomial is that with them we can perform multiplication and addition that respect modulo 2 arithmetic. But there are other algebraic “fields” that allow us the same kind of operations, with the bonus that they are larger, in the sense that they contain more values. If instead of {0, 1} we pick a field consisting of *O*(*k*) values, the number of possible assignments to the fingerprint variables by far outnumbers the number of possible roots of *Q*(*A*), that is, the number of assignments that make it evaluate to zero. Hence a random assignment from this larger field will, with high probability, result in *Q*(*A*) evaluating to a non-zero value in the field. This allows us to claim the following result.
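The resulting identity test can be sketched over the field of integers modulo a prime *p* (the black-box polynomials below are hypothetical stand-ins for *Q*(*A*)):

```python
import random

def is_probably_zero(Q, num_vars, degree, p=101, trials=20):
    """Schwartz-Zippel: if Q is not identically zero over Z_p, a uniform
    random point is a root with probability at most degree/p, so 'trials'
    independent evaluations drive the error below (degree/p)^trials."""
    for _ in range(trials):
        point = [random.randrange(p) for _ in range(num_vars)]
        if Q(point) % p != 0:
            return False  # found a witness: Q is certainly non-zero
    return True

# Q1 expands to the zero polynomial; Q2 does not.
Q1 = lambda a: (a[0] + a[1]) ** 2 - a[0] ** 2 - 2 * a[0] * a[1] - a[1] ** 2
Q2 = lambda a: a[0] * a[1] - a[2]
print(is_probably_zero(Q1, 2, 2))  # True
print(is_probably_zero(Q2, 3, 2))  # False with overwhelming probability
```

The one-sided error matches the discussion above: a non-zero answer is a certificate, while a string of zero evaluations is only overwhelming statistical evidence that the polynomial is identically zero.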

**Fast linear monomial detection.** The problem of detecting a degree-*k* square-free monomial in an arbitrary polynomial *P*(*X*) can be solved in *O**(2^{k}) time.

### Breaking Barriers

A faster parameterized algorithm for linear monomial detection would have a tremendous impact, as it would imply faster algorithms not only for many parameterized problems, but also for the corresponding non-parameterized NP-hard problems. Thus, an intriguing and natural question is whether the problem can be solved faster by evaluating the input polynomial over a more exotic algebra supporting faster operations. Unfortunately, the question has been answered in the negative; it is impossible to find a better algebra.^{24}

This algebraic barrier suggests new techniques are required for further progress in the linear detection problem, if progress is possible at all. However, one can be more optimistic about specific problems. Linear monomial detection is very general and completely agnostic to combinatorial properties of the underlying problem, so taking advantage of specific problem properties may sometimes get around the algebraic barrier. On the other hand, attempting to make progress on well-studied NP-complete problems, such as the Hamiltonian path problem or the 3D-matching problem, one faces a perhaps more significant psychological barrier: a lack of progress in nearly 50 years.

In brilliant work, Andreas Björklund broke the psychological barrier with an *O**(2^{N/3}) time algorithm for the “exact” X3D-matching problem^{5} and an astonishing *O**(1.657^{n}) time algorithm for Hamiltonicity.^{4} These running times were later almost matched by parameterized algorithms.^{6} The reader can find in Fomin et al.^{14} an excellent exposition of the algorithm for the Hamiltonian path problem for bipartite graphs.

Central to Björklund’s work are sharper algebraic tools that draw from the large pool of algebraic combinatorics (for example, Royle^{28}) to produce lower degree polynomials. More crucially, Björklund proposed the idea of relaxing the method of fingerprints in order to exploit cancellations of pairs of non-solution monomials. Here, we will see how these ideas were applied in the case of the X3D-matching problem.

**Counting weighted matchings Mod 2.** Consider a restriction of our 3D-matching problem to a 2D-matching problem in which we are given pairs consisting of a captain and a second officer, rather than triples. We will assume the two sets *X* and *Y* are of equal size *n.*

The problem has a graph representation as shown in Figure 4. A solution of *n* edges/pairs that covers all vertices of the graph is called a *perfect matching.*

The perfect matching problem can be solved in polynomial time. But *counting* the number of perfect matchings is known to be a hard problem; it is as hard as counting the number of solutions for any NP-complete problem.^{30} In fact, it is also hard to compute the number of perfect matchings modulo any prime *p*, except when *p* = 2. If *p* = 2 the number of perfect matchings modulo 2 is equal to the determinant of the incidence matrix of the bipartite graph.^{28} This key fact is going to play a significant role, because the matrix determinant can be computed with a polynomial number of arithmetic operations.

To define the incidence matrix *A*, we arbitrarily number the nodes in *X* with numbers from 1 to *n*, and do the same with the nodes in *Y*, as shown in Figure 4. The entry *A*(*i, j*) is 1 if node *i* ∈ *X* is connected to node *j* ∈ *Y*, and 0 otherwise.

Given this definition, if *N*_{2} denotes the number of perfect matchings, we have what we call the *Matching Lemma*:

*N*_{2} ≡ det(*A*) (*mod* 2).  (4.1)
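The Matching Lemma can be checked directly in code: the number of perfect matchings of a bipartite graph is the *permanent* of its matrix *A*, and over GF(2) the permanent coincides with the determinant, since every permutation sign vanishes modulo 2. A small sketch (the example matrix is ours; entries are 0/1):

```python
from itertools import permutations

def det_mod2(A):
    """Determinant over GF(2) via Gaussian elimination; rows as bitmasks."""
    n = len(A)
    rows = [sum((bit & 1) << j for j, bit in enumerate(row)) for row in A]
    for col in range(n):
        pivot = next((i for i in range(col, n) if rows[i] >> col & 1), None)
        if pivot is None:
            return 0
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for i in range(n):
            if i != col and rows[i] >> col & 1:
                rows[i] ^= rows[col]
    return 1

def permanent(A):
    """Brute force: counts the perfect matchings of the bipartite graph."""
    n = len(A)
    return sum(all(A[i][s[i]] for i in range(n)) for s in permutations(range(n)))

A = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]  # a 6-cycle as a bipartite graph
print(permanent(A))   # 2: the cycle has exactly two perfect matchings
print(det_mod2(A))    # 0, which is 2 mod 2
```

Computing the permanent is #P-hard, as the text notes, while the determinant side of the congruence runs in a polynomial number of arithmetic operations.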

We now consider an extension of the above lemma that will be particularly useful. The perfect matching counting problem has a natural generalization to edge-labeled multi-graphs, where (i) we allow multiple “parallel” edges between any two given nodes and (ii) each edge *e* is labeled with a unique monomial *l*_{e}.

The signature of a matching *μ* is defined to be the monomial formed by taking the product of the labels of the edges in the matching *μ.* For example, the signature of the matching in Figure 4 is *l*_{1,4}*l*_{2,1}*l*_{3,2}*l*_{4,3}. More generally, the signature of *μ* is Π_{*e*∈*μ*} *l*_{e}.

Given a labeling of the edges we can extend *N*_{2} to a *solution polynomial N*_{2}(*L*), which is simply the sum of the signatures of all perfect matchings; that is, if *M* is the set of perfect matchings in the graph, then

*N*_{2}(*L*) = Σ_{*μ*∈*M*} Π_{*e*∈*μ*} *l*_{e}.

Finally, we can extend the incidence matrix *A* to a matrix *A*_{L} over the labels in a natural way: if nodes *i* and *j* are connected by one or more edges, we let *A*_{L}(*i, j*) be the *sum* of their labels. We have the following Generalized Matching Lemma:

*N*_{2}(*L*) ≡ det(*A*_{L}) (*mod* 2).  (4.2)

**Monomial detection for X3D-matching.** We now return to the X3D-matching problem, which is the 3D-matching problem augmented with the constraint that |*X*| = |*Y*| = |*Z*| = *n.* In this case we want to decide if there is a solution that consists of exactly *n* = *N*/3 triples, that is, a solution that covers all elements of *X, Y, Z.*

The algorithm involves a transformation of the X3D matching problem into a labeled matching instance. The sets *X* and *Y* define the two parts of the bipartite graph. In order to determine the labels, we use two sets of variables, *Z* and *U.* Each destination corresponds to a distinct variable in *Z* and each triple corresponds to a distinct variable in *U.*

To see now how we form the labeled bipartite graph we give an example that illustrates the edges between two given vertices of the graph, along with their labels. Suppose the X3D-matching instance contains the triples

and no other triples with {*W, K*} as a captain and second officer. Also suppose we have associated with these triples the variables *u*_{1}, *u*_{2}, *u*_{3}, respectively. Then, nodes *W* and *K* will be connected by three edges, labeled respectively with the monomials *u*_{1}*z*_{SJU}, *u*_{2}*z*_{SFO}, and *u*_{3}*z*_{PIT}.

Consider now the polynomial *N*_{2}(*Z, U*) *mod* 2. Because each edge of the bipartite graph corresponds to a triple, any monomial of *N*_{2}(*Z, U*) *mod* 2 corresponds to the selection of *n* triples, because it is the signature of a matching. By the definition of *N*_{2}(*Z, U*) *mod* 2, these *n* triples cover exactly and without overlaps the sets *X* and *Y.* If there is an overlap in destinations, the monomial contains a squared variable, corresponding to that destination. On the other hand, if a set of triples forms a 3D matching, the monomial is linear with respect to the variables in *Z*, because no destination appears twice. So, assigning the random matrices from the Mod-2 Matrix Fact to the variables in *Z*, we zero out the nonlinear monomials. In addition, we observe that the coefficient of each monomial in *N*_{2}(*Z, U*) *mod* 2 is 1, because the *U* variables uniquely identify each set of *n* triples. Thus the *U* variables are our fingerprint variables, and we will assign to them values from an appropriate algebraic field of size *O*(*n*). With these assignments, the outcome of the evaluation will be non-zero with good probability if and only if *N*_{2}(*Z, U*) contains a monomial corresponding to a solution of the instance. Hence we can detect an exact 3D matching if one exists. The running time of the algorithm is *O**(2^{n}) because the degree of *N*_{2}(*Z, U*) in terms of the *Z* variables is *n.*

**Determinants and cycles.** The monomials that are canceled when the determinant is computed modulo 2 as in Equation (4.2) correspond to cycle covers. A cycle is a path plus one edge that closes the loop. A cycle cover is a set of cycles in the graph such that each vertex participates in exactly one cycle. In particular, a Hamiltonian cycle, that is, a cycle containing all the vertices, is a cycle cover. A perfect matching is also a cycle cover albeit consisting only of “degenerate” cycles, that is, edges. Cycle covers that contain at least one non-degenerate cycle come in pairs basically because each such cycle can be traversed in two possible ways: clockwise and counterclockwise. Björklund’s Hamiltonicity algorithm, which actually targets Hamiltonian cycles rather than paths, breaks this symmetry by introducing direction to some edges in the graph. His algorithm still relies crucially on the remaining symmetries and cancellations. It remains open whether Hamiltonicity has a faster than *O**(2^{n}) time algorithm for general directed graphs.

### More Recent Advances

Since its appearance, the general framework of algebraic fingerprints with modulo computations has been used in the design of faster algorithms for several parameterized problems. Examples include: finding subgraphs that are more complicated than paths,^{16,31} finding functional motifs in biological networks,^{8,18,23} finding the shortest cycle through specified nodes in a given graph,^{7} and the repetition-free longest common subsequence problem for strings, with applications in computational biology.^{9} Among the many examples, we highlight remarkable progress in algorithms for NP-complete problems on graphs, parameterized by the so-called *treewidth* of the input graph.

Several NP-complete problems on graphs are tractable if the graph is a tree, that is, if it contains no cycles. The NP-completeness of graph problems is usually proved via the construction of very intricate graphs that are arguably artificial compared to graphs arising in practical situations. In particular, real-world graphs often have a more tree-like structure.

The notion of treewidth offers a way to quantify how much a given graph “deviates” from being a tree. An example is given in Figure 5. The vertices of the graph are arranged in a tree structure. Each tree node lists copies of some of the graph vertices. Each edge of the graph connects two vertices that are listed together at some tree node. In addition, any given graph vertex can appear only in contiguous tree nodes. The treewidth *tw* of a graph is defined as the maximum number of graph vertices hosted in a tree node; in our example *tw* = 3. As a more general example, the natural class of *planar* graphs, that is, graphs that can be drawn on the plane without crossings, has treewidth *O*(√*n*), where *n* is the number of vertices.
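The three conditions in this definition are easy to check mechanically. A sketch that validates a small, made-up tree decomposition of a 4-cycle; the width reported follows the maximum-bag-size convention used above:

```python
def is_tree_decomposition(edges, bags, tree_edges):
    """Check the defining conditions of a tree decomposition.
    bags: list of vertex sets, one per tree node; tree_edges join bag indices."""
    # 1. Every vertex of the graph appears in some bag.
    covered = set().union(*bags)
    if not all(u in covered and v in covered for u, v in edges):
        return False
    # 2. Every edge of the graph is contained in some bag.
    if not all(any({u, v} <= bag for bag in bags) for u, v in edges):
        return False
    # 3. For each vertex, the bags containing it are contiguous in the tree.
    for v in covered:
        nodes = {i for i, bag in enumerate(bags) if v in bag}
        reached, frontier = {min(nodes)}, [min(nodes)]
        while frontier:
            i = frontier.pop()
            for a, b in tree_edges:
                for j in ((b,) if a == i else (a,) if b == i else ()):
                    if j in nodes and j not in reached:
                        reached.add(j)
                        frontier.append(j)
        if reached != nodes:
            return False
    return True

# A 4-cycle a-b-c-d-a admits a decomposition with two bags of size 3.
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
bags = [{"a", "b", "c"}, {"a", "c", "d"}]
tree_edges = [(0, 1)]
assert is_tree_decomposition(edges, bags, tree_edges)
print(max(len(bag) for bag in bags))  # width under the definition above: 3
```

Finding a decomposition of minimum width is itself NP-hard in general, but good decompositions can often be computed for the tree-like graphs that arise in practice.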

Cygan et al.^{11} gave faster algorithms for many graph problems parameterized by treewidth. Previous algorithms had running times of the form *O**(*tw*^{tw}), while the new algorithms have dramatically improved running times of the form *O**(*c*^{tw}) for small constants *c;* for example, the Hamiltonian cycle problem can now be solved in *O**(4^{tw}) time.

### Open Problems

The algorithms in this article are all randomized; on any given run they have a small probability of reporting that an instance does not have a solution, when the opposite is true. But running them *O*(log(1/*p*)) times will make the overall probability of failure smaller than *p*, for any *p* > 0. The fastest known *deterministic* algorithm for the *k*-path problem requires *O**(2.85^{k}) time.^{15} Finding a deterministic algorithm that solves the problem in *O**(2^{k}) time remains an open problem. The way things stand, it would seem to require a deterministic version of the Schwartz–Zippel Lemma: a deterministic polynomial-time algorithm for testing whether a polynomial given as an arithmetic circuit is identically zero.
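The randomized test whose derandomization is sought here is simple to sketch. By the Schwartz–Zippel Lemma, a nonzero polynomial of degree *d* evaluates to zero at a random point of a field of size *P* with probability at most *d*/*P*, so a few random evaluations suffice. An illustration on a toy circuit of our own (not the *k*-path polynomial):

```python
import math
import random

P = (1 << 31) - 1  # a Mersenne prime; we work over the field GF(P)

def circuit(x, y):
    """A toy arithmetic circuit: (x + y)^2 - x^2 - 2xy - y^2, which is
    identically zero -- but suppose we did not know that."""
    return ((x + y) ** 2 - x * x - 2 * x * y - y * y) % P

def is_identically_zero(f, nvars, degree, p=1e-9):
    """Schwartz-Zippel test with amplification: each repetition has
    one-sided error at most degree / P, so repeating drives the overall
    failure probability below p."""
    reps = max(1, math.ceil(math.log(1 / p) / math.log(P / degree)))
    for _ in range(reps):
        point = [random.randrange(P) for _ in range(nvars)]
        if f(*point) != 0:
            return False  # a witness point: the polynomial is surely nonzero
    return True  # zero, except with probability at most p

assert is_identically_zero(circuit, nvars=2, degree=2)
assert not is_identically_zero(lambda x, y: (x * y + 1) % P, nvars=2, degree=2)
```

Note the error is one-sided, exactly as for the algorithms above: a "nonzero" answer is always correct, and only the "zero" answer carries a small probability of error.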

The method of algebraic fingerprints can be adapted to solve the weighted *k*-path problem; here the goal is to find a *k*-path of minimum total cost, assuming a cost is attached to every edge of the graph. The running time is *O**(2^{k}*W*), where *W* is the largest cost associated with an edge in the graph. On the other hand, dynamic programming can handle the problem in *O**(2^{n} log *W*) time when *k* = *n.* It is reasonable to conjecture that *k*-path has an *O**(2^{k} log *W*) algorithm. It should be noted, though, that unlike the algebraic fingerprints method, which can be implemented in memory of size polynomial in *n*, color coding for weighted *k*-path requires *O**(2^{k}) memory, which can be a very limiting factor in practice.

Dynamic programming can also be used to *count* the number of Hamiltonian paths. Counting the number of *k*-paths exactly is probably not FPT,^{13} but color coding can be used to count *k*-paths *approximately*, in *O**((2*e*)^{k}) time.^{1} This stands as the fastest known algorithm for this problem, but again it is reasonable to conjecture there is an *O**(2^{k}) algorithm for approximate counting.

Solving any of these open questions may require fresh ideas that will start a new cycle of discovery.

### Acknowledgments

I. Koutis is supported by NSF CAREER award CCF-1149048. R. Williams is supported by a David Morgenthaler II Faculty Fellowship, NSF CCF-1212372, a Sloan Fellowship, and a Microsoft Research Fellowship. The final phase of this work occurred at the Simons Institute for the Theory of Computing.

### Figures

Figure 2. A 3-path and a 3-clique. Finding a *k*-clique appears to be much more difficult than finding a *k*-path.

Figure 3. A set of triples and a matching. [Source: Wikipedia]

Figure 4. A perfect bipartite matching.

Figure 5. A graph and its treewidth decomposition. [Source: Wikipedia]

Figure. Watch the authors discuss their work in this exclusive *Communications* video. http://cacm.acm.org/videos/algebraic-fingerprints-for-faster-algorithms
