A paper posted online in March 2023 has presented the first substantial progress in a half-century on one of the fundamental questions in the overlap between mathematics and computer science: how to recognize when two “groups”—basic algebraic structures—are really the same group in disguise.
Groups, which have been studied by mathematicians since the late 18th century, are among the most ubiquitous structures in mathematics. A group is a set of elements together with an operation (such as addition, multiplication, or composition) that turns two elements into a new one. To qualify as a group, the set must contain an “identity” element (one that leaves every element unchanged when combined with it by the operation); every element must have an inverse; and the operation (usually written *) must be associative, meaning that (a*b)*c always equals a*(b*c). There are, for example, groups of numbers or matrices or permutations, groups of symmetries of some object, and groups that measure the topological features of a shape.
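For readers who want to poke at these axioms directly, here is a minimal, purely illustrative Python sketch (the function name is invented for this article) that checks them for a small finite set, using the integers modulo 5 under addition as the example:

```python
from itertools import product

def is_group(elements, op, identity):
    """Check the group axioms for a finite set with a binary operation.
    Assumes `op` is closed on `elements`."""
    elements = list(elements)
    # Identity: combining with the identity leaves every element unchanged.
    if any(op(identity, a) != a or op(a, identity) != a for a in elements):
        return False
    # Inverses: every element a has some b with a * b == b * a == identity.
    if any(not any(op(a, b) == identity and op(b, a) == identity for b in elements)
           for a in elements):
        return False
    # Associativity: (a * b) * c == a * (b * c) for all triples.
    return all(op(op(a, b), c) == op(a, op(b, c))
               for a, b, c in product(elements, repeat=3))

# The integers mod 5 under addition form a group with identity 0.
print(is_group(range(5), lambda a, b: (a + b) % 5, 0))  # True
```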
A full classification of groups—even just the finite ones—is probably forever out of reach, said Bettina Eick, a mathematician at the Technical University of Braunschweig in Germany. A more attainable goal is to determine, for a given pair of finite groups, whether they are “isomorphic”—that is, whether there’s a way to match up their elements that respects the behavior of their respective group operations. This “group isomorphism” problem was formulated more than a century ago by the mathematician Max Dehn, who was interested in, among other things, using groups to distinguish between different knots. Group isomorphism is “a very central problem, that’s for sure,” Eick said.
It is always possible, in theory, to test whether two finite groups are isomorphic, but doing so may take an infeasible amount of time when the groups are large. The group isomorphism problem is not known to be in P, the set of problems with algorithms that run in a polynomial amount of time—what theoretical computer scientists think of as the “easy” problems.
From a computer science perspective, the runtime of group isomorphism algorithms connects with a host of other questions. The problem has been proposed as the basis for several cryptosystems that might withstand attacks from a quantum computer. And the intractability of the group isomorphism problem is a barrier to making progress on the closely related graph isomorphism problem, which strives to identify when two graphs (that is, networks) are the same.
Given two finite groups, it is simple to check whether a proposed isomorphism between them really is one. This means the group isomorphism problem belongs to the complexity class known as NP, consisting of problems whose solutions are easily checked. But group isomorphism occupies an unusual position within this class. It is neither known to be in P nor known to be NP-complete—the hardest kind of problem in NP. Instead, group isomorphism is one of just a few natural problems that, as far as we know, seem to sit somewhere in the gulf between these two extremes. “The problem is not just natural, but also pretty unique,” said Xiaorui Sun, a computer scientist at the University of Illinois Chicago.
Now, Sun has come up with a significantly faster algorithm for group isomorphism than the previous best one (though still not fast enough to put the problem in P). At present, his algorithm works only for a certain key subclass of groups, but computer scientists are optimistic they will be able to extend his insights to broader classes of groups.
“The ice has been broken,” said Laszlo Babai, a computer scientist at the University of Chicago. “After 50 years, we finally have something substantial to talk about.”
A Bottleneck
Given two groups, G and H, an isomorphism between them is simply a pairing of their elements (call it f) that respects the algebraic structure of the groups, so that if a * b = c in G, then f(a) * f(b) = f(c) in H. In other words, f transforms the “multiplication” table in G into the multiplication table in H.
One way to determine whether two finite groups are isomorphic is simply to check every possible pairing of their elements. This brute force approach takes an exponential amount of time, making it impractical for all but the smallest groups.
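As a toy illustration of why that is hopeless for large groups (a sketch with invented names, not a serious implementation), a brute-force test simply walks through all n! possible pairings and checks each one against the rule from the definition above, f(a * b) = f(a) * f(b):

```python
from itertools import permutations

def are_isomorphic_brute_force(G, op_G, H, op_H):
    """Try every possible pairing of G's elements with H's elements.
    There are n! pairings, so this is hopeless beyond tiny groups."""
    if len(G) != len(H):
        return False
    for image in permutations(H):
        f = dict(zip(G, image))                      # one candidate pairing
        if all(f[op_G(a, b)] == op_H(f[a], f[b]) for a in G for b in G):
            return True
    return False

# The integers mod 4 under addition vs. the powers of i under multiplication:
print(are_isomorphic_brute_force([0, 1, 2, 3], lambda a, b: (a + b) % 4,
                                 [1, 1j, -1, -1j], lambda a, b: a * b))  # True
```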
In the 1970s, however, computer scientist Robert Tarjan noted that you can speed things up by first constructing a list of “generators” for G—elements which, when multiplied together in different combinations, produce every other element of G. Then, instead of considering every mapping from G to H, you can consider every mapping of these generators into H, and see if any such mapping extends to an isomorphism. There’s always a generator list with at most about log(n) elements (where n is the number of elements of G), giving this algorithm a runtime of roughly n^(log n)—drastically speedier than the exponential algorithm, though modestly slower than a polynomial runtime.
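Here is a simplified sketch of that idea, with invented names and assuming each group is handed to us as an explicit list of elements with a multiplication rule: build a generating set greedily, write every element as a word in the generators, and then try each way of sending those generators into H.

```python
from itertools import permutations, product

def generating_set(G, op, identity):
    """Greedily pick generators; each new one at least doubles the subgroup
    it generates, so a group with n elements needs at most about log2(n)."""
    def closure(gens):
        elems, frontier = {identity}, [identity]
        while frontier:
            nxt = []
            for a in frontier:
                for g in gens:
                    b = op(a, g)
                    if b not in elems:
                        elems.add(b)
                        nxt.append(b)
            frontier = nxt
        return elems

    gens, span = [], {identity}
    for a in G:
        if a not in span:
            gens.append(a)
            span = closure(gens)
    return gens

def isomorphic(G, op_G, id_G, H, op_H, id_H):
    """Map a small generating set of G into H in every possible way and
    check whether the induced map is an isomorphism."""
    if len(G) != len(H):
        return False
    gens = generating_set(G, op_G, id_G)

    # Express every element of G as a word in the generators (breadth-first).
    word, frontier = {id_G: ()}, [id_G]
    while frontier:
        nxt = []
        for a in frontier:
            for i, g in enumerate(gens):
                b = op_G(a, g)
                if b not in word:
                    word[b] = word[a] + (i,)
                    nxt.append(b)
        frontier = nxt

    # At most about n^(number of generators) candidate assignments to check.
    for images in permutations(H, len(gens)):
        f = {}
        for a in G:
            img = id_H
            for i in word[a]:
                img = op_H(img, images[i])
            f[a] = img
        if len(set(f.values())) != len(G):
            continue  # the induced map is not a bijection
        if all(f[op_G(a, b)] == op_H(f[a], f[b]) for a, b in product(G, repeat=2)):
            return True
    return False

# Same example as above: the integers mod 4 vs. the powers of i.
print(isomorphic([0, 1, 2, 3], lambda a, b: (a + b) % 4, 0,
                 [1, 1j, -1, -1j], lambda a, b: a * b, 1))   # True
```

The loop over generator images is the source of the bound: with at most about log(n) generators, there are at most roughly n^(log n) candidate assignments to try.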
Since that observation by Tarjan, computer scientists have been more or less stuck. They have managed to make some small improvements to the n^(log n) runtime, but essentially, “there’s been no progress for about half a century,” Sun said, even though “almost everyone in the field believes that there truly exists a faster algorithm.”
Any time you’re trying to compare two finite groups, there is a way to translate the problem into an analogous problem about comparing two graphs—in other words, the group isomorphism problem reduces to the graph isomorphism problem. That means that graph isomorphism is at least as hard to solve as group isomorphism. And indeed, for most of the history of the two problems, the best known algorithm for graph isomorphism was much slower than the best known algorithm for group isomorphism.
But in 2015, Babai managed to construct an algorithm for graph isomorphism whose runtime is nearly as fast as the n^(log n) runtime for group isomorphism. At this point, there is not that much room left for computer scientists to make further improvements on graph isomorphism unless they can speed up group isomorphism. “Our inability to improve group isomorphism testing remains a bottleneck for graph isomorphism,” Babai said.
The Hardest Cases
Sun’s paper has offered computer scientists a potential way through this bottleneck. He has come up with a group isomorphism algorithm whose runtime is roughly n^((log n)^(5/6)), a significant speedup over n^(log n). His approach applies not to all finite groups, but to a core collection of groups known as “p-groups of class 2 and exponent p.” Loosely speaking, these requirements mean (among other things) that the number of elements in the group is a prime number p raised to some power, and the group operation almost (but not quite) follows the commutative law, which says that a * b = b * a.
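The smallest examples are easy to write down. Here is an illustrative Python sketch (not tied to Sun's paper) of the "Heisenberg" group modulo an odd prime p—the 3-by-3 upper triangular matrices with 1s on the diagonal and entries taken mod p—which has p^3 elements, every element of order p, and commutators that all lie in the center (the meaning of "class 2"):

```python
from itertools import product

p = 5  # an odd prime

def mul(x, y):
    """Multiply two elements, written as triples (a, b, c) standing for
    the matrix [[1, a, c], [0, 1, b], [0, 0, 1]] with entries mod p."""
    a1, b1, c1 = x
    a2, b2, c2 = y
    return ((a1 + a2) % p, (b1 + b2) % p, (c1 + c2 + a1 * b2) % p)

def inv(x):
    a, b, c = x
    return ((-a) % p, (-b) % p, (a * b - c) % p)

def power(x, k):
    y = (0, 0, 0)
    for _ in range(k):
        y = mul(y, x)
    return y

G = list(product(range(p), repeat=3))          # all p**3 elements

# Exponent p: every element raised to the p-th power is the identity.
print(all(power(x, p) == (0, 0, 0) for x in G))              # True

# Not commutative...
print(any(mul(x, y) != mul(y, x) for x in G for y in G))     # True

# ...but class 2: every commutator x*y*x^-1*y^-1 lands in the center,
# which here consists of the triples (0, 0, c).
def comm(x, y):
    return mul(mul(x, y), mul(inv(x), inv(y)))

print(all(comm(x, y)[:2] == (0, 0) for x in G for y in G))   # True
print(all(mul((0, 0, c), z) == mul(z, (0, 0, c))
          for c in range(p) for z in G))                     # True
```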
These might seem like esoteric restrictions, but p-groups of class 2 and exponent p play a central role in understanding the structure of finite groups as a whole. For one thing, these groups sit at the bottom of a natural hierarchy of all finite groups, offering the hope that methods developed for this case might be extendable to higher levels of the hierarchy.
Also, p-groups of class 2 and exponent p have historically stymied attempts at a fast group isomorphism algorithm. For groups where the operation is precisely commutative, the isomorphism problem is easy. And for groups that are far from commutative, it’s often possible to latch onto their complex structure to solve the isomorphism problem. But p-groups of class 2 and exponent p sit in an awkward trough between these two possibilities, lacking either the simplicity or the complexity that makes those other groups tractable.
“Many people believe these are the hardest cases of group isomorphism,” said Joshua Grochow, a computer scientist at the University of Colorado Boulder.
To deal with these tricky groups, Sun starts by invoking a correspondence discovered in the 1930s between groups of this type and certain spaces of matrices. That means constructing an isomorphism between two such groups boils down to constructing a similar correspondence between their associated matrix spaces. Previously, the fastest known method for comparing these matrix spaces was just slightly faster than examining all the possible maps between them; Sun figured out how to do better.
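The flavor of that correspondence can be seen in the Heisenberg group from the earlier sketch (again, this is an illustration, not Sun's construction): modulo the center, an element is just a pair (a, b), and the commutator of two elements is the value of a skew-symmetric bilinear form on those pairs—the form recorded by the matrix [[0, 1], [-1, 0]]. For larger groups of this type one gets a whole space of such matrices, and those spaces are what get compared.

```python
from itertools import product

p = 5  # as in the earlier sketch

def mul(x, y):
    a1, b1, c1 = x
    a2, b2, c2 = y
    return ((a1 + a2) % p, (b1 + b2) % p, (c1 + c2 + a1 * b2) % p)

def inv(x):
    a, b, c = x
    return ((-a) % p, (-b) % p, (a * b - c) % p)

def comm(x, y):
    """The commutator x * y * x^-1 * y^-1."""
    return mul(mul(x, y), mul(inv(x), inv(y)))

# Modulo the center, an element is just the pair (a, b).  Every commutator
# is central, and its central coordinate is the skew-symmetric bilinear
# form a1*b2 - a2*b1, i.e. the form given by J = [[0, 1], [-1, 0]].
print(all(
    comm((a1, b1, 0), (a2, b2, 0)) == (0, 0, (a1 * b2 - a2 * b1) % p)
    for a1, b1, a2, b2 in product(range(p), repeat=4)
))  # True
```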
His method begins by using a preexisting technique called “individualization and refinement” to create for each matrix a smaller matrix that serves as a sort of label for it. Others had tried this type of technique before, but they all ran into the same stumbling block: if you make the “labels” small enough to be able to compare them efficiently, then there aren’t enough of them to give each large matrix a different label. To deal with this difficulty, Sun had to figure out how to handle “low rank” matrices in the matrix spaces—matrices that, thought of as linear transformations, compress high-dimensional spaces into much lower-dimensional spaces. One of his key insights was how to make the low-rank portion of these matrix spaces more organized.
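To get a feel for what "low rank" means here—a toy numpy example unrelated to the specifics of Sun's construction—a rank-1 matrix flattens all of 3-dimensional space onto a single line:

```python
import numpy as np

# A rank-1 matrix sends every input vector to a multiple of (1, 2, 3).
M = np.outer([1, 2, 3], [1, 0, -1])
print(np.linalg.matrix_rank(M))   # 1
print(M @ [4, 5, 6])              # [-2, -4, -6], a multiple of (1, 2, 3)
print(M @ [7, -1, 0])             # [ 7, 14, 21], another multiple
```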
“This method feels really new,” Grochow said.
Sun’s result is a purely theoretical advance—it gives speed guarantees as the size of the group approaches infinity but may not beat out existing algorithms in the size regimes that arise for mathematicians using software packages to compare groups. Nevertheless, algorithms with new structural insights, such as Sun’s, often seem to perform better in practice than their guarantees give them a right to, Babai said. If Sun’s algorithm does work well in practical settings, Eick said, it should be possible to embed it in other practical algorithms to cover a much broader category of groups than the one Sun studied.
Computer scientists are hastening to extend Sun’s theoretical result. Already, Grochow and Youming Qiao, of the University of Technology Sydney, have figured out how to loosen the “class 2” restriction, and researchers hope to go farther, gradually extending Sun’s result up much of the hierarchy of groups.
“The new energy Sun’s work has unleashed is… palpable,” Babai said.