
Theory Research in India: 2019–2022


The deep connections between logic and automata theory have led to extensive applications in formal specification and verification of systems. In recent years, research has focused on extending such techniques to infinite-state systems. This has led to the development of theories of transformations, and to applications in verification of timed/recursive/concurrent/probabilistic systems, database theory, distributed algorithms, programs running under weak memory models, and cryptographic protocols. Logic has also been the ground for foundational research, revisiting classical model theory from computational perspectives. This has led to results on Löwenheim–Skolem theorems in the finite, synthesis of Boolean functions and Skolem functions, definability in first-order theories of graphs, algebraic characterizations, block products of algebraic structures, and decidable fragments of first-order modal logics.

The last few years have been a productive period for research in the field of computational complexity in India. The topics in which significant research has been done include algebraic complexity theory, communication complexity, codes and expanders arising out of connections to probabilistically checkable proofs, and the dynamic complexity of reachability. Notably, in July 2021, a breakthrough was achieved in the field of algebraic complexity: the first superpolynomial lower bounds against constant-depth arithmetic circuits over fields of characteristic zero or large characteristic. This lower-bound result also yields the first deterministic sub-exponential time polynomial identity testing algorithm for constant-depth arithmetic circuits, via an unconditional construction of pseudorandom generators based on the lower bound for an explicit polynomial.

The algorithmic paradigms at the forefront of exciting new research in the field of algorithms in India include, but are not limited to, approximation algorithms, randomized algorithms, computational geometry, online algorithms, dynamic algorithms, parameterized algorithms, algorithmic game theory, and the theoretical foundations of machine learning and big data. Some of the exciting results include a constant-factor approximation algorithm for finding a maximum independent set of rectangles, improved bounds for perfect sampling of k-colorings in graphs, an optimal FPT-approximation scheme for k-way cut, and a new online algorithm for caching.

The leading institutes where such research has been conducted include Chennai Mathematical Institute, Indian Institute of Science, Indian Institute of Technology Bombay, Indian Institute of Technology Goa, Indian Institute of Technology Kanpur, Indian Institute of Technology Delhi, and the Institute of Mathematical Sciences. Here, we explore some of the results of this research.


Automata Theory and Verification Research

Automata theory and formal languages. In the area of formal languages and automata theory, Indian researchers have contributed toward finding alternate characterizations of interesting language classes by means of logic, expressions, and algebra. We highlight a seminal result that presents regular expressions for transducers.

Transducer expressions. A finite state transducer is an extension of a finite state automaton with outputs; it computes transformations of words rather than merely accepting them. Regular functional word transductions have several machine characterizations, including streaming string transducers and two-way transducers, as well as logical characterizations. In 2018, Vrunda Dave, Paul Gastin, and S. Krishna (IIT Bombay) proposed regular transducer expressions (RTE) to capture regular transductions over finite and infinite strings.8 At the level of transformations, this is the analogue of the classical Kleene theorem in automata theory, which says that regular expressions capture regular languages. Unlike for finite state automata, the ability to move in both directions (two-wayness) increases the expressiveness and complexity of transducers, making this equivalence with respective expressions non-trivial.

While the machine model is two-way, the corresponding expression must be computed in a one-way manner. This is achieved by viewing the problem algebraically and defining a transition monoid of the two-way transducer. The transition monoid compactly summarizes the runs of the transducer in all directions, between any pair of states, and helps to unambiguously “factorize,” in a one-way manner, the runs of the two-way transducer. The building blocks of the RTE are simple functions describing outputs of individual transitions, and they are merged using suitable combinators that unite the factorized blocks of a run. The algebraic approach seamlessly provides the equivalence between RTE and deterministic two-way transducers, both over finite and infinite words.
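
To make the machine model concrete, the following minimal Python sketch (our own illustration, not code from the cited papers) simulates a deterministic two-way transducer. The example transducer computes string reversal, a standard regular transduction that no one-way transducer can realize; the encoding of the transition function is a hypothetical choice made for this sketch.

    def run_two_way_transducer(delta, init, final, w):
        tape = ['^'] + list(w) + ['$']        # left and right endmarkers
        state, pos, out = init, 0, []
        while state != final:
            state, output, move = delta(state, tape[pos])
            out.append(output)
            pos += move                       # +1 moves right, -1 moves left
        return ''.join(out)

    def delta_reverse(state, sym):
        """Scan right to the end of the word, then walk left emitting each letter."""
        if state == 'right':
            return ('left', '', -1) if sym == '$' else ('right', '', +1)
        if state == 'left':
            return ('halt', '', +1) if sym == '^' else ('left', sym, -1)

    print(run_two_way_transducer(delta_reverse, 'right', 'halt', 'abca'))  # prints 'acba'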

In formal language theory, aperiodic languages are a proper subclass of regular languages, and a seminal result of Schützenberger states that star-free regular expressions recognize precisely the aperiodic languages. Unlike regular expressions, star-free expressions allow negation; in the setting of functions, the corresponding analogue is unclear. In 2021, Krishna, Gastin, and Luc Dartois showed the analogue of Schützenberger’s result for transformations.9 Corresponding to aperiodic two-way transducers, they propose transducer expressions in which the use of the Kleene star is restricted: it is allowed only on expressions that are aperiodic and have bounded synchronization delay. Recently, they have proposed an efficient algorithm for translating between transducers and expressions;7 it is the first known translation with elementary complexity.

Formal verification. In the area of formal verification, there have been contributions from India in terms of formal models and verification algorithms for different settings. Formal models have been introduced for some classes of database-driven systems, distributed algorithms, and memory models, among others. New verification algorithms have been proposed for classes of {Timed, Probabilistic, Multi-pushdown, Message-passing} models of automata. We elaborate here a bit on one of our favorites:

A well-structured model for persistent Intel x86. An impressive result by Abdulla et al. is a formal model for the Persistent Intel x86 architecture. Though not the first formal model for Persistent Intel x86, this one has the property of being well structured. While the model gives rise to an infinite state space, well-structuredness provides a well-quasi-ordering on the states, which guarantees that there are no infinite strictly descending chains and no infinite antichains. This crucial property can be exploited to obtain terminating verification algorithms. Thanks to this, the authors show that verifying safety properties is decidable. K. Narayan Kumar (CMI, Chennai) and Prakash Saivasan (IMSc, Chennai) are co-authors of this work.1
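
The following Python sketch illustrates, in miniature, the generic backward-reachability scheme that well-structuredness enables; it is our own illustration on a toy vector addition system (two counters), not the persistency model or algorithm of the cited paper. States are tuples of naturals ordered componentwise, upward-closed sets are represented by their finitely many minimal elements, and termination is guaranteed by the well-quasi-ordering.

    from itertools import product

    def leq(x, y):
        return all(a <= b for a, b in zip(x, y))

    def minimize(states):
        """Keep only the minimal elements of a set of states."""
        states = set(states)
        return [s for s in states if not any(leq(t, s) and t != s for t in states)]

    def pre_basis(basis, transitions):
        """Minimal states from which one transition reaches the upward closure of `basis`."""
        new = [tuple(max(b - t, -t, 0) for b, t in zip(bad, trans))
               for bad, trans in product(basis, transitions)]
        return minimize(list(basis) + new)

    def coverable(init, bad_basis, transitions):
        """Backward reachability: can a state above some element of `bad_basis` be reached?"""
        basis = minimize(bad_basis)
        while True:
            nxt = pre_basis(basis, transitions)
            if any(leq(b, init) for b in nxt):
                return True
            if sorted(nxt) == sorted(basis):    # fixed point: no new predecessors
                return False
            basis = nxt

    # Toy vector addition system with transitions (+1, -1) and (-1, +2).
    print(coverable((1, 0), [(0, 3)], [(1, -1), (-1, 2)]))  # True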


Computational Complexity Research

The research themes include algebraic complexity theory, communication complexity, codes, expander graphs and their connections to probabilistically checkable proofs, and the dynamic complexity of reachability. We highlight two excellent recent results in the field of algebraic complexity:

  • In July 2021, Nutan Limaye (IIT Bombay), Srikanth Srinivasan (IIT Bombay), and Sébastien Tavenas (Savoie University) obtained a breakthrough in algebraic complexity theory by showing the first superpolynomial lower bounds against constant-depth arithmetic circuits over fields of characteristic zero (or of large characteristic). The paper was subsequently presented at the 2021 IEEE Symposium on Foundations of Computer Science (held in early 2022), where it won the best paper award.26
  • About six months prior to that, Arkadev Chattopadhyay (TIFR, Mumbai), Rajit Datta (CMI), and Partha Mukhopadhyay (CMI) showed, at the 2021 Symposium on Theory of Computing, an exponential lower bound on the monotone arithmetic circuit complexity of a polynomial that has depth-3 arithmetic circuits of polynomial size.6

Arithmetic circuits and lower bounds. An arithmetic circuit is a directed acyclic graph (DAG) whose nodes are gates. Each in-degree-zero node is an input gate, labeled by a variable xi or a scalar. Each internal gate has in-degree 2 and is labeled + or × (a sum or product gate). Clearly, each gate computes a polynomial, and the output gate produces the polynomial computed by the circuit (for example, see the accompanying figure).

Figure. An example of an arithmetic circuit.

The circuit size is the number of gates. Its depth is the length of the longest input-to-output path. The circuit in the figure has size 6 and depth 2. In 1979, Valiant defined the algebraic analogues of P and NP, denoted VP and VNP, where VP stands for families of polynomials of polynomial degree that have polynomial-size arithmetic circuits, and VNP contains easily described but potentially hard polynomials. He also raised the VP vs. VNP problem and suggested the two classes could be easier to separate than P and NP. However, progress has been slow: for general arithmetic circuits computing explicit polynomials, only superlinear lower bounds are known (Baur and Strassen, 1983). By contrast, in the Boolean setting, superpolynomial lower bounds for constant-depth circuits were already shown in the 1980s (Furst, Saxe, and Sipser in 1983, followed by Håstad in 1986 with optimal lower bounds for the parity function), while the analogous question for constant-depth arithmetic circuits remained open.
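
As a concrete (and entirely illustrative) rendering of these definitions, the following Python sketch evaluates an arithmetic circuit given as a DAG; the gate encoding is our own and is not tied to any of the cited papers. The example circuit computes (x1 + x2) × (x2 + 3) and has 6 gates and depth 2 (not necessarily the circuit shown in the figure).

    def evaluate_circuit(gates, assignment):
        """Each gate is (op, data): op in {'var', 'const', '+', '*'};
        data is a variable name, a scalar, or a list of earlier gate indices."""
        values = []
        for op, data in gates:
            if op == 'var':
                values.append(assignment[data])
            elif op == 'const':
                values.append(data)
            elif op == '+':
                values.append(sum(values[i] for i in data))
            else:                                  # product gate
                prod = 1
                for i in data:
                    prod *= values[i]
                values.append(prod)
        return values[-1]                          # the output gate

    gates = [('var', 'x1'), ('var', 'x2'), ('const', 3),
             ('+', [0, 1]), ('+', [1, 2]), ('*', [3, 4])]
    print(evaluate_circuit(gates, {'x1': 2, 'x2': 1}))   # (2+1)*(1+3) = 12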

Lower bounds for constant-depth arithmetic circuits. We outline Limaye et al.’s result. Consider the iterated matrix product X1 X2 ··· Xd, where each Xi is an n × n matrix of distinct variables x_{i,j,k}, 1 ≤ j, k ≤ n. The (1,1) entry of this matrix product is the explicit polynomial P_{n,d}, which is homogeneous of degree d. They show that for d = o(log n), any depth-c arithmetic circuit computing P_{n,d} has size at least n^{d^ε} for a constant ε > 0 that depends on c. A surprising aspect is that the proof is based on the so-called partial derivative method, due to Nisan and Wigderson in 1995, which underlies many arithmetic circuit lower bounds but had in recent years been believed to be inadequate for arithmetic circuits of depth more than 3. It is known that P_{n,d} has a “large”-dimensional partial derivative space. On the other hand, exploiting the circuit structure, Limaye et al. show that the partial derivative space of any polynomial computed by a size-s circuit has “small” dimension (as a function of s). Combining the two yields the lower bound on s.
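
To see what P_{n,d} looks like for small parameters, here is a short sketch using sympy (assuming that library is available); it is purely illustrative and plays no role in the lower-bound proof. Note that every monomial of the (1,1) entry contains exactly one variable from each matrix, which is the set-multilinearity discussed next.

    import sympy as sp

    def iterated_matrix_product_entry(n, d):
        """The (1,1) entry of X1 X2 ... Xd, i.e., the polynomial P_{n,d}."""
        mats = [sp.Matrix(n, n, lambda j, k, i=i: sp.Symbol(f'x_{i}_{j}_{k}'))
                for i in range(1, d + 1)]
        prod = mats[0]
        for M in mats[1:]:
            prod = prod * M
        return sp.expand(prod[0, 0])

    print(iterated_matrix_product_entry(2, 3))  # a set-multilinear polynomial of degree 3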

The circuit structure they exploit is set-multilinearity. The polynomial P_{n,d} is homogeneous of degree d, and it is set-multilinear: each nonzero monomial has exactly one variable from each Xi. A set-multilinear circuit for P_{n,d} is one in which each gate computes some set-multilinear polynomial (in a subset of the variable sets X1, X2, …, Xd). Their lower bound result has two components:

  • Any depth-c, size-s arithmetic circuit computing a degree-d set-multilinear polynomial P can be transformed into a depth-2c, size poly(s) · d^{O(d)} set-multilinear circuit for P.
  • Depth-c set-multilinear circuits for P_{n,d}, for d = O(log n), require size n^{d^ε} for some constant ε > 0 depending on c.

Lower bounds for monotone arithmetic circuits. The result of Chattopadhyay et al. concerns monotone polynomials and monotone arithmetic circuits. A multivariate polynomial with rational (or real) coefficients is monotone if all its nonzero coefficients are positive. A monotone arithmetic circuit is allowed to use only positive numbers as scalars; thus, monotone circuits compute monotone polynomials. The power of “non-monotone” computation in arithmetic circuits was first studied in 1980 by Valiant (who else?), who showed an explicit monotone n-variate polynomial that requires exponential-size monotone circuits but can be computed by polynomial-size circuits if subtractions are allowed. Chattopadhyay et al. show an explicit n-variate monotone polynomial that has depth-three, polynomial-size arithmetic formulas but requires exponential-size monotone arithmetic circuits. An added dimension to their paper is the use of communication complexity techniques to prove the lower bound.


Algorithms Research

The last few years have seen some exciting new research in the field of algorithms in India. As noted earlier, the algorithmic paradigms at the forefront of such research include approximation algorithms, randomized algorithms, computational geometry, online algorithms, dynamic algorithms, parameterized algorithms, algorithmic game theory, and the theoretical foundations of machine learning and big data, among others. We highlight a few excellent recent results in the field of algorithms.

A k-coloring of a graph G = (V, E) is an assignment of colors from the set [k] = {1, 2, …, k} to the vertices so that adjacent vertices are assigned different colors. That is, it is a function f: V → [k] such that for any edge uv ∈ E, we have f(u) ≠ f(v). Bhandari and Chakraborty4 considered the problem of randomly sampling colorings of a given graph. The input is a graph G and an integer k; the goal is to generate a k-coloring of G uniformly at random from the set of all k-colorings of G. That is, if Ω denotes the set of all functions f: V → [k] such that f(u) ≠ f(v) for every edge uv ∈ E, then the objective is to output a function g ∈ Ω with probability 1/|Ω|. The problem of sampling k-colorings has several implications in theoretical computer science and statistical mechanics. For example, in statistical mechanics, sampling proper colorings is central to simulation-based studies of phase transitions and correlation decay.

The problem is computationally tractable if we are allowed significantly more colors than the maximum degree Δ of the graph. The more colors we are allowed, the easier it appears to be to produce a random k-coloring. Indeed, if k is much smaller than Δ, it is NP-hard to even determine whether a valid k-coloring exists. Sampling algorithms, therefore, typically require a lower bound on k in terms of Δ to guarantee efficiency. There has been a steady stream of works that have progressively reduced this lower bound on k. Bhandari and Chakraborty4 presented a randomized algorithm that takes as input an undirected n-vertex graph G with maximum degree Δ and an integer k > 3Δ and returns a random proper k-coloring of G. The distribution of the coloring is perfectly uniform over the set of all proper k-colorings, and the expected running time of the algorithm is poly(k, n) = Õ(nΔ^2 · log k). This improved upon a result of Huber (STOC 1998), who obtained a polynomial-time perfect sampling algorithm for k > Δ^2 + 2Δ. Prior to this work, no algorithm with expected running time poly(k, n) was known to guarantee perfect sampling with a sub-quadratic (in Δ) number of colors in general. This paper appeared at STOC 2020, where Bhandari and Chakraborty received the Danny Lewin Best Student Paper Award.
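
For contrast, the simplest perfect sampler one can write is rejection sampling: propose a uniformly random assignment of colors and accept only if it is proper. The sketch below (our illustration, not the algorithm of Bhandari and Chakraborty) outputs an exactly uniform proper k-coloring, but its expected running time can be exponential when proper colorings are rare; the point of the STOC 2020 result is to achieve exact uniformity in expected polynomial time once k exceeds 3Δ.

    import random

    def is_proper(coloring, edges):
        return all(coloring[u] != coloring[v] for u, v in edges)

    def rejection_sample_coloring(n, edges, k):
        """Exactly uniform over proper k-colorings (assumes at least one exists)."""
        while True:
            coloring = [random.randrange(k) for _ in range(n)]   # uniform proposal
            if is_proper(coloring, edges):
                return coloring                                  # accept

    # A 4-cycle on vertices 0..3, sampled with k = 3 colors.
    print(rejection_sample_coloring(4, [(0, 1), (1, 2), (2, 3), (3, 0)], 3))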


Here, we briefly note two other results of great importance. In the Maximum Independent Set of Rectangles (MISR) problem, we are given a set of axis-parallel rectangles in the plane, and the goal is to select a subset of non-overlapping rectangles of maximum cardinality. The problem was known to admit an approximation scheme running in quasi-polynomial time; however, obtaining even a constant-factor approximation in polynomial time was a well-known open problem for decades. In a recent breakthrough, Mitchell32 obtained the first constant-factor approximation algorithm for MISR; his algorithm achieves an approximation ratio of 10. In a recent paper, Khan,11 together with his colleagues, designed a (2 + ε)-approximation algorithm for MISR based on a recursive partitioning scheme.
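
To fix the problem statement, here is a tiny brute-force baseline (our own illustration; it enumerates subsets and is exponential in the number of rectangles, which is exactly why the polynomial-time constant-factor approximations above are remarkable).

    from itertools import combinations

    def overlap(a, b):
        """Axis-parallel rectangles given as (x1, y1, x2, y2); touching edges do not count."""
        ax1, ay1, ax2, ay2 = a
        bx1, by1, bx2, by2 = b
        return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

    def max_independent_rectangles(rects):
        for size in range(len(rects), 0, -1):
            for subset in combinations(rects, size):
                if all(not overlap(a, b) for a, b in combinations(subset, 2)):
                    return size
        return 0

    rects = [(0, 0, 2, 2), (1, 1, 3, 3), (2, 0, 4, 2)]
    print(max_independent_rectangles(rects))  # 2: the first and third rectangles are disjoint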

Kumar,21 together with his colleagues, considered two generalizations of the classical weighted paging problem that incorporate the notion of delayed service of page requests. The first is the (weighted) Paging with Time Windows (PAGETW) problem, which is like the classical weighted paging problem except that each page request only needs to be served before a given deadline. This problem arises in many practical applications of online caching, such as the “deadline” I/O scheduler in the Linux kernel and video-on-demand streaming. The second, and more general, problem is the (weighted) Paging with Delay (PAGED) problem, where any delay in serving a page request results in a penalty being assessed against the objective. This generalizes the caching problem to allow delayed service, a direction that has recently gained traction in online algorithms. The authors give O(log k log n)-competitive algorithms for both the PAGETW and PAGED problems on n pages with a cache of size k. This significantly improves on the previous best bounds of O(k) for both problems. This paper appeared at STOC 2020.

Worst-case running time analysis has been at the center of nearly all developments in theoretical computer science since the inception of the field. Nevertheless, this approach to measuring algorithm efficiency has its own limitations. The input size is almost never the only aspect of the input instance that affects the running time of an algorithm. Further, it is rarely the case that the input instances we want to solve look like the instances on which the algorithm performs the worst. In particular, real-world instances are not worst-case instances; they exhibit additional structure that can often be exploited algorithmically. Thus, there is a real need for a mathematical framework that allows us to express the running time of algorithms in terms of both input size and the structural properties of the input instances. A particularly successful attempt at creating such a mathematical model is the field of Parameterized Complexity. This area has bloomed greatly in India over the last two decades.


The goal of Parameterized Complexity is to find ways of solving NP-hard problems more efficiently than brute force; the aim is to restrict the combinatorial explosion to a parameter that is hopefully much smaller than the input size. Formally, a parameterization of a problem assigns an integer k to each input instance, and we say that a parameterized problem is fixed-parameter tractable (FPT) if there is an algorithm that solves the problem in time f(k) · |I|^{O(1)}, where |I| is the size of the input and f is an arbitrary computable function depending only on the parameter k. There is also a theory of hardness that allows us to show that certain parameterized problems are not amenable to this approach. We highlight a result about Graph Isomorphism in this area that includes authors from Indian institutes.
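
A textbook illustration of fixed-parameter tractability (our own example, not taken from the works cited here) is the bounded-search-tree algorithm for Vertex Cover: branch on an uncovered edge, putting one of its two endpoints into the cover. The search tree has at most 2^k leaves, giving running time 2^k · poly(n), that is, f(k) · |I|^{O(1)} with f(k) = 2^k.

    def has_vertex_cover(edges, k):
        """Decide whether the graph given by `edges` has a vertex cover of size at most k."""
        uncovered = next(iter(edges), None)
        if uncovered is None:
            return True                    # every edge is covered
        if k == 0:
            return False                   # an edge remains, but the budget is exhausted
        u, v = uncovered
        # one of u, v must be in any cover of this edge: branch on both choices
        return (has_vertex_cover([e for e in edges if u not in e], k - 1) or
                has_vertex_cover([e for e in edges if v not in e], k - 1))

    triangle = [(0, 1), (1, 2), (0, 2)]
    print(has_vertex_cover(triangle, 1))   # False: a triangle needs two vertices
    print(has_vertex_cover(triangle, 2))   # True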

In graph theory, an isomorphism of graphs G and H is a bijection f: V(G) → V(H) between the vertex sets of G and H such that any two vertices u and v of G are adjacent in G if and only if f(u) and f(v) are adjacent in H. In the Graph Isomorphism problem, given two graphs G and H, the objective is to test whether G and H are isomorphic. The Graph Isomorphism problem is arguably the most widely known problem whose membership in P is unknown, but which is not believed to be NP-hard. After decades of research, a quasi-polynomial time algorithm was proposed by Babai in 2015.2
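
The definition translates directly into a brute-force test that tries all n! bijections; the following sketch (our illustration only, for simple undirected graphs without self-loops) is meant to make the problem concrete and is astronomically slower than the algorithms discussed below.

    from itertools import permutations

    def are_isomorphic(n, edges_G, edges_H):
        EG = {frozenset(e) for e in edges_G}
        EH = {frozenset(e) for e in edges_H}
        for f in permutations(range(n)):                      # candidate bijection f
            if {frozenset((f[u], f[v])) for u, v in EG} == EH:
                return True
        return False

    C4 = [(0, 1), (1, 2), (2, 3), (3, 0)]                          # a 4-cycle
    print(are_isomorphic(4, C4, [(1, 2), (2, 0), (0, 3), (3, 1)]))  # True: another 4-cycle
    print(are_isomorphic(4, C4, [(0, 1), (1, 2), (2, 3)]))          # False: a path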

While the existence of a polynomial-time algorithm on general graphs is still elusive, the complexity of Graph Isomorphism has been well understood on several classes of graphs, where structural properties of graphs in question have been used to design polynomial-time procedures solving the problem. Classic results in this area include an n^{O(d)}-time algorithm on graphs of maximum degree d,3,30 a polynomial-time algorithm for planar graphs,22,23,24,36 an n^{O(g)}-time algorithm on graphs of Euler genus g,10,31 an O(n^{k+4.5})-time algorithm for graphs of treewidth k,5 an n^{O(k)}-time algorithm for graphs of rankwidth k,16,19 an n^{f(|H|)}-time algorithm for graphs excluding a fixed graph H as a minor,35 and an n^{f(|H|)}-time algorithm for graphs excluding a fixed graph H as a topological minor14 (where f is some computable function).

In all the results mentioned here, the degree of the polynomial bound on the running time depends on the parameter (maximum degree, genus, treewidth, rankwidth, or the size of the excluded (topological) minor) in at least a linear fashion. Since the parameter can be as high as linear in the size of the graph, for large values of the parameter the running time bound of the quasi-polynomial-time algorithm of Babai,2 which works on general graphs, is preferable. During the last few years, there have been several successful attempts at bridging this gap by using the group-theoretic approach of Babai in conjunction with structural insights about the graph classes in question. This has led to algorithms with running time of the form n^{polylog(p)}, where p is any of the following parameters: maximum degree,17 Euler genus,33 treewidth,37 and the size of a fixed graph H excluded as a minor.20 We refer to a recent survey15 for an excellent exposition.

A parallel line of research is to turn the aforementioned algorithms into fixed-parameter algorithms for the parameters in question. That is, instead of a running time bound of the form n^{f(p)} for a computable function f and a parameter p, we would like to have an algorithm with running time bound f(p) · n^c for a universal constant c. In other words, the degree of the polynomial governing the running time bound should be independent of the parameter; only the leading multiplicative factor may depend on it. In this line of research, Lokshtanov et al.28 developed an FPT algorithm for Graph Isomorphism parameterized by treewidth. This result has been subsequently improved and simplified,18 as well as used to give a slice-wise logspace algorithm. In 2015, Kawarabayashi25 announced an FPT algorithm for Graph Isomorphism parameterized by the Euler genus of the input graph, with a linear dependence of the running time on the input size. Very recently, Neuen34 proposed a different and simpler algorithm for this case, which runs in time 2^{O(g^4 log g)} · n^c for some constant c. The recent survey15 mentions obtaining FPT algorithms with parameterizations by the size of an excluded minor, maximum degree, and the size of an excluded topological minor as important open problems. (Note that the last parameter, the size of an excluded topological minor, generalizes the other two.)

Lokshtanov et al.29 essentially solve the first of these open problems: they show that Graph Isomorphism in graphs excluding a fixed graph H as a minor can be solved by an algorithm running in time f(H) · n^{O(1)}, where f is some computable function. In other words, the problem is fixed-parameter tractable when parameterized by the size of the excluded minor. The underlying approach is based on decomposing the graph in a canonical way into unbreakable (intuitively, well-connected) parts, which essentially provides a reduction to the case where the given H-minor-free graph is itself unbreakable. This is complemented by an analysis of unbreakable H-minor-free graphs, which reveals that every such graph can be canonically decomposed into a part that admits few automorphisms and a part that has bounded treewidth. This paper appeared at STOC 2022.29

We also mention another result in brief. In the MIN k-CUT problem, the input is an edge-weighted graph G and an integer k, and the task is to partition the vertex set into k non-empty parts such that the total weight of the edges with endpoints in different parts is minimized. When k is part of the input, the problem is NP-complete and hard to approximate within any factor less than 2. Lately, the problem has received significant attention from the perspective of parameterized approximation, and several algorithms have been presented at the top venues of theoretical computer science. Lokshtanov, Saurabh, and Surianarayanan27 give a parameterized approximation algorithm with the best possible approximation guarantee and the best possible running-time dependence on that guarantee (up to constants in the exponent, assuming the Exponential Time Hypothesis (ETH)). In particular, for every ε > 0, the algorithm obtains a (1 + ε)-approximate solution in time (k/ε)^{O(k)} · n^{O(1)}. The paper appeared at FOCS 2020.
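
As a reference point for the problem definition (and nothing more), the following brute-force sketch, our own illustration, tries every assignment of vertices to k parts; it is exponential in n, in contrast with the (k/ε)^{O(k)} · n^{O(1)} approximation scheme just described.

    from itertools import product

    def min_k_cut(n, weighted_edges, k):
        """Exact MIN k-CUT by exhaustive search over all k-part labelings."""
        best = float('inf')
        for labels in product(range(k), repeat=n):       # assign each vertex a part
            if len(set(labels)) < k:
                continue                                 # all k parts must be non-empty
            cut = sum(w for u, v, w in weighted_edges if labels[u] != labels[v])
            best = min(best, cut)
        return best

    # Two triangles of heavy edges joined by a single light edge: the best 2-cut removes it.
    edges = [(0, 1, 5), (1, 2, 5), (0, 2, 5), (3, 4, 5), (4, 5, 5), (3, 5, 5), (2, 3, 1)]
    print(min_k_cut(6, edges, 2))   # 1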

    1. Abdulla, P.A., Atig, M.F., Bouajjani, A., Kumar, K.N. and Saivasan, P. Deciding reachability under persistent x86-TSO. Proc. ACM Program. Lang. 5 (2021), 1–32.

    2. Babai, L. Graph isomorphism in quasipolynomial time. In Proceedings of the 48th Annual ACM SIGACT Symp. Theory of Computing, (2016), 684–697.

    3. Babai, L. and Luks, E.M. Canonical labeling of graphs. STOC, 1983, 171–183.

    4. Bhandari, S. and Chakraborty, S. Improved bounds for perfect sampling of k-colorings in graphs. In Proceedings of the 52nd Annual ACM SIGACT Symp. Theory of Computing (Chicago, IL, USA, June 22–26, 2020), 631–642.

    5. Bodlaender, H.L. Polynomial algorithms for Graph Isomorphism and Chromatic Index on partial k-trees. J. Algorithms 11, 4 (1990), 631–643.

    6. Chattopadhyay, A., Datta, R. and Mukhopadhyay, P. Lower bounds for monotone arithmetic circuits via communication complexity. In Proceedings of the 53rd Annual ACM SIGACT Symp. Theory of Computing, (Virtual Event, Italy, June 21–25, 2021), 786–799.

    7. Dartois, L., Govind, G.R. and Krishna, S.N. Efficient construction of reversible transducers from regular transducer expressions. LICS 2022.

    8. Dave, V., Gastin, P. and Krishna, S.N. Regular transducer expressions for regular transformations. In Proceedings of the 33rd Annual ACM/IEEE Symp. Logic in Computer Science, (Oxford, U.K., July 09–12, 2018) 315–324.

    9. Dartois, L., Govind, G.R. and Krishna, S.N. SD-regular transducer expressions for aperiodic transformations. In Proceedings of the 36th Annual ACM/IEEE Symp. Logic in Computer Science, (Rome, Italy, June 29 – July 2, 2021), 1–13.

    10. Filotti, I.S. and Mayer, J.N. A polynomial-time algorithm for determining the isomorphism of graphs of fixed genus. STOC, (1980), 236–243.

    11. Gálvez, W. et al. A 3-approximation algorithm for maximum independent set of rectangles. In Proceedings of the 2022 ACM SIAM Symp. Discrete Algorithms, (Virtual Conf., Alexandria, VA, USA, Jan. 9–12, 2022), 894–905.

    12. Grohe, M., Kawarabayashi, K., Marx, D., and Wollan, P. Finding topological subgraphs is fixed-parameter tractable. STOC, 2011, 479–488.

    13. Grohe, M., Kawarabayashi, K., and Reed, B.A. A simple algorithm for the graph minor decomposition—logic meets structural graph theory. In Proceedings of the 24th Annual ACM-SIAM Symp. on Discrete Algorithms, 2013, 414–431.

    14. Grohe, M. and Marx, D. Structure theorem and isomorphism test for graphs with excluded topological subgraphs. STOC, 2012, 173–192.

    15. Grohe, M. and Neuen, D. Recent advances on the graph isomorphism problem. CoRR, 2020; abs/2011.01366.

    16. Grohe, M. and Neuen, D. Isomorphism, canonization, and definability for graphs of bounded rank width. Commun. ACM 64, 5 (May 2021), 98–105.

    17. Grohe, M. and Neuen, D. and Schweitzer, P. A faster isomorphism test for graphs of small degree. In Proceedings of the 59th IEEE Annual Symp. Foundations of Computer Science, 2018, 89–100.

    18. Grohe, M. and Neuen, D. Schweitzer, P. and Wiebking, D. An improved isomorphism test for bounded-tree-width graphs. ACM Trans. Algorithms 16, 3 (2020), 34:1–34:31.

    19. Grohe, M. and Schweitzer, P. Isomorphism testing for graphs of bounded rank width. In Proceedings of the IEEE 56th Annual Symp. Foundations of Computer Science, (2015), 1010–1029.

    20. Grohe, M., Wiebking, D., and Neuen, D. Isomorphism testing for graphs excluding small minors. In Proceedings of the 61st IEEE Annual Symp. Foundations of Computer Science, 2020, 625–636.

    21. Gupta, A., Kumar, A. and Panigrahi, D. Caching with time windows. In Proceedings of the 52nd Annual ACM SIGACT Symp. Theory of Computing, (Chicago, IL, USA, June 22–26, 2020), 1125–1138.

    22. Hopcroft, J.E. and Tarjan, R.E. Isomorphism of planar graphs. Complexity of Computer Computations, (1972), 131–152.

    23. Hopcroft, J.E. and Tarjan, R.E. A v log v algorithm for isomorphism of triconnected planar graphs. J. Comput. Syst. Sci. 7, 3 (1973), 323–331.

    24. Hopcroft, J.E and Wong, J.K. Linear time algorithm for isomorphism of planar graphs. STOC, 1974, 172–184.

    25. Kawarabayashi, K. Graph isomorphism for bounded genus graphs in linear time. CoRR, 2015; abs/1511.02460.

    26. Limaye, N., Srinivasan, S. and Tavenas, S. Superpolynomial lower bounds against low-depth algebraic circuits. In Proceedings of the 62nd IEEE Annual Symp. Foundations of Computer Science, (Denver, CO, USA, Feb. 7–10, 2022), 804–814.

    27. Lokshtanov, D., Saurabh, S. and Surianarayanan, V. A parameterized approximation scheme for min k-cut. In Proceedings of the 61st IEEE Annual Symp. Foundations of Computer Science, (Durham, NC, USA, Nov. 16–19, 2020), 798–809.

    28. Lokshtanov, D., Pilipczuk, M., Pilipczuk, M. and Saurabh, S. Fixed- parameter tractable canonization and isomorphism test for graphs of bounded treewidth. SIAM J. Comput. 46, 1 (2017), 161–189.

    29. Lokshtanov, D., Pilipczuk, M., Pilipczuk, M. and Saurabh, S. Fixed- parameter tractability of graph isomorphism in graphs with an excluded minor. In Proceedings of the 54th Annual ACM SIGACT Symp. Theory of Computing, 2022.

    30. Luks, E.M. Isomorphism of graphs of bounded valence can be tested in polynomial time. J. Comput. Syst. Sci. 25, 1 (1982), 42–65.

    31. Miller, G.L. Isomorphism testing for graphs of bounded genus. STOC, 1980, 225–235.

    32. Mitchell, J.S.B. Approximating maximum independent set for rectangles in the plane. In Proceedings of the 62nd IEEE Annual Symp. Foundations of Computer Science, (Denver, CO, USA, Feb. 7–10, 2022), 339–350.

    33. Neuen, D. Hypergraph isomorphism for groups with restricted composition factors. In 47th Intern. Colloquium on Automata, Languages, and Programming 168, 2020. Schloss Dagstuhl—Leibniz-Zentrum für Informatik, 88:1–88:19.

    34. Neuen, D. Isomorphism testing parameterized by genus and beyond. In Proceedings of the 29th Annual European Symp. Algorithms 204, 2021. Schloss Dagstuhl—Leibniz-Zentrum für Informatik, 72:1–72:18

    35. Ponomarenko, I.N. The isomorphism problem for classes of graphs closed under contraction. J. Soviet Mathematics 55, 2 (1991), 1621–1643.

    36. Weinberg, L. A simple and efficient algorithm for determining isomorphism of planar triply connected graphs. IEEE Trans. Circuit Theory 13 (1966), 142–148.

    37. Wiebking, D. Graph isomorphism in quasipolynomial time parameterized by treewidth. In 47th Intern. Colloquium on Automata, Languages, and Programming 168, 2020. Schloss Dagstuhl–Leibniz-Zentrum für Informatik, 103:1–103:16.
