
Hacking Nondeterminism with Induction and Coinduction

We introduce bisimulation up to congruence as a technique for proving language equivalence of nondeterministic finite automata. Exploiting this technique, we devise an optimization of the classic algorithm by Hopcroft and Karp.13 We compare our approach to the recently introduced antichain algorithms and we give concrete examples where we exponentially improve over antichains. Experimental results show significant improvements.

1. Introduction

Checking language equivalence of finite automata is a classic problem in computer science, with many applications in areas ranging from compilers to model checking.

Equivalence of deterministic finite automata (DFA) can be checked either via minimization12 or through Hopcroft and Karp’s algorithm,13 which exploits an instance of what is nowadays called a coinduction proof principle17, 20, 22: two states are equivalent if and only if there exists a bisimulation relating them. In order to check the equivalence of two given states, Hopcroft and Karp’s algorithm creates a relation containing them and tries to build a bisimulation by adding pairs of states to this relation: if it succeeds then the two states are equivalent, otherwise they are different.

On the one hand, minimization algorithms have the advantage of checking the equivalence of all states at once, while Hopcroft and Karp’s algorithm only checks a given pair of states. On the other hand, they have the disadvantage of needing the whole automaton from the beginning, while Hopcroft and Karp’s algorithm can be executed “on-the-fly,”8 on a lazy DFA whose transitions are computed on demand.

This difference is essential for our work and for other recently introduced algorithms based on antichains.1, 7, 25 Indeed, when starting from nondeterministic finite automata (NFA), determinization induces an exponential factor. In contrast, the algorithm we introduce in this work for checking equivalence of NFA (as well as those using antichains) usually does not build the whole deterministic automaton, but just a small part of it. We write “usually” because in a few cases the algorithm can still explore an exponential number of states.

Our algorithm is grounded in a simple observation about the DFA obtained by determinizing an NFA: for all sets X and Y of states of the original NFA, the union (written +) of the language recognized by X (written [X]) and the language recognized by Y ([Y]) is equal to the language recognized by the union of X and Y ([X + Y]). In symbols:

[X] + [Y] = [X + Y]        (1)

This fact leads us to introduce a sound and complete proof technique for language equivalence, namely bisimulation up to context, that exploits both induction (on the operator +) and coinduction: if a bisimulation R relates the set of states X1 with Y1 and X2 with Y2, then [X1] = [Y1] and [X2] = [Y2] and, by Equation (1), we can immediately conclude that X1 + X2 and Y1 + Y2 are language equivalent as well. Intuitively, bisimulations up to context are bisimulations which do not need to relate X1 + X2 with Y1 + Y2 when X1 is already related with Y1 and X2 with Y2.

To illustrate this idea, let us check the equivalence of states x and u in the following NFA. (Final states are overlined, labeled edges represent transitions.)

[Diagram: an NFA with states x, y, z and u, v, w]

The determinized automaton is depicted below.

[Diagram: the determinized automaton, whose states are sets of states of the NFA]

Each state is a set of states of the NFA. Final states are overlined: they contain at least one final state of the NFA. The numbered lines show a relation that is a bisimulation relating x and u. In fact, this is precisely the relation built by Hopcroft and Karp’s algorithm (the numbers express the order in which pairs are added).

The dashed lines (numbered by 1, 2, 3) form a smaller relation which is not a bisimulation, but a bisimulation up to context: the equivalence of {x, y} and {u, v, w} is deduced from the fact that {x} is related with {u} and {y} with {v, w}, without the need to further explore the automaton.

Bisimulations up-to, and in particular bisimulations up to context, have been introduced in the setting of concurrency theory17, 21 as a proof technique for bisimilarity of CCS or π-calculus processes. As far as we know, they have never been used for proving language equivalence of NFA.

Among these techniques one should also mention bisimulation up to equivalence, which, as we show in this paper, is implicitly used in Hopcroft and Karp’s original algorithm. This technique can be explained by noting that not all bisimulations are equivalence relations: it might be the case that a bisimulation relates X with Y and Y with Z, but not X with Z. However, since [X] = [Y] and [Y] = [Z], we can immediately conclude that X and Z recognize the same language. Analogously to bisimulations up to context, a bisimulation up to equivalence does not need to relate X with Z when they are both related with some Y.

The techniques of up-to equivalence and up-to context can be combined, resulting in a powerful proof technique which we call bisimulation up to congruence. Our algorithm is in fact just an extension of Hopcroft and Karp’s algorithm that attempts to build a bisimulation up to congruence instead of a bisimulation up to equivalence. An important property of reasoning up to congruence is that we do not need to build the whole deterministic automaton. For instance, in the above NFA, the algorithm stops after relating z with u + w and does not build the remaining states. Despite its use of up-to equivalence, this is not the case with Hopcroft and Karp’s algorithm, where all accessible subsets of the deterministic automaton have to be visited at least once.

The ability to visit only a small portion of the determinized automaton is also the key feature of the antichain algorithm25 and of its optimization exploiting similarity.1, 7 These two algorithms are designed to check language inclusion rather than equivalence and, for this reason, they do not exploit equational reasoning. As a consequence, the antichain algorithm usually needs to explore more states than ours. Moreover, we show how to integrate the optimization proposed in Abdulla et al.1 and Doyen and Raskin7 into our setting, resulting in an even more efficient algorithm.

*  Outline

Section 2 recalls Hopcroft and Karp’s algorithm for DFA, showing that it implicitly exploits bisimulation up to equivalence. Section 3 describes the novel algorithm, based on bisimulations up to congruence. We compare this algorithm with the antichain one in Section 4.

2. Deterministic Automata

A deterministic finite automaton (DFA) over the alphabet A is a triple (S, o, t), where S is a finite set of states, o: S → 2 is the output function, which determines whether a state x ∈ S is final (o(x) = 1) or not (o(x) = 0), and t: S → S^A is the transition function, which returns, for each state x and each letter a ∈ A, the next state ta(x). Any DFA induces a function [·] mapping states to formal languages (P(A*)), defined by [x](ε) = o(x) for the empty word and [x](aw) = [ta(x)](w) otherwise. For a state x, [x] is called the language accepted by x.
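
These definitions translate directly into code. Below is a minimal OCaml sketch (OCaml being the language of our implementation4), assuming integer states and chars as letters; the names dfa and accepts are ours, chosen for this presentation:

```ocaml
(* A DFA (S, o, t) with states as integers and letters as chars. *)
type state = int

type dfa = {
  output : state -> bool;          (* o : S -> 2, is a state final?  *)
  trans  : state -> char -> state; (* t : S -> S^A, in curried form  *)
}

(* The language map [.]: does state x accept the word w? *)
let rec accepts (d : dfa) (x : state) (w : char list) : bool =
  match w with
  | [] -> d.output x                      (* [x](epsilon) = o(x)     *)
  | a :: w' -> accepts d (d.trans x a) w' (* [x](aw) = [t_a(x)](w)   *)
```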

Throughout this paper, we consider a fixed automaton (S, o, t) and study the following problem: given two states x1, x2 in S, is it the case that they are language equivalent, that is, [x1] = [x2]? This problem generalizes the familiar problem of checking whether two automata accept the same language: just take the union of the two automata as the automaton (S, o, t), and determine whether their respective starting states are language equivalent.

*  2.1. Language equivalence via coinduction

We first define bisimulation. We make explicit the underlying notion of progression, which we need in the sequel.

DEFINITION 1 (PROGRESSION, BISIMULATION). Given two relations R, R′ ⊆ S² on states, R progresses to R′, denoted R ⇝ R′, if whenever x R y then

  1. o(x) = o(y) and
  2. for all a ∈ A, ta(x) R′ ta(y).

A bisimulation is a relation R such that R ⇝ R.
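
Definition 1 can be tested directly on a finite relation represented as a list of pairs; a small sketch, reusing the dfa type above (R is a bisimulation when it progresses to itself):

```ocaml
(* Does the relation r progress to itself, i.e., is r a bisimulation? *)
let is_bisimulation (d : dfa) (alphabet : char list)
    (r : (state * state) list) : bool =
  List.for_all
    (fun (x, y) ->
       d.output x = d.output y  (* clause 1: same output *)
       && List.for_all          (* clause 2: all successor pairs stay in r *)
            (fun a -> List.mem (d.trans x a, d.trans y a) r)
            alphabet)
    r
```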

As expected, bisimulation is a sound and complete proof technique for checking language equivalence of DFA:

PROPOSITION 1 (COINDUCTION). Two states are language equivalent iff there exists a bisimulation that relates them.

*  2.2. Naive algorithm

Figure 1 shows a naive version of Hopcroft and Karp’s algorithm for checking language equivalence of the states x and y of a deterministic finite automaton (S, o, t). Starting from x and y, the algorithm builds a relation R that, in case of success, is a bisimulation.
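
Since Figure 1 is not reproduced here, the following OCaml sketch mirrors its structure; the first-in-first-out treatment of todo is one possible choice among others (see Section 3.2):

```ocaml
(* Naive(x, y): try to complete {(x, y)} into a bisimulation. *)
let naive (d : dfa) (alphabet : char list) (x : state) (y : state) : bool =
  let rec loop r todo =
    match todo with
    | [] -> true                                    (* todo empty: r is a bisimulation *)
    | (x', y') :: rest ->
      if List.mem (x', y') r then loop r rest       (* step 3.2: pair already known *)
      else if d.output x' <> d.output y' then false (* a counterexample word exists *)
      else
        let succ = List.map (fun a -> (d.trans x' a, d.trans y' a)) alphabet in
        loop ((x', y') :: r) (rest @ succ)          (* record the pair, schedule successors *)
  in
  loop [] [ (x, y) ]
```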

PROPOSITION 2. For all x, y ∈ S, x ~ y iff Naive(x, y), where x ~ y denotes language equivalence, [x] = [y].

PROOF. We first observe that if Naive(x, y) returns true, then the relation R that is built before reaching step 4 is a bisimulation. Indeed, the following property is an invariant for the loop corresponding to step 3:

R ⇝ R ∪ todo

Since todo is empty at step 4, we have R ⇝ R, that is, R is a bisimulation. By Proposition 1, x ~ y. On the other hand, Naive(x, y) returns false as soon as it finds a word which is accepted by one state and not by the other.

For example, consider the DFA with input alphabet A = {a} in the left-hand side of Figure 2, and suppose we want to check that x and u are language equivalent.

During the initialization, (x, u) is inserted in todo. At the first iteration, since o(x) = 0 = o(u), (x, u) is inserted in R and (y, v) in todo. At the second iteration, since o(y) = 1 = o(v), (y, v) is inserted in R and (z, w) in todo. At the third iteration, since o(z) = 0 = o(w), (z, w) is inserted in R and (y, v) in todo. At the fourth iteration, since (y, v) is already in R, the algorithm does nothing. Since there are no more pairs to check in todo, the relation R is a bisimulation and the algorithm terminates returning true.

These iterations are concisely described by the numbered dashed lines in Figure 2. The line i means that the connected pair is inserted in R at iteration i. (In the sequel, when enumerating iterations, we ignore those where a pair from todo is already in R so that there is nothing to do.)

In the previous example, todo always contains at most one pair of states but, in general, it may contain several of them. We do not specify here how to choose the pair to extract in step 3.1; we discuss this point in Section 3.2.

*  2.3. Hopcroft and Karp’s algorithm

The naive algorithm is quadratic: a new pair is added to R at each nontrivial iteration, and there are only n² such pairs, where n = |S| is the number of states of the DFA. To make this algorithm (almost) linear, Hopcroft and Karp actually record a set of equivalence classes rather than a set of visited pairs. As a consequence, their algorithm may stop earlier, when it encounters a pair of states that is not already in R but belongs to its reflexive, symmetric, and transitive closure. For instance, in the right-hand side example from Figure 2, we can stop when we encounter the dotted pair (y, w), since these two states already belong to the same equivalence class according to the four previous pairs.

With this optimization, the produced relation R contains at most n pairs. Formally, ignoring the concrete data structure used to store equivalence classes, Hopcroft and Karp’s algorithm consists in replacing step 3.2 in Figure 1 with

3.2 if (x′, y′) ∈ e(R), then skip;

where e: P(S²) → P(S²) is the function mapping each relation R ⊆ S² to its symmetric, reflexive, and transitive closure. We refer to this algorithm as HK.
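
A rough sketch of the test (x′, y′) ∈ e(R), storing the classes of R in a simple union-find structure; Hopcroft and Karp's original, almost-linear variant additionally balances the classes being merged:

```ocaml
(* Union-find over states: class representatives have no parent. *)
let parent : (state, state) Hashtbl.t = Hashtbl.create 16

let rec find (x : state) : state =
  match Hashtbl.find_opt parent x with
  | None -> x
  | Some p ->
    let r = find p in
    Hashtbl.replace parent x r; (* path compression *)
    r

(* step 3.2 of HK: (x', y') is in e(R) iff both have the same representative *)
let in_e (x' : state) (y' : state) : bool = find x' = find y'

(* when a pair is added to R, merge the two equivalence classes *)
let union (x : state) (y : state) : unit =
  let rx, ry = find x, find y in
  if rx <> ry then Hashtbl.replace parent rx ry
```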

*  2.4. Bisimulations up-to

We now show that the optimization used by Hopcroft and Karp corresponds to exploiting an “up-to technique.”

DEFINITION 2 (BISIMULATION UP-TO). Let f: P(S²) → P(S²) be a function on relations. A relation R is a bisimulation up to f if R ⇝ f(R), that is, if x R y, then

  1. o(x) = o(y) and
  2. for all a ∈ A, ta(x) f(R) ta(y).

With this definition, Hopcroft and Karp’s algorithm just consists in trying to build a bisimulation up to e. To prove the correctness of the algorithm, it suffices to show that any bisimulation up to e is contained in a bisimulation. To this end, we use the notion of compatible function19, 21:

DEFINITION 3 (COMPATIBLE FUNCTION). A function f: P(S²) → P(S²) is compatible if it is monotone and it preserves progressions: for all R, R′ ⊆ S²,

R ⇝ R′ entails f(R) ⇝ f(R′).

PROPOSITION 3. Let f be a compatible function. Any bisimulation up to f is contained in a bisimulation.

We could prove directly that e is a compatible function; we, however, take a detour to ease our correctness proof for the algorithm we propose in Section 3.

LEMMA 1. The following functions are compatible:

  • the identity function id and the constant reflexive function r: R ↦ {(x, x) | x ∈ S};
  • the converse function s: R ↦ {(y, x) | x R y};
  • the squaring function t: R ↦ {(x, z) | x R y and y R z for some y};
  • composition f ∘ g, union f ∪ g, and iteration fω of compatible functions f and g.

Intuitively, given a relation R, (s ∪ id)(R) is its symmetric closure, (r ∪ s ∪ id)(R) is its reflexive and symmetric closure, and (r ∪ s ∪ t ∪ id)ω(R) is its symmetric, reflexive, and transitive closure: e = (r ∪ s ∪ t ∪ id)ω. Another way to understand this decomposition of e is to recall that e(R) can be defined inductively by the following rules:

(id) if x R y, then x e(R) y;    (r) x e(R) x;    (s) if x e(R) y, then y e(R) x;    (t) if x e(R) y and y e(R) z, then x e(R) z.

THEOREM 1. Any bisimulation up to e is contained in a bisimulation.

COROLLARY 1. For all x, y ∈ S, x ~ y iff HK(x, y).

PROOF. Same proof as for Proposition 2, using the invariant R ⇝ e(R) ∪ todo for the loop. We deduce that R is a bisimulation up to e after the loop. We conclude with Theorem 1 and Proposition 1.

Returning to the right-hand side example from Figure 2, Hopcroft and Karp’s algorithm constructs a relation RHK consisting of the four numbered pairs in the figure. This relation is not a bisimulation, but a bisimulation up to e: it contains the pair (x, u), whose b-transitions lead to (y, w), which is not in RHK but in its equivalence closure, e(RHK).

3. Nondeterministic Automata

We now move from DFA to nondeterministic automata (NFA). An NFA over the alphabet A is a triple (S, o, t), where S is a finite set of states, o: S → 2 is the output function, and t: S → P(S)^A is the transition relation: it assigns to each state x ∈ S and letter a ∈ A a set of possible successors.

The powerset construction transforms any NFA (S, o, t) into the DFA (P(S), o#, t#), where o#: P(S) → 2 and t#: P(S) → P(S)^A are defined for all X ∈ P(S) and a ∈ A as follows:

o#(X) = Σx∈X o(x)        ta#(X) = Σx∈X ta(x)

(Here we use the symbol “+” to denote both set-theoretic union and Boolean or; similarly, we use 0 to denote both the empty set and the Boolean “false.”) Observe that in (P(S), o#, t#), the states form a semilattice (P(S), +, 0), and o# and t# are, by definition, semilattice homomorphisms. These properties are fundamental for the up-to technique we are going to introduce. In order to stress the difference with generic DFA, which usually do not carry this structure, we use the following definition.

DEFINITION 4. A determinized NFA is a DFA (P(S), o#, t#) obtained via the powerset construction of some NFA (S, o, t).

Hereafter, we use a new notation for representing states of determinized NFA: in place of the singleton {x}, we just write x and, in place of {x1,…, xn}, we write x1 + … + xn. Consider for instance the NFA (S, o, t) depicted below (left) and part of the determinized NFA (P(S), o#, t#) (right).

[Diagram: an NFA (left) and part of its determinized NFA (right)]

In the determinized NFA, x makes a single a-transition into y + z. This state is final: o#(y + z) = o#(y) + o#(z) = o(y) + o(z) = 1 + 0 = 1; it makes an a-transition into ta#(y + z) = ta#(y) + ta#(z) = ta(y) + ta(z) = x + y.
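
These computations are easily performed on demand; a sketch, in the style of the DFA sketches from Section 2:

```ocaml
(* Sets of states, and an NFA (S, o, t) with set-valued transitions. *)
module SS = Set.Make (Int)

type nfa = {
  out  : state -> bool;         (* o : S -> 2       *)
  next : state -> char -> SS.t; (* t : S -> P(S)^A  *)
}

(* o#(X): a set is final iff it contains a final state *)
let o_sharp (n : nfa) (xs : SS.t) : bool = SS.exists n.out xs

(* t#_a(X): the union of the successors of the elements of X *)
let t_sharp (n : nfa) (xs : SS.t) (a : char) : SS.t =
  SS.fold (fun x acc -> SS.union (n.next x a) acc) xs SS.empty
```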

Algorithms for NFA can be obtained by computing the determinized NFA on-the-fly8: starting from the algorithms for DFA (Figure 1), it suffices to work with sets of states and to inline the powerset construction. The corresponding code is given in Figure 3. The naive algorithm (Naive) does not use any up-to technique; Hopcroft and Karp’s algorithm (HK) reasons up to equivalence in step 3.2.

*  3.1. Bisimulation up to congruence

The semilattice structure (P(S), +, 0) carried by determinized NFA makes it possible to introduce a new up-to technique, which is not available with plain DFA: up to congruence. This technique relies on the following function.

DEFINITION 5 (CONGRUENCE CLOSURE). Let u: P(P(S)²) → P(P(S)²) be the function on relations on sets of states defined for all R ⊆ P(S)² as:

u(R) = {(X1 + X2, Y1 + Y2) | X1 R Y1 and X2 R Y2}

The function c = (r ∪ s ∪ t ∪ u ∪ id)ω is called the congruence closure function.

Intuitively, c(R) is the smallest equivalence relation which is closed with respect to + and which includes R. It could alternatively be defined inductively using the rules r, s, t, and id from the previous section, and the following one:

(u) if X1 c(R) Y1 and X2 c(R) Y2, then X1 + X2 c(R) Y1 + Y2.

DEFINITION 6 (BISIMULATION UP TO CONGRUENCE). A bisimulation up to congruence for an NFA (S, o, t) is a relation R ⊆ P(S)² such that whenever X R Y then

  1. o#(X) = o#(Y) and
  2. for all a ∈ A, ta#(X) c(R) ta#(Y).

LEMMA 2. The function u is compatible.

THEOREM 2. Any bisimulation up to congruence is contained in a bisimulation.

We already gave in the Introduction an example of bisimulation up to context, which is a particular case of bisimulation up to congruence (up to context means up to (r ∪ u ∪ id)ω, without closing under s and t).

Figure 4 shows a more involved example illustrating all the ingredients of the congruence closure function (c). The relation R expressed by the dashed numbered lines (formally, R = {(x, u), (y + z, u)}) is neither a bisimulation nor a bisimulation up to e, since ta#(y + z) = x + y and ta#(u) = u, but (x + y, u) ∉ e(R). However, R is a bisimulation up to congruence. Indeed, we have (x + y, u) ∈ c(R):

x + y  c(R)  u + y                 (since x R u)
       c(R)  (y + z) + y = y + z   (since u c(R) y + z, by (y + z) R u)
       c(R)  u                     (since (y + z) R u)

In contrast, we need four pairs to get a bisimulation up to equivalence containing (x, u): this is the relation depicted with both dashed and dotted lines in Figure 4.

Note that we can deduce many other equations from R; in fact, c(R) partitions the sets of states into {0}, {y}, {z}, and a single class containing the 13 remaining subsets, among which x, u, x + y, and x + z.

*  3.2. Optimized algorithm for NFA

The optimized algorithm, called HKC in the sequel, relies on up to congruence: step 3.2 from Figure 3 becomes

3.2 if (X′, Y′) ∈ c(R ∪ todo), then skip;

Observe that we use c(R ∪ todo) rather than c(R): this allows us to skip more pairs, and it is safe since all pairs in todo will eventually be processed.

COROLLARY 2. For all X, Y ∈ P(S), X ~ Y iff HKC(X, Y).

PROOF. Same proof as for Proposition 2, using the invariant R ⇝ c(R ∪ todo) for the loop. We deduce that R is a bisimulation up to congruence after the loop. We conclude with Theorem 2 and Proposition 1.
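
A sketch of the resulting loop, reusing the NFA sketch above; the test in_congruence is the rewriting-based check described in Section 3.3, where a corresponding sketch is given:

```ocaml
(* HKC(X, Y): like Naive on sets of states, but step 3.2 tests
   membership in c(R ∪ todo) instead of membership in R. *)
let hkc (n : nfa) (alphabet : char list) (x : SS.t) (y : SS.t) : bool =
  let rec loop r todo =
    match todo with
    | [] -> true
    | (x', y') :: rest ->
      if in_congruence (r @ rest) x' y'       (* step 3.2: c(R ∪ todo) *)
      then loop r rest
      else if o_sharp n x' <> o_sharp n y' then false
      else
        let succ =
          List.map (fun a -> (t_sharp n x' a, t_sharp n y' a)) alphabet
        in
        loop ((x', y') :: r) (rest @ succ)    (* FIFO: breadth-first *)
  in
  loop [] [ (x, y) ]
```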

The most important point about these three algorithms is that they compute the states of the determinized NFA lazily. This means that only accessible states need to be computed, which is of practical importance since the determinized NFA can be exponentially large. In case of a negative answer, the three algorithms stop even before all accessible states have been explored; in case of a positive answer, when a bisimulation (possibly up-to) is found, how much of the automaton is explored depends on the algorithm:

  • With Naive, all accessible states need to be visited, by definition of bisimulation.
  • With HK, the only case where some accessible states can be avoided is when a pair (X, X) is encountered: the algorithm skips this pair so that the successors of X are not necessarily computed (this situation never happens when starting with disjoint automata). In the other cases where a pair (X, Y) is skipped, X and Y are necessarily already related with some other states in R, so that their successors will eventually be explored.
  • With HKC, accessible states are often skipped. For a simple example, let us execute HKC on the NFA from Figure 4. After two iterations, R = {(x, u), (y + z, u)}. Since x + y c(R) u, the algorithm stops without building the states x + y and x + y + z. Similarly, in the example from the Introduction section, HKC does not construct the four states corresponding to pairs 4, 5, and 6.

This ability of HKC to ignore parts of the determinized NFA can bring an exponential speedup. For an example, consider the family of NFA in Figure 5, where n is an arbitrary natural number. Taken together, the states x and y are equivalent to z: they recognize the language (a + b)*(a + b)^(n+1). Alone, x recognizes the language (a + b)*a(a + b)^n, which is known to have a minimal DFA with 2^n states.

Therefore, checking x + y ~ z via minimization (as in Hopcroft12) requires exponential time, and the same holds for Naive and HK since all accessible states must be visited. This is not the case with HKC, which requires only polynomial time in this example. Indeed, HKC(x + y, z) builds the relation

R′ = {(x + y, z)} ∪ {(x + x1 + … + xi + y, Zi) | 1 ≤ i ≤ n} ∪ {(x + Yi, Zi) | 1 ≤ i ≤ n}

where Yi = y + y1 + … + yi and Zi = z + z1 + … + zi. R′ contains only 2n + 1 pairs and is a bisimulation up to congruence. To see this, consider the pair (x + y + x1 + y2, Z2) obtained from (x + y, z) after reading the word ba. Although this pair does not belong to R′, it belongs to its congruence closure:

x + y + x1 + y2  c(R′)  x + y + y1 + y2 = x + Y2   (since x + x1 + y c(R′) x + Y1, by transitivity via the two pairs at Z1)
                 c(R′)  Z2                          (since (x + Y2) R′ Z2)

REMARK 1. In the above derivation, the use of transitivity is crucial: R′ is a bisimulation up to congruence, but not a bisimulation up to context. In fact, there exists no bisimulation up to context of linear size proving x + y ~ z.

We now discuss the exploration strategy, that is, how to choose the pair to extract from the set todo in step 3.1. When looking for a counterexample, this choice has a large influence: a good heuristic can lead directly to a counterexample, while a bad one might require exploring exponentially many pairs first. In contrast, the strategy does not matter much when looking for an equivalence proof (when the algorithm eventually returns true). Actually, one can prove that the number of steps performed by Naive and HK in such a case does not depend on the strategy. This is not the case with HKC: the strategy can induce some differences. However, we experimentally observed that breadth-first and depth-first strategies usually behave similarly on random automata. This behavior is due to the fact that we check congruence w.r.t. R ∪ todo rather than just R (step 3.2): with this optimization, the example above is handled in polynomial time whatever the chosen strategy; without it, the example requires exponential time with a depth-first strategy.

*  3.3. Computing the congruence closure

For the optimized algorithm to be effective, we need a way to check whether some pairs belong to the congruence closure of a given relation (step 3.2). We present a simple solution based on set rewriting; the key idea is to look at each pair (X, Y) in a relation R as a pair of rewriting rules:

X ⤳R X + Y        Y ⤳R X + Y

which can be used to compute normal forms for sets of states. Indeed, by idempotence, X R Y entails X c(R) X + Y.

DEFINITION 7. Let R ⊆ P(S)² be a relation on sets of states. We define ⤳R ⊆ P(S)² as the smallest irreflexive relation that satisfies the following rules:

if X R Y, then X ⤳R X + Y;    if X R Y, then Y ⤳R X + Y;    if X ⤳R X′, then X + Z ⤳R X′ + Z.

LEMMA 3. For all relations R, ⤳R is confluent and normalizing.

In the sequel, we denote by X↓R the normal form of a set X w.r.t. ⤳R. Intuitively, the normal form of a set is the largest set of its equivalence class. Recalling the example from Figure 4, the common normal form of x + y and u can be computed as follows (R is the relation {(x, u), (y + z, u)}):

x + y ⤳R x + y + u ⤳R x + y + z + u        u ⤳R x + u ⤳R x + y + z + u

THEOREM 3. For all relations R and for all X, Y ∈ P(S), we have X↓R = Y↓R iff (X, Y) ∈ c(R).

We actually have X↓R = Y↓R iff X ⊆ Y↓R and Y ⊆ X↓R, so that the normal forms of X and Y do not always need to be fully computed in practice. Still, the worst-case complexity of this subalgorithm is quadratic in the size of the relation R (counting the number of operations on sets: unions and inclusion tests).
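
A sketch of this rewriting-based check, with relations represented as lists of pairs of sets; a pair (U, V) allows any set containing U to absorb V, and conversely, and we saturate until a fixpoint is reached:

```ocaml
(* Normal form of a set w.r.t. the rewriting rules induced by r. *)
let normal_form (r : (SS.t * SS.t) list) (x : SS.t) : SS.t =
  let step z =
    List.fold_left
      (fun acc (u, v) ->
         let acc = if SS.subset u acc then SS.union v acc else acc in
         if SS.subset v acc then SS.union u acc else acc)
      z r
  in
  let rec fix z =
    let z' = step z in
    if SS.equal z z' then z else fix z'
  in
  fix x

(* Theorem 3: (X, Y) is in c(R) iff the two normal forms coincide. *)
let in_congruence (r : (SS.t * SS.t) list) (x : SS.t) (y : SS.t) : bool =
  SS.equal (normal_form r x) (normal_form r y)
```

For instance, encoding the states of Figure 4 as integers, the relation R = {(x, u), (y + z, u)} makes both x + y and u normalize to x + y + z + u, so in_congruence answers true on the pair (x + y, u), as computed above.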

Note that many algorithms were proposed in the literature to compute the congruence closure of a relation (see, e.g., Nelson and Oppen,18 Shostak,23 and Bachmair et al.2). However, they usually consider uninterpreted symbols or associative and commutative symbols, but not associative, commutative, and idempotent symbols, which is what we need here.

*  3.4. Using HKC for checking language inclusion

For NFA, language inclusion can be reduced to language equivalence: the semantics function [−] is a semilattice homomorphism, so that for all sets of states X, Y, [X + Y] = [Y] iff [X] + [Y] = [Y] iff [X] ⊆ [Y]. Therefore, it suffices to run HKC(X + Y, Y) to check the inclusion [X] ⊆ [Y].
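
In code, this reduction is a one-liner on top of the hkc sketch from Section 3.2:

```ocaml
(* [X] ⊆ [Y] iff X + Y ~ Y. *)
let incl (n : nfa) (alphabet : char list) (x : SS.t) (y : SS.t) : bool =
  hkc n alphabet (SS.union x y) y
```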

In such a situation, all pairs eventually manipulated by HKC have the shape (X′ + Y′, Y′) for some sets X′, Y′. Step 3.2 of HKC can thus be simplified. First, the pairs in the current relation only have to be used to rewrite from right to left. Second, the following lemma shows that we do not necessarily need to compute normal forms:

LEMMA 4. For all sets X, Y and for all relations R, we have X + Y c(R) Y iff X ⊆ Y↓R.

At this point, the reader might wonder whether checking the two inclusions separately is more convenient than checking the equivalence directly. This is not the case: checking the equivalence directly actually allows one to skip some pairs that cannot be skipped when reasoning by double inclusion. As an example, consider the DFA on the right of Figure 2. The relation computed by HKC(x, u) contains only four pairs (because the fifth one follows from transitivity). Instead, the relations built by HKC(x, x + u) and HKC(u + x, u) would both contain five pairs: transitivity cannot be used, since our relations are now oriented (from the oriented pairs relating y to v, z to v, and z to w, we cannot deduce a pair relating y to w). Figure 5 shows another example, where we get an exponential factor by checking the equivalence directly rather than through the two inclusions: transitivity, which is crucial to keep the relation computed by HKC(x + y, z) small (see Remark 1), cannot be used when checking the two inclusions separately.

In a sense, the behavior of the coinduction proof method here is similar to that of standard proofs by induction, where one often has to strengthen the induction predicate to get a (nicer) proof.

*  3.5. Exploiting similarity

Looking at the example in Figure 5, a natural idea would be to first quotient the automaton by graph isomorphism. By doing so, one would merge the states xi, yi, and zi, obtaining the following automaton, for which checking x + y ~ z is much easier.

[Diagram: the quotiented automaton]

As shown in Abdulla et al.1 and Doyen and Raskin7 for antichain algorithms, one can do better by exploiting any preorder contained in language inclusion. Hereafter, we show how this idea can be embedded in HKC, resulting in an even stronger algorithm.

For the sake of clarity, we fix the preorder to be similarity,17 which can be computed in quadratic time.10

DEFINITION 8 (SIMILARITY). Similarity is the largest relation ⪯ ⊆ S² on states such that x ⪯ y entails:

  1. o(x) ≤ o(y) and
  2. for all a ∈ A and x′ ∈ ta(x), there exists some y′ ∈ ta(y) such that x′ ⪯ y′.
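
For illustration, similarity can be computed by a naive greatest-fixpoint iteration, starting from all pairs satisfying clause 1 and removing pairs violating clause 2 until stabilization. This sketch is not the quadratic algorithm of Henzinger et al.10; it is only a direct reading of Definition 8:

```ocaml
(* Naive similarity: iteratively prune the relation until it is a simulation. *)
let similarity (n : nfa) (states : state list) (alphabet : char list)
  : (state * state) list =
  let simulated rel x y =
    List.for_all
      (fun a ->
         SS.for_all
           (fun x' -> SS.exists (fun y' -> List.mem (x', y') rel) (n.next y a))
           (n.next x a))
      alphabet
  in
  (* clause 1: keep only the pairs with o(x) <= o(y) *)
  let init =
    List.concat_map
      (fun x ->
         List.filter_map
           (fun y -> if (not (n.out x)) || n.out y then Some (x, y) else None)
           states)
      states
  in
  let rec fix rel =
    let rel' = List.filter (fun (x, y) -> simulated rel x y) rel in
    if List.length rel' = List.length rel then rel else fix rel'
  in
  fix init
```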

To exploit similarity pairs in HKC, it suffices to notice that for any similarity pair x ⪯ y, we have x + y ~ y. Let ⪯′ denote the relation {(x + y, y) | x ⪯ y}, let r′ denote the constant-to-⪯′ function, and let c′ = (r′ ∪ s ∪ t ∪ u ∪ id)ω. Accordingly, we call HKC' the algorithm obtained from HKC (Figure 3) by replacing the test (X, Y) ∈ c(R ∪ todo) with (X, Y) ∈ c′(R ∪ todo) in step 3.2. The latter test can be reduced to rewriting thanks to Theorem 3 and the following lemma.

LEMMA 5. For all relations R, c′(R) = c(R ∪ ⪯′).

THEOREM 4. Any bisimulation up to c′ is contained in a bisimulation.

COROLLARY 3. For all sets X, Y, X ~ Y iff HKC'(X, Y).

4. Antichain Algorithms

Even though the problem of deciding NFA equivalence is PSPACE-complete,16 neither HKC nor HKC' is in PSPACE: both keep track of the states they have explored in the determinized NFA, and there can be exponentially many such states. The same holds for HK, for the more recent antichain algorithm25 (called AC in the following), and for its optimization exploiting similarity1, 7 (AC').

The latter algorithms can be explained in terms of coinductive proof techniques: we establish in Bonchi and Pous4 that they actually construct bisimulations up to context, that is, bisimulations up to congruence for which one does not exploit symmetry and transitivity.

Theoretical comparison. We compared the various algorithms in detail in Bonchi and Pous.4 Their relationship is summarized in Figure 6, where an arrow X → Y means that (a) Y can explore exponentially fewer states than X and (b) Y can mimic X, that is, the coinductive proof technique underlying Y is at least as powerful as that of X.

In the general case, AC needs to explore many more states than HKC: the use of transitivity, which is missing in AC, allows HKC to drastically prune the exploration. For instance, to check x + y ~ z in Figure 5, HKC only needs a linear number of states (see Remark 1), while AC needs exponentially many. In contrast, in the special case where one checks the inclusion of disjoint automata, HKC and AC exhibit the same behavior: HKC cannot make use of transitivity in such a situation, as explained in Section 3.4. Things change when comparing HKC' and AC': even for checking inclusion of disjoint automata, AC' cannot always mimic HKC'. The use of similarity tends to virtually merge states, so that HKC' can use the up-to-transitivity technique, which AC' lacks.

Experimental comparison. The theoretical relationships drawn in Figure 6 are substantially confirmed by an empirical evaluation of the performance of the algorithms. Here, we only give a brief overview; see Bonchi and Pous4 for a complete description of those experiments.

We compared our OCaml implementation4 for HK, HKC, and HKC', and the libvata C++ library14 for AC and AC'. We use a breadth-first exploration strategy: we represent the set todo from Figure 3 as a FIFO queue. As mentioned at the end of Section 3.2, considering a depth-first strategy here does not alter the behavior of HKC in a noticeable way.

We performed experiments using both random automata and a set of automata arising from model-checking problems.

  • Random automata. We used Tabakov and Vardi’s model24 to generate 1000 random NFA with two letters and a given number of states. We executed all algorithms on these NFA, and we measured the number of processed pairs, that is, the number of required iterations (like HKC, AC is a loop inside which pairs are processed). We observe that HKC improves over AC by one order of magnitude, and AC improves over HK by two orders of magnitude. Using up-to similarity (HKC' and AC') does not improve much; in fact, similarity is almost the identity relation on such random automata. The corresponding distributions for HK, HKC, and AC are plotted in Figure 7, for automata with 100 states. Note that while HKC only improves by one order of magnitude over AC when considering the average case, it improves by several orders of magnitude when considering the worst cases.
  • Model-checking automata. Abdulla et al.1, 7 used automata sequences arising from regular model-checking experiments5 to compare their algorithm (AC') against AC. We reused these sequences to test HKC' against AC' in a concrete scenario: for all those sequences, we checked the inclusions of all consecutive pairs, in both directions. The timings are given in Table 1, where we report the median values (50%), the last deciles (90%), the last percentiles (99%), and the maximum values (100%). We distinguish between the experiments for which a counterexample was found and those for which the inclusion did hold. For HKC' and AC', we display the time required to compute similarity on a separate line: this preliminary step is shared by the two algorithms. As expected, HKC and AC roughly behave the same: we test inclusions of disjoint automata. HKC' is, however, considerably faster than AC': up-to transitivity can be exploited, thanks to the similarity pairs. Also note that of the 546 positive answers, 368 are obtained immediately by similarity.

5. Conclusion

Our implementation of HKC is available online,4 together with proofs mechanized in the Coq proof assistant and an interactive applet making it possible to test the presented algorithms online, on user-provided examples.

Several notions analogous to bisimulation up to congruence can be found in the literature. For instance, self-bisimulations6, 11 have been used to obtain decidability and complexity results about context-free processes. The main difference with bisimulation up to congruence is that self-bisimulations are proof techniques for bisimilarity rather than language equivalence. Other approaches that are independent of the underlying equivalence (be it bisimilarity or language equivalence) appear in Lenisa,15 Bartels,3 and Pous.19 These papers propose very general frameworks into which our up-to-congruence technique fits as a special case. However, to our knowledge, bisimulation up to congruence has never before been proposed as a technique for proving language equivalence of NFA.

We conclude with directions for future work.

Complexity. The presented algorithms, as well as those based on antichains, have exponential complexity in the worst case, while they behave rather well in practice. For instance, in Figure 7, one can notice that out of a thousand random automata, very few require exploring a large number of pairs. This suggests that an accurate analysis of the average complexity might be promising. An inherent difficulty comes from characterizing the average shape of determinized NFA.24 To avoid this problem, with HKC, we could instead focus on the properties of congruence relations. For instance, given a number of states, how long can a sequence of (incrementally independent) pairs of sets of states be before its congruence closure collapses into the full relation? (This number is an upper bound for the size of the relations produced by HKC.) One can find ad hoc examples where this number is exponential, but we suspect it to be rather small on average.

Model checking. The experiments summarized in Table 1 show the efficiency of our approach for regular model checking using automata on finite words. As in the case of antichains, our approach extends to automata on finite trees. We plan to implement such a generalization and link it with tools performing regular tree model-checking.

In order to tackle other model-checking problems, it would be useful to extend up-to techniques to automata on infinite words, or on trees. Unfortunately, the determinization of these automata (the so-called Safra construction) does not seem suitable for exploiting either antichains or up-to congruence. However, for some problems like LTL realizability9 that can be solved without prior determinization (the so-called Safraless approaches), antichains have been crucial in obtaining efficient procedures. We leave it as future work to explore whether up-to techniques could further improve such procedures.

Acknowledgments

This work was partially funded by the PiCoq (ANR-10-BLAN-0305) and PACE (ANR-12IS02001) projects.

Figures

F1 Figure 1. Naive algorithm for checking the equivalence of states x and y of a DFA (S, o, t). The code of HK(x, y) is obtained by replacing the test in step 3.2 with (x′, y′) ∈ e(R).

F2 Figure 2. Checking for DFA equivalence.

F3 Figure 3. On-the-fly naive algorithm for checking the equivalence of sets of states X and Y of an NFA (S, o, t). HK(X, Y) is obtained by replacing the test in step 3.2 with (X′, Y′) ∈ e(R), and HKC(X, Y) is obtained by replacing it with (X′, Y′) ∈ c(R ∪ todo).

F4 Figure 4. A bisimulation up to congruence.

F5 Figure 5. Family of examples where HKC exponentially improves over AC and HK; we have x + y ~ z.

F6 Figure 6. Relationships among the algorithms.

F7 Figure 7. Distributions of the number of processed pairs, for 1000 experiments with random NFA.

Tables

T1 Table 1. Timings, in seconds, for language inclusion of disjoint NFA generated from model checking.

References

    1. Abdulla, P.A., Chen, Y.F., Holík, L., Mayr, R., and Vojnar, T. When simulation meets antichains. In TACAS, J. Esparza and R. Majumdar, eds. Volume 6015 of Lecture Notes in Computer Science (2010). Springer, 158–174.

    2. Bachmair, L., Ramakrishnan, I.V., Tiwari, A., and Vigneron, L. Congruence closure modulo associativity and commutativity. In FroCoS, H. Kirchner and C. Ringeissen, eds. Volume 1794 of Lecture Notes in Computer Science (2000). Springer, 245–259.

    3. Bartels, F. Generalised coinduction. Math. Struct. Comp. Sci. 13, 2 (2003), 321–348.

    4. Bonchi, F. and Pous, D. Extended version of this abstract, with omitted proofs, and web appendix for this work. http://hal.inria.fr/hal-00639716/ and http://perso.ens-lyon.fr/damien.pous/hknt, 2012.

    5. Bouajjani, A., Habermehl, P., and Vojnar, T. Abstract regular model checking. In CAV, R. Alur and D. Peled, eds. Volume 3114 of Lecture Notes in Computer Science (2004). Springer.

    6. Caucal, D. Graphes canoniques de graphes algébriques. ITA 24 (1990), 339–352.

    7. Doyen, L. and Raskin, J.F. Antichain algorithms for finite automata. In TACAS, J. Esparza and R. Majumdar, eds. Volume 6015 of Lecture Notes in Computer Science (2010). Springer.

    8. Fernandez, J.C., Mounier, L., Jard, C., and Jéron, T. On-the-fly verification of finite transition systems. Formal Meth. Syst. Design 1, 2/3 (1992), 251–273.

    9. Filiot, E., Jin, N., and Raskin, J.F. An antichain algorithm for LTL realizability. In CAV, A. Bouajjani and O. Maler, eds. Volume 5643 of Lecture Notes in Computer Science (2009). Springer, 263–277.

    10. Henzinger, M.R., Henzinger, T.A., and Kopke, P.W. Computing simulations on finite and infinite graphs. In Proceedings of 36th Annual Symposium on Foundations of Computer Science (Milwaukee, WI, October 23–25, 1995). IEEE Computer Society Press.

    11. Hirshfeld, Y., Jerrum, M., and Moller, F. A polynomial algorithm for deciding bisimilarity of normed context-free processes. TCS 158, 1&2 (1996), 143–159.

    12. Hopcroft, J.E. An n log n algorithm for minimizing states in a finite automaton. In International Symposium of Theory of Machines and Computations. Academic Press, 1971, 189–196.

    13. Hopcroft, J.E. and Karp, R.M. A linear algorithm for testing equivalence of finite automata. Technical Report 114. Cornell University, December 1971.

    14. Lengál, O., Simácek, J., and Vojnar, T. Vata: A library for efficient manipulation of non-deterministic tree automata. In TACAS, C. Flanagan and B. König, eds. Volume 7214 of Lecture Notes in Computer Science (2012). Springer, 79–94.

    15. Lenisa, M. From set-theoretic coinduction to coalgebraic coinduction: Some results, some problems. ENTCS 19 (1999), 2–22.

    16. Meyer, A. and Stockmeyer, L.J. Word problems requiring exponential time. In STOC. ACM, 1973, 1–9.

    17. Milner, R. Communication and Concurrency. Prentice Hall, 1989.

    18. Nelson, G. and Oppen, D.C. Fast decision procedures based on congruence closure. J. ACM 27, 2 (1980), 356–364.

    19. Pous, D. Complete lattices and up-to techniques. In APLAS, Z. Shao, ed. Volume 4807 of Lecture Notes in Computer Science (2007). Springer, 351–366.

    20. Rutten, J. Automata and coinduction (an exercise in coalgebra). In CONCUR, D. Sangiorgi and R. de Simone, eds. Volume 1466 of Lecture Notes in Computer Science (1998). Springer, 194–218.

    21. Sangiorgi, D. On the bisimulation proof method. Math. Struct. Comp. Sci. 8 (1998), 447–479.

    22. Sangiorgi, D. Introduction to Bisimulation and Coinduction. Cambridge University Press, 2011.

    23. Shostak, R.E. Deciding combinations of theories. J. ACM 31, 1 (1984), 1–12.

    24. Tabakov, D. and Vardi, M. Experimental evaluation of classical automata constructions. In LPAR, G. Sutcliffe and A. Voronkov, eds. Volume 3835 of Lecture Notes in Computer Science (2005). Springer, 396–411.

    25. Wulf, M.D., Doyen, L., Henzinger, T.A., and Raskin, J.F. Antichains: A new algorithm for checking universality of finite automata. In CAV, T. Ball and R.B. Jones, eds. Volume 4144 of Lecture Notes in Computer Science (2006). Springer, 17–30.

    This article is an extended abstract; a full version of this paper is available in Proceedings of POPL, 2013, ACM.
