We introduce bisimulation up to congruence as a technique for proving language equivalence of nondeterministic finite automata. Exploiting this technique, we devise an optimization of the classic algorithm by Hopcroft and Karp.13 We compare our approach to the recently introduced antichain algorithms and we give concrete examples where we exponentially improve over antichains. Experimental results show significant improvements.
1. Introduction
Checking language equivalence of finite automata is a classic problem in computer science, with many applications in areas ranging from compilers to model checking.
Equivalence of deterministic finite automata (DFA) can be checked either via minimization12 or through Hopcroft and Karp’s algorithm,13 which exploits an instance of what is nowadays called a coinduction proof principle17, 20, 22: two states are equivalent if and only if there exists a bisimulation relating them. In order to check the equivalence of two given states, Hopcroft and Karp’s algorithm creates a relation containing them and tries to build a bisimulation by adding pairs of states to this relation: if it succeeds then the two states are equivalent, otherwise they are different.
On the one hand, minimization algorithms have the advantage of checking the equivalence of all the states at once, while Hopcroft and Karp's algorithm only checks a given pair of states. On the other hand, they have the disadvantage of needing the whole automaton from the beginning, while Hopcroft and Karp's algorithm can be executed "on-the-fly,"8 on a lazy DFA whose transitions are computed on demand.
This difference is essential for our work and for other recently introduced algorithms based on antichains.1, 7, 25 Indeed, when starting from nondeterministic finite automata (NFA), determinization induces an exponential factor. In contrast, the algorithm we introduce in this work for checking equivalence of NFA (as well as those using antichains) usually does not build the whole deterministic automaton, but just a small part of it. We write "usually" because in a few cases, the algorithm can still explore an exponential number of states.
Our algorithm is grounded on a simple observation on DFA obtained by determinizing an NFA: for all sets X and Y of states of the original NFA, the union (written +) of the language recognized by X (written [X]) and the language recognized by Y ([Y]) is equal to the language recognized by the union of X and Y ([X + Y]). In symbols:

[X + Y] = [X] + [Y]   (1)
This fact leads us to introduce a sound and complete proof technique for language equivalence, namely bisimulation up to context, that exploits both induction (on the operator +) and coinduction: if a bisimulation R relates the set of states X1 with Y1 and X2 with Y2, then [X1] = [Y1] and [X2] = [Y2] and, by Equation (1), we can immediately conclude that X1 + X2 and Y1 + Y2 are language equivalent as well. Intuitively, bisimulations up to context are bisimulations which do not need to relate X1 + X2 with Y1 + Y2 when X1 is already related with Y1 and X2 with Y2.
To illustrate this idea, let us check the equivalence of states x and u in the following NFA. (Final states are overlined, labeled edges represent transitions.)
The determinized automaton is depicted below.
Each state is a set of states of the NFA. Final states are overlined: they contain at least one final state of the NFA. The numbered lines show a relation which is a bisimulation containing x and u. Actually, this is the relation that is built by Hopcroft and Karp’s algorithm (the numbers express the order in which pairs are added).
The dashed lines (numbered by 1, 2, 3) form a smaller relation which is not a bisimulation, but a bisimulation up to context: the equivalence of {x, y} and {u, v, w} is deduced from the fact that {x} is related with {u} and {y} with {v, w}, without the need to further explore the automaton.
Bisimulations up-to, and in particular bisimulations up to context, have been introduced in the setting of concurrency theory17, 21 as a proof technique for bisimilarity of CCS or π-calculus processes. As far as we know, they have never been used for proving language equivalence of NFA.
Among these techniques one should also mention bisimulation up to equivalence, which, as we show in this paper, is implicitly used in Hopcroft and Karp’s original algorithm. This technique can be explained by noting that not all bisimulations are equivalence relations: it might be the case that a bisimulation relates X with Y and Y with Z, but not X with Z. However, since [X] = [Y] and [Y] = [Z], we can immediately conclude that X and Z recognize the same language. Analogously to bisimulations up to context, a bisimulation up to equivalence does not need to relate X with Z when they are both related with some Y.
The techniques of up-to equivalence and up-to context can be combined, resulting in a powerful proof technique which we call bisimulation up to congruence. Our algorithm is in fact just an extension of Hopcroft and Karp's algorithm that attempts to build a bisimulation up to congruence instead of a bisimulation up to equivalence. An important property when using up to congruence is that we do not need to build the whole deterministic automaton. For instance, in the above NFA, the algorithm stops after relating z with u + w and does not build the remaining states. Despite its use of up to equivalence, this is not the case with Hopcroft and Karp's algorithm, where all accessible subsets of the determinized automaton have to be visited at least once.
The ability of visiting only a small portion of the determinized automaton is also the key feature of the antichain algorithm25 and its optimization exploiting similarity.1, 7 The two algorithms are designed to check language inclusion rather than equivalence and, for this reason, they do not exploit equational reasoning. As a consequence, the antichain algorithm usually needs to explore more states than ours. Moreover, we show how to integrate the optimization proposed in Abdulla et al.1 and Doyen and Raskin7 in our setting, resulting in an even more efficient algorithm.
Section 2 recalls Hopcroft and Karp’s algorithm for DFA, showing that it implicitly exploits bisimulation up to equivalence. Section 3 describes the novel algorithm, based on bisimulations up to congruence. We compare this algorithm with the antichain one in Section 4.
2. Deterministic Automata
A deterministic finite automaton (DFA) over the alphabet A is a triple (S, o, t), where S is a finite set of states, o: S → 2 is the output function, which determines whether a state x ∈ S is final (o(x) = 1) or not (o(x) = 0), and t: S → S^A is the transition function, which returns, for each state x and each letter a ∈ A, the next state ta(x). Any DFA induces a function [·] mapping states to formal languages (subsets of A*, viewed as functions A* → 2), defined by [x](ε) = o(x) for the empty word and [x](aw) = [ta(x)](w) otherwise. For a state x, [x] is called the language accepted by x.
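For concreteness, the induced language map can be read as a word-acceptance function. The following OCaml sketch assumes integer states, character letters, and given functions o and t; these encodings are ours and not part of the formal development.

```ocaml
(* Word acceptance for a DFA: [x](ε) = o(x) and [x](aw) = [ta(x)](w).
   States are ints, letters are chars; o and t are assumed given. *)
let rec accepts ~(o : int -> bool) ~(t : char -> int -> int) (x : int) (w : char list) : bool =
  match w with
  | [] -> o x                            (* empty word: the output of x *)
  | a :: w' -> accepts ~o ~t (t a x) w'  (* consume a, continue from ta(x) *)
```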
Throughout this paper, we consider a fixed automaton (S, o, t) and study the following problem: given two states x1, x2 in S, is it the case that they are language equivalent, that is, [x1] = [x2]? This problem generalizes the familiar problem of checking whether two automata accept the same language: just take the union of the two automata as the automaton (S, o, t), and determine whether their respective starting states are language equivalent.
2.1. Language equivalence via coinduction
We first define bisimulation. We make explicit the underlying notion of progression, which we need in the sequel.
DEFINITION 1 (PROGRESSION, BISIMULATION). Given two relations R, R′ ⊆ S² on states, R progresses to R′, denoted R ↣ R′, if whenever x R y then
- o(x) = o(y) and
- for all a ∈ A, ta(x) R‘ ta(y).
A bisimulation is a relation R such that R ↣ R.
As expected, bisimulation is a sound and complete proof technique for checking language equivalence of DFA:
PROPOSITION 1 (COINDUCTION). Two states are language equivalent iff there exists a bisimulation that relates them.
2.2. Naive algorithm
Figure 1 shows a naive version of Hopcroft and Karp's algorithm for checking language equivalence of the states x and y of a deterministic finite automaton (S, o, t). Starting from x and y, the algorithm builds a relation R that, in case of success, is a bisimulation.
PROPOSITION 2. For all x, y ∈ S, x ~ y iff Naive(x, y).
PROOF. We first observe that if Naive(x, y) returns true, then the relation R that is built before arriving at step 4 is a bisimulation. Indeed, the following progression is an invariant for the loop corresponding to step 3: R ↣ R ∪ todo. Since todo is empty at step 4, we have R ↣ R, that is, R is a bisimulation. By Proposition 1, x ~ y. On the other hand, Naive(x, y) returns false as soon as it finds a word that is accepted by one state and not by the other.
For example, consider the DFA with input alphabet A = {a} in the left-hand side of Figure 2, and suppose we want to check that x and u are language equivalent.
During the initialization, (x, u) is inserted in todo. At the first iteration, since o(x) = 0 = o(u), (x, u) is inserted in R and (y, v) in todo. At the second iteration, since o(y) = 1 = o(v), (y, v) is inserted in R and (z, w) in todo. At the third iteration, since o(z) = 0 = o(w), (z, w) is inserted in R and (y, v) in todo. At the fourth iteration, since (y, v) is already in R, the algorithm does nothing. Since there are no more pairs to check in todo, the relation R is a bisimulation and the algorithm terminates returning true.
These iterations are concisely described by the numbered dashed lines in Figure 2. The line i means that the connected pair is inserted in R at iteration i. (In the sequel, when enumerating iterations, we ignore those where a pair from todo is already in R so that there is nothing to do.)
In the previous example, todo always contains at most one pair of states but, in general, it may contain several of them. We do not specify here how to choose the pair to extract in step 3.1; we discuss this point in Section 3.2.
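To make Figure 1 concrete, here is a minimal OCaml sketch of the naive loop, under the same hypothetical encoding as before (integer states, given functions o and t); it is an illustration rather than the reference implementation.

```ocaml
(* Naive(x, y): builds R while processing todo; returns true iff no
   distinguishing word is found. *)
let naive ~alphabet ~(o : int -> bool) ~(t : char -> int -> int) x y =
  let rec loop r todo =
    match todo with
    | [] -> true                                       (* step 4: R is a bisimulation *)
    | (x', y') :: rest when List.mem (x', y') r ->
        loop r rest                                    (* step 3.2: pair already in R *)
    | (x', y') :: rest ->
        if o x' <> o y' then false                     (* outputs differ: x and y are not equivalent *)
        else
          let succs = List.map (fun a -> (t a x', t a y')) alphabet in
          loop ((x', y') :: r) (rest @ succs)          (* extend R, push successors to todo *)
  in
  loop [] [ (x, y) ]
```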
2.3. Hopcroft and Karp’s algorithm
The naive algorithm is quadratic: a new pair is added to R at each nontrivial iteration, and there are only n² such pairs, where n = |S| is the number of states of the DFA. To make this algorithm (almost) linear, Hopcroft and Karp actually record a set of equivalence classes rather than a set of visited pairs. As a consequence, their algorithm may stop earlier, when it encounters a pair of states that is not already in R but belongs to its reflexive, symmetric, and transitive closure. For instance, in the right-hand side example from Figure 2, we can stop when we encounter the dotted pair (y, w), since these two states already belong to the same equivalence class according to the four previous pairs.
With this optimization, the produced relation R contains at most n pairs. Formally, ignoring the concrete data structure used to store equivalence classes, Hopcroft and Karp's algorithm consists in replacing step 3.2 in Figure 1 with

(x′, y′) ∈ e(R),

where e: P(S²) → P(S²) is the function mapping each relation R ⊆ S² into its symmetric, reflexive, and transitive closure. We refer to this algorithm as HK.
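Hopcroft and Karp implement this test with a union-find data structure; the sketch below illustrates the idea with a naive union-find over integer states (no union-by-rank or path compression, so it does not achieve the almost-linear bound).

```ocaml
(* Naive union-find: parent.(x) points toward the representative of x's class. *)
let find (parent : int array) x =
  let rec go x = if parent.(x) = x then x else go parent.(x) in
  go x

let union parent x y =
  let rx, ry = find parent x, find parent y in
  if rx <> ry then parent.(rx) <- ry

(* Step 3.2 of HK: skip (x, y) iff they are already in the same class,
   i.e., (x, y) ∈ e(R); when a pair is added to R, [union parent x y] is called. *)
let already_related parent x y = find parent x = find parent y
```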
We now show that the optimization used by Hopcroft and Karp corresponds to exploiting an “up-to technique.”
DEFINITION 2 (BISIMULATION UP-TO). Let f: P(S²) → P(S²) be a function on relations. A relation R is a bisimulation up to f if R ↣ f(R), that is, if x R y, then
- o(x) = o(y) and
- for all a ∈ A, ta(x) f(R) ta(y).
With this definition, Hopcroft and Karp's algorithm just consists in trying to build a bisimulation up to e. To prove the correctness of the algorithm, it suffices to show that any bisimulation up to e is contained in a bisimulation. To this end, we use the notion of compatible function19, 21:
DEFINITION 3 (COMPATIBLE FUNCTION). A function f: P(S²) → P(S²) is compatible if it is monotone and preserves progressions: for all R, R′ ⊆ S²,

if R ↣ R′ then f(R) ↣ f(R′).
PROPOSITION 3. Let f be a compatible function. Any bisimulation up to f is contained in a bisimulation.
We could prove directly that e is a compatible function; we, however, take a detour to ease our correctness proof for the algorithm we propose in Section 3.
LEMMA 1. The following functions are compatible:
- id: the identity function;
- f ∘ g: the composition of compatible functions f and g;
- ∪F: the pointwise union of an arbitrary family F of compatible functions;
- fω: the iteration of a compatible function f;
- r: the constant reflexive function, r(R) = {(x, x) | x ∈ S};
- s: the converse function, s(R) = {(y, x) | x R y};
- t: the squaring function, t(R) = {(x, z) | x R y and y R z for some y}.
Intuitively, given a relation R, (s ∪ id)(R) is its symmetric closure, (r ∪ s ∪ id)(R) is its reflexive and symmetric closure, and (r ∪ s ∪ t ∪ id)ω(R) is its symmetric, reflexive, and transitive closure: e = (r ∪ s ∪ t ∪ id)ω. Another way to understand this decomposition of e is to recall that e(R) can be defined inductively by the following rules:
- (id) if x R y then x e(R) y;
- (r) x e(R) x;
- (s) if x e(R) y then y e(R) x;
- (t) if x e(R) y and y e(R) z then x e(R) z.
THEOREM 1. Any bisimulation up to e is contained in a bisimulation.
COROLLARY 1. For all x, y ∈ S, x ~ y iff HK(x, y).
PROOF. Same proof as for Proposition 2, using the invariant R ↣ e(R) ∪ todo. We deduce that R is a bisimulation up to e after the loop. We conclude with Theorem 1 and Proposition 1.
Returning to the right-hand side example from Figure 2, Hopcroft and Karp's algorithm constructs a relation RHK consisting of the four pairs mentioned above. RHK is not a bisimulation, but a bisimulation up to e: it contains the pair (x, u), whose b-transitions lead to (y, w), which is not in RHK but in its equivalence closure e(RHK).
3. Nondeterministic Automata
We now move from DFA to nondeterministic automata (NFA). An NFA over the alphabet A is a triple (S, o, t), where S is a finite set of states, o: S → 2 is the output function, and t: S → P(S)^A is the transition relation: it assigns to each state x ∈ S and letter a ∈ A a set of possible successors.
The powerset construction transforms any NFA (S, o, t) into the DFA (P(S), o#, t#), where o#: P(S) → 2 and t#: P(S) → P(S)^A are defined for all X = {x1, …, xn} ∈ P(S) and a ∈ A as follows:

o#(X) = o(x1) + … + o(xn)    ta#(X) = ta(x1) + … + ta(xn)

(Here we use the symbol "+" to denote both set-theoretic union and Boolean "or"; similarly, we use 0 to denote both the empty set and the Boolean "false.") Observe that in (P(S), o#, t#), the states form a semilattice (P(S), +, 0), and o# and t# are, by definition, semilattice homomorphisms. These properties are fundamental for the up-to technique we are going to introduce. In order to stress the difference with generic DFA, which usually do not carry this structure, we use the following definition.
DEFINITION 4. A determinized NFA is a DFA (P(S), o#, t#) obtained via the powerset construction of some NFA (S, o, t).
Hereafter, we use a new notation for representing states of determinized NFA: in place of the singleton {x}, we just write x and, in place of {x1,…, xn}, we write x1 + … + xn. Consider for instance the NFA (S, o, t) depicted below (left) and part of the determinized NFA (P(S), o#, t#) (right).
In the determinized NFA, x makes one single a-transition, into y + z. This state is final: o#(y + z) = o(y) + o(z) = 1 + 0 = 1; and it makes an a-transition into ta#(y + z) = ta(y) + ta(z) = x + y.
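These homomorphic definitions translate directly into code. Here is a possible OCaml sketch, assuming the NFA is given by hypothetical functions o and t, where t a x returns the set of a-successors of x.

```ocaml
(* o# and t#: the homomorphic extensions of o and t to sets of states. *)
module S = Set.Make (Int)

let o_sharp (o : int -> bool) (xs : S.t) : bool =
  S.exists o xs                        (* o#(X) = o(x1) + ... + o(xn) *)

let t_sharp (t : char -> int -> S.t) (a : char) (xs : S.t) : S.t =
  S.fold (fun x acc -> S.union (t a x) acc) xs S.empty
                                       (* ta#(X) = ta(x1) + ... + ta(xn) *)
```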
Algorithms for NFA can be obtained by computing the determinized NFA on-the-fly8: starting from the algorithms for DFA (Figure 1), it suffices to work with sets of states and to inline the powerset construction. The corresponding code is given in Figure 3. The naive algorithm (Naive) does not use any up-to technique; Hopcroft and Karp's algorithm (HK) reasons up to equivalence in step 3.2.
3.1. Bisimulation up to congruence
The semilattice structure (P(S), +, 0) carried by determinized NFA makes it possible to introduce a new up-to technique, which is not available with plain DFA: up to congruence. This technique relies on the following function.
DEFINITION 5 (CONGRUENCE CLOSURE). Let u: P(P(S)²) → P(P(S)²) be the function on relations on sets of states defined for all R ⊆ P(S)² as:

u(R) = {(X1 + X2, Y1 + Y2) | X1 R Y1 and X2 R Y2}.

The function c = (r ∪ s ∪ t ∪ u ∪ id)ω is called the congruence closure function.
Intuitively, c(R) is the smallest equivalence relation which is closed with respect to + and which includes R. It could alternatively be defined inductively using the rules r, s, t, and id from the previous section, together with the following one:
- (u) if X c(R) Y and X′ c(R) Y′ then X + X′ c(R) Y + Y′.
DEFINITION 6 (BISIMULATION UP TO CONGRUENCE). A bisimulation up to congruence for an NFA (S, o, t) is a relation R ⊆ P(S)2, such that whenever X R Y then
- o#(X) = o#(Y) and
- for all a ∈ A, ta#(X) c(R) ta#(Y).
LEMMA 2. The function u is compatible.
THEOREM 2. Any bisimulation up to congruence is contained in a bisimulation.
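As an illustration, one can check that a given finite relation is a bisimulation up to congruence by testing each pair directly against Definition 6. The sketch below reuses o_sharp and t_sharp from the previous sketch and assumes a decision procedure congruent for membership in c(R); a rewriting-based implementation of such a procedure is described in Section 3.3.

```ocaml
(* Is [rel] (a list of pairs of state sets) a bisimulation up to congruence?
   [congruent rel (x, y)] is assumed to decide (X, Y) ∈ c(rel). *)
let is_bisimulation_up_to_c ~alphabet ~o_sharp ~t_sharp ~congruent rel =
  List.for_all
    (fun (x, y) ->
      o_sharp x = o_sharp y                       (* same outputs *)
      && List.for_all
           (fun a -> congruent rel (t_sharp a x, t_sharp a y))
           alphabet)                               (* successors related up to c *)
    rel
```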
We already gave in the Introduction section an example of bisimulation up to context, which is a particular case of bisimulation up to congruence (up to context means up to (r ∪ u ∪ id)ω, without closing under s and t).
Figure 4 shows a more involved example illustrating the use of all the ingredients of the congruence closure function c. The relation R expressed by the dashed numbered lines (formally, R = {(x, u), (y + z, u)}) is neither a bisimulation nor a bisimulation up to e: the a-transitions of y + z and u lead to x + y and u, respectively, but (x + y, u) ∉ e(R). However, R is a bisimulation up to congruence. Indeed, we have (x + y, u) ∈ c(R): x + y c(R) u + y (replacing x by u, since x R u), u + y c(R) (y + z) + y = y + z (replacing u by y + z, since y + z R u), and y + z c(R) u (since y + z R u); we conclude by transitivity.
In contrast, we need four pairs to get a bisimulation up to equivalence containing (x, u): this is the relation depicted with both dashed and dotted lines in Figure 4.
Note that we can deduce many other equations from R; in fact, c(R) partitions the sets of states into {0}, {y}, {z}, and a single class containing the 13 remaining subsets (among them x, u, x + y, and x + z).
3.2. Optimized algorithm for NFA
The optimized algorithm, called HKC in the sequel, relies on up to congruence: step 3.2 from Figure 3 becomes

(X′, Y′) ∈ c(R ∪ todo).
Observe that we use c(R ∪ todo) rather than c(R): this allows us to skip more pairs, and this is safe since all pairs in todo will eventually be processed.
COROLLARY 2. For all X, Y ∈ P(S), X ~ Y iff HKC(X, Y).
PROOF. Same proof as for Proposition 2, using the invariant R ↣ c(R ∪ todo) for the loop. We deduce that R is a bisimulation up to congruence after the loop. We conclude with Theorem 2 and Proposition 1.
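Concretely, HKC is the naive loop on sets of states, with the step 3.2 test replaced by the congruence check. A sketch under the same assumptions as the previous ones, with the congruent test kept abstract until Section 3.3:

```ocaml
(* HKC: a pair is skipped as soon as it belongs to c(R ∪ todo).
   [congruent] is assumed to decide that test (see Section 3.3). *)
let hkc ~alphabet ~o_sharp ~t_sharp ~congruent x y =
  let rec loop r todo =
    match todo with
    | [] -> true                                          (* R is a bisimulation up to c *)
    | (x', y') :: rest ->
        if congruent (r @ rest) (x', y') then loop r rest (* step 3.2: skip the pair *)
        else if o_sharp x' <> o_sharp y' then false       (* counterexample found *)
        else
          let succs = List.map (fun a -> (t_sharp a x', t_sharp a y')) alphabet in
          loop ((x', y') :: r) (rest @ succs)             (* FIFO todo: breadth-first *)
  in
  loop [] [ (x, y) ]
```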
The most important point about these three algorithms is that they compute the states of the determinized NFA lazily. This means that only accessible states need to be computed, which is of practical importance since the determinized NFA can be exponentially large. In case of a negative answer, the three algorithms stop even before all accessible states have been explored; otherwise, if a bisimulation (possibly up-to) is found, how much of the determinized NFA is explored depends on the algorithm:
- With Naive, all accessible states need to be visited, by definition of bisimulation.
- With HK, the only case where some accessible states can be avoided is when a pair (X, X) is encountered: the algorithm skips this pair, so that the successors of X are not necessarily computed (this situation never happens when starting with disjoint automata). In the other cases where a pair (X, Y) is skipped, X and Y are necessarily already related with some other states in R, so that their successors will eventually be explored.
- With HKC, accessible states are often skipped. For a simple example, let us execute HKC on the NFA from Figure 4. After two iterations, R = {(x, u), (y + z, u)}. Since x + y c(R) u, the algorithm stops without building the states x + y and x + y + z. Similarly, in the example from the Introduction, HKC does not construct the four states corresponding to pairs 4, 5, and 6.
This ability of HKC to ignore parts of the determinized NFA can bring an exponential speedup. For an example, consider the family of NFA in Figure 5, where n is an arbitrary natural number. Taken together, the states x and y are equivalent to z: they recognize the language (a + b)*(a + b)^(n+1). Alone, x recognizes the language (a + b)*a(a + b)^n, which is known for having a minimal DFA with 2^n states.
Therefore, checking x + y ~ z via minimization (as in Hopcroft12) requires exponential time, and the same holds for Naive and HK, since all accessible states must be visited. This is not the case with HKC, which requires only polynomial time in this example. Indeed, HKC(x + y, z) builds a relation R′ of only 2n + 1 pairs, built over the sets Yi = y + y1 + … + yi and Zi = z + z1 + … + zi; R′ is a bisimulation up to congruence. To see this, consider the pair (x + y + x1 + y2, Z2) obtained from (x + y, z) after reading the word ba. Although this pair does not belong to R′, it belongs to its congruence closure c(R′).
REMARK 1. The use of transitivity is crucial here: R′ is a bisimulation up to congruence, but not a bisimulation up to context. In fact, there exists no bisimulation up to context of linear size proving x + y ~ z.
We now discuss the exploration strategy, that is, how to choose the pair to extract from the set todo in step 3.1. When looking for a counterexample, such a strategy has a large influence: a good heuristic can help in reaching it directly, while a bad one might lead to exploring exponentially many pairs first. In contrast, the strategy does not matter much when looking for an equivalence proof (when the algorithm eventually returns true). Actually, one can prove that the number of steps performed by Naive and HK in such a case does not depend on the strategy. This is not the case with HKC: the strategy can induce some differences. However, we experimentally observed that breadth-first and depth-first strategies usually behave similarly on random automata. This behavior is due to the fact that we check congruence w.r.t. R ∪ todo rather than just R (step 3.2): with this optimization, the example above is handled in polynomial time whatever the chosen strategy. In contrast, without this small optimization, it requires exponential time with a depth-first strategy.
3.3. Computing the congruence closure
For the optimized algorithm to be effective, we need a way to check whether some pairs belong to the congruence closure of a given relation (step 3.2). We present a simple solution based on set rewriting; the key idea is to look at each pair (X, Y) in a relation R as a pair of rewriting rules,

X ⇝ X + Y    Y ⇝ X + Y,

which can be used to compute normal forms for sets of states. Indeed, by idempotence, X R Y entails X c(R) X + Y.
DEFINITION 7. Let R ⊆ P(S)² be a relation on sets of states. We define ⇝R ⊆ P(S)² as the smallest irreflexive relation that satisfies the following rules:
- if X R Y then X ⇝R X + Y and Y ⇝R X + Y (whenever the right-hand side differs from the left-hand side);
- if X ⇝R X′ then X + Z ⇝R X′ + Z for all Z (again, whenever the two sides differ).
LEMMA 3. For all relations R, ⇝R is confluent and normalizing.
In the sequel, we denote by X↓R the normal form of a set X w.r.t. ⇝R. Intuitively, the normal form of a set is the largest set of its equivalence class. Recalling the example from Figure 4, where R is the relation {(x, u), (y + z, u)}, the common normal form of x + y and u can be computed as follows:

x + y ⇝R x + y + u ⇝R x + y + z + u    u ⇝R y + z + u ⇝R x + y + z + u
THEOREM 3. For all relations R, and for all X, Y ∈ P(S), we have X↓R = Y↓R iff (X, Y) ∈ c(R).
We actually have X↓R = Y↓R iff X ⊆ Y↓R and Y ⊆ X↓R, so that the normal forms of X and Y do not necessarily need to be fully computed in practice. Still, the worst-case complexity of this subalgorithm is quadratic in the size of the relation R (assuming we count the number of operations on sets: unions and inclusion tests).
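This rewriting procedure is easy to implement. The sketch below (using the integer-set module S from the earlier sketches; the function names are ours) saturates a set with the rules derived from R and then applies the mutual-inclusion characterization.

```ocaml
(* Normal form of a set z w.r.t. ⇝R: as long as some pair (X, Y) ∈ R has one
   side included in z but not the other, add the missing side. *)
let rec normalize (rel : (S.t * S.t) list) (z : S.t) : S.t =
  let step z (x, y) =
    let z = if S.subset x z && not (S.subset y z) then S.union y z else z in
    if S.subset y z && not (S.subset x z) then S.union x z else z
  in
  let z' = List.fold_left step z rel in
  if S.equal z z' then z else normalize rel z'

(* (X, Y) ∈ c(R) iff X ⊆ Y↓R and Y ⊆ X↓R; this sketch computes full normal
   forms for simplicity, although they are not always needed. *)
let congruent rel (x, y) =
  S.subset x (normalize rel y) && S.subset y (normalize rel x)
```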
Note that many algorithms were proposed in the literature to compute the congruence closure of a relation (see, e.g., Nelson and Oppen,18 Shostak,23 and Bachmair et al.2). However, they usually consider uninterpreted symbols or associative and commutative symbols, but not associative, commutative, and idempotent symbols, which is what we need here.
3.4. Using HKC for checking language inclusion
For NFA, language inclusion can be reduced to language equivalence: the semantics function [−] is a semilattice homomorphism, so that for all sets of states X, Y, [X + Y] = [Y] iff [X] + [Y] = [Y] iff [X] ⊆ [Y]. Therefore, it suffices to run HKC(X + Y, Y) to check the inclusion [X] ⊆ [Y].
In such a situation, all pairs that are eventually manipulated by HKC have the shape (X′ + Y′, Y′) for some sets X′, Y′. Step 3.2 of HKC can thus be simplified. First, the pairs in the current relation only have to be used to rewrite from right to left. Second, the following lemma shows that we do not necessarily need to compute normal forms:
LEMMA 4. For all sets X, Y and for all relations R, we have X + Y c(R) Y iff X ⊆ Y↓R.
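With these observations, inclusion checking becomes a thin wrapper around the previous sketches: run hkc on (X + Y, Y) and, following Lemma 4, weaken the step 3.2 test to a single inclusion into a normal form.

```ocaml
(* [X] ⊆ [Y] iff HKC(X + Y, Y); by Lemma 4, the step-3.2 test for a pair
   (X', Y') with Y' ⊆ X' reduces to X' ⊆ Y'↓R. *)
let hkc_incl ~alphabet ~o_sharp ~t_sharp x y =
  let congruent rel (x', y') = S.subset x' (normalize rel y') in
  hkc ~alphabet ~o_sharp ~t_sharp ~congruent (S.union x y) y
```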
At this point, the reader might wonder whether checking the two inclusions separately is more convenient than checking the equivalence directly. This is not the case: checking the equivalence directly actually allows one to skip some pairs that cannot be skipped when reasoning by double inclusion. As an example, consider the DFA on the right of Figure 2. The relation computed by HKC(x, u) contains only four pairs (because the fifth one follows from transitivity). Instead, the relations built by HKC(x, x + u) and HKC(u + x, u) would both contain five pairs: transitivity cannot be used, since our relations are now oriented (from y ≤ v, z ≤ v, and z ≤ w, we cannot deduce y ≤ w). Figure 5 shows another example, where we get an exponential factor by checking the equivalence directly rather than through the two inclusions: transitivity, which is crucial to keep the relation computed by HKC(x + y, z) small (see Remark 1), cannot be used when checking the two inclusions separately.
In a sense, the behavior of the coinduction proof method here is similar to that of standard proofs by induction, where one often has to strengthen the induction predicate to get a (nicer) proof.
Looking at the example in Figure 5, a natural idea would be to first quotient the automaton by graph isomorphism. By doing so, one would merge the states xi, yi, zi, and one would obtain the following automaton, for which checking x + y ~ z is much easier.
As shown in Abdulla et al.1 and Doyen and Raskin7 for antichain algorithms, one can do better, by exploiting any preorder contained in language inclusion. Hereafter, we show how this idea can be embedded in HKC, resulting in an even stronger algorithm.
For the sake of clarity, we fix the preorder to be similarity,17 which can be computed in quadratic time.10
DEFINITION 8 (SIMILARITY). Similarity is the largest relation ⪯ ⊆ S² such that x ⪯ y entails:
- o(x) ≤ o(y) and
- for all a ∈ A and x′ ∈ ta(x), there exists some y′ ∈ ta(y) such that x′ ⪯ y′.
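Similarity can be computed by a straightforward greatest-fixpoint iteration; the sketch below is not the quadratic algorithm cited above but a simpler, less efficient computation, assuming NFA states 0..n−1 and the usual hypothetical o and t.

```ocaml
(* Greatest fixpoint: start from the full relation and remove pairs that
   violate Definition 8 until nothing changes; sim.(x).(y) means x ⪯ y. *)
let similarity ~n ~alphabet ~(o : int -> bool) ~(t : char -> int -> S.t) =
  let sim = Array.make_matrix n n true in
  for x = 0 to n - 1 do
    for y = 0 to n - 1 do
      if o x && not (o y) then sim.(x).(y) <- false   (* enforce o(x) ≤ o(y) *)
    done
  done;
  let changed = ref true in
  while !changed do
    changed := false;
    for x = 0 to n - 1 do
      for y = 0 to n - 1 do
        if sim.(x).(y)
           && List.exists
                (fun a ->
                  S.exists   (* some x' ∈ ta(x) has no matching y' ∈ ta(y) *)
                    (fun x' -> not (S.exists (fun y' -> sim.(x').(y')) (t a y)))
                    (t a x))
                alphabet
        then begin sim.(x).(y) <- false; changed := true end
      done
    done
  done;
  sim
```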
To exploit similarity pairs in HKC, it suffices to notice that for any similarity pair x ⪯ y, we have x + y ~ y. Let R⪯ denote the relation {(x + y, y) | x ⪯ y}, let r′ denote the constant function mapping any relation to R⪯, and let c′ = (r′ ∪ s ∪ t ∪ u ∪ id)ω. Accordingly, we call HKC′ the algorithm obtained from HKC (Figure 3) by replacing (X, Y) ∈ c(R ∪ todo) with (X, Y) ∈ c′(R ∪ todo) in step 3.2. The latter test can be reduced to rewriting thanks to Theorem 3 and the following lemma.
LEMMA 5. For all relations R, c′(R) = c(R ∪ R⪯).
THEOREM 4. Any bisimulation up to c’ is contained in a bisimulation.
COROLLARY 3. For all sets X, Y, X ~ Y iff HKC′(X, Y).
4. Antichain Algorithms
Even though the problem of deciding NFA equivalence is PSPACE-complete,16 neither HKC nor HKC′ is in PSPACE: both of them keep track of the states they explored in the determinized NFA, and there can be exponentially many such states. This also holds for HK and for the more recent antichain algorithm25 (called AC in the following) and its optimization exploiting similarity1, 7 (AC′).
The latter algorithms can be explained in terms of coinductive proof techniques: we establish in Bonchi and Pous4 that they actually construct bisimulations up to context, that is, bisimulations up to congruence for which one does not exploit symmetry and transitivity.
Theoretical comparison. We compared the various algorithms in detail in Bonchi and Pous.4 Their relationship is summarized in Figure 6, where an arrow X → Y means that (a) Y can explore exponentially fewer states than X, and (b) Y can mimic X, that is, the coinductive proof technique underlying Y is at least as powerful as the one of X.
In the general case, AC needs to explore many more states than HKC: the use of transitivity, which is missing in AC, allows HKC to drastically prune the exploration. For instance, to check x + y ~ z in Figure 5, HKC only needs a linear number of states (see Remark 1), while AC needs exponentially many. In contrast, in the special case where one checks the inclusion of disjoint automata, HKC and AC exhibit the same behavior: HKC cannot make use of transitivity in such a situation, as explained in Section 3.4. Things change when comparing HKC′ and AC′: even for checking inclusion of disjoint automata, AC′ cannot always mimic HKC′. Indeed, the use of similarity tends to virtually merge states, so that HKC′ can use the up-to-transitivity technique which AC′ lacks.
Experimental comparison. The theoretical relationships drawn in Figure 6 are substantially confirmed by an empirical evaluation of the performance of the algorithms. Here, we only give a brief overview; see Bonchi and Pous4 for a complete description of those experiments.
We compared our OCaml implementation4 of HK, HKC, and HKC′ with the libvata C++ library14 implementing AC and AC′. We used a breadth-first exploration strategy, representing the set todo from Figure 3 as a FIFO queue. As mentioned at the end of Section 3.2, a depth-first strategy would not alter the behavior of HKC in a noticeable way.
We performed experiments using both random automata and a set of automata arising from model-checking problems.
- Random automata. We used Tabakov and Vardi's model24 to generate 1000 random NFA with two letters and a given number of states. We executed all algorithms on these NFA and measured the number of processed pairs, that is, the number of required iterations (like HKC, AC is a loop inside which pairs are processed). We observe that HKC improves over AC by one order of magnitude, and AC improves over HK by two orders of magnitude. Using up-to similarity (HKC′ and AC′) does not help much here; in fact, similarity is almost the identity relation on such random automata. The corresponding distributions for HK, HKC, and AC are plotted in Figure 7, for automata with 100 states. Note that while HKC only improves over AC by one order of magnitude in the average case, it improves by several orders of magnitude in the worst cases.
- Model-checking automata. Abdulla et al.1, 7 used automata sequences arising from regular model-checking experiments5 to compare their algorithm (AC′) against AC. We reused these sequences to test HKC′ against AC′ in a concrete scenario, checking the inclusions of all consecutive pairs in both directions. The timings are given in Table 1, where we report the median values (50%), the last deciles (90%), the last percentiles (99%), and the maximum values (100%). We distinguish between the experiments for which a counterexample was found and those for which the inclusion did hold. For HKC′ and AC′, we display the time required to compute similarity on a separate line: this preliminary step is shared by the two algorithms. As expected, HKC and AC roughly behave the same, since we test inclusions of disjoint automata. HKC′ is, however, considerably faster than AC′: up-to transitivity can be exploited, thanks to the similarity pairs. Also note that of the 546 positive answers, 368 are obtained immediately by similarity.
5. Conclusion
Our implementation of HKC is available online,4 together with proofs mechanized in the Coq proof assistant and an interactive applet making it possible to test the presented algorithms online, on user-provided examples.
Several notions analogous to bisimulations up to congruence can be found in the literature. For instance, self-bisimulations6, 11 have been used to obtain decidability and complexity results about context-free processes. The main difference with bisimulation up to congruence is that self-bisimulations are proof techniques for bisimilarity rather than language equivalence. Approaches that are independent of the considered equivalence (be it bisimilarity or language equivalence) appear in Lenisa,15 Bartels,3 and Pous.19 These papers propose very general frameworks into which our up-to-congruence technique fits as a very special case. However, to our knowledge, bisimulation up to congruence has never been proposed before as a technique for proving language equivalence of NFA.
We conclude with directions for future work.
Complexity. The presented algorithms, as well as those based on antichains, have exponential worst-case complexity, while they behave rather well in practice. For instance, in Figure 7, one can notice that out of a thousand random automata, very few require exploring a large number of pairs. This suggests that an accurate analysis of the average-case complexity might be promising. An inherent problem comes from the difficulty of characterizing the average shape of determinized NFA.24 To avoid this problem, with HKC, we could try to focus on the properties of congruence relations. For instance, given a number of states, how long can a sequence of (incrementally independent) pairs of sets of states be before its congruence closure collapses into the full relation? (This number is an upper bound on the size of the relations produced by HKC.) One can find ad hoc examples where this number is exponential, but we suspect it to be rather small on average.
Model checking. The experiments summarized in Table 1 show the efficiency of our approach for regular model checking using automata on finite words. As in the case of antichains, our approach extends to automata on finite trees. We plan to implement such a generalization and link it with tools performing regular tree model-checking.
In order to address other model-checking problems, it would be useful to extend up-to techniques to automata on infinite words or trees. Unfortunately, the determinization of these automata (the so-called Safra construction) does not seem suitable for exploiting either antichains or up-to congruence. However, for some problems like LTL realizability,9 which can be solved without prior determinization (the so-called Safraless approaches), antichains have been crucial in obtaining efficient procedures. We leave it as future work to explore whether up-to techniques could further improve such procedures.
Acknowledgments
This work was partially funded by the PiCoq (ANR-10-BLAN-0305) and PACE (ANR-12IS02001) projects.
Figures
Figure 1. Naive algorithm for checking the equivalence of states x and y of a DFA (S, o, t). The code of HK(x, y) is obtained by replacing the test in step 3.2 with (x′, y′) ∈ e(R).
Figure 2. Checking for DFA equivalence.
Figure 3. On-the-fly naive algorithm for checking the equivalence of sets of states X and Y of an NFA (S, o, t). HK(X, Y) is obtained by replacing the test in step 3.2 with (X′, Y′) ∈ e(R), and HKC(X, Y) is obtained by replacing it with (X′, Y′) ∈ c(R ∪ todo).
Figure 4. A bisimulation up to congruence.
Figure 5. Family of examples where HKC exponentially improves over AC and HK; we have x + y ~ z.
Figure 6. Relationships among the algorithms.
Figure 7. Distributions of the number of processed pairs, for 1000 experiments with random NFA.