Communications of the ACM, October 2014 (Vol. 57, No. 10)

Research highlights
## Dissection: A New Paradigm For Solving Bicomposite Search Problems

Combinatorial search problems are usually described by a collection of possible states, a list of possible actions which map each current state into some next state, and a pair of initial and final states. The algorithmic problem is to find a sequence of actions which maps the given initial state into the desired final state. In this paper, we introduce the new notion of *bicomposite search problems*, and show that they can be solved with improved combinations of time and space complexities by using a new algorithmic paradigm called *dissection*. To demonstrate the broad applicability of our new paradigm, we show how to use it in order to untangle Rubik's cube and to solve a typical NP-complete partition problem with algorithms which are better than any previously described algorithm for these problems.

A central problem in the design of efficient algorithms is how to solve search problems, in which we are given a pair of states and a collection of possible actions, and we are asked to find how to get from the first state to the second state by performing some sequence of actions. In some cases, we only want to decide whether such a sequence exists at all, while in other cases it is clear that such sequences exist but we are asked to find the shortest possible sequence.

Many search problems of this type have associated decision problems which are NP-complete, and thus we do not expect to find any polynomial time algorithms which can solve all their instances. However, what we hope to find are new exponential time algorithms whose exponents are smaller than in the best previously known algorithms. For example, the problem of breaking a cryptographic scheme whose key has *n* = 100 unknown bits cannot be solved in a practical amount of time via an exhaustive key search algorithm, since its time complexity of 2^{n} would be beyond reach even for the largest currently available data center. However, if we manage to find a better cryptanalytic attack whose running time is 2^{n/2}, we can break the scheme with a modest effort in spite of the exponential nature of this complexity function.

One trick which is often helpful in such situations is to find a tradeoff between the time and space complexities of the attack: exhaustive search requires a lot of time but a negligible amount of memory, so a tradeoff which uses more memory (in the form of large tables of precomputed values) in order to reduce the time (by skipping many computational steps) can be very beneficial. For reasons which are explained in the extended version of this paper (available in Dinur et al.^{2}), we usually consider the product of the amount of time and the amount of space required by the algorithm as the appropriate complexity measure that we try to minimize. In the example above, breaking the cryptosystem with *T* = 2^{n} time and *S* = 1 space is infeasible, breaking it with *T* = 2^{2n/3} time and *S* = 2^{n/3} space (whose product *TS* = 2^{n} is the same as before) is better but still barely feasible, and breaking it in *T* = 2^{n/2} time and *S* = 2^{n/4} space (whose product *TS* = 2^{3n/4} has a smaller exponent) is completely feasible.
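To make the table-based tradeoff concrete, here is a minimal Python sketch of the classic meet-in-the-middle attack on double encryption, the standard baseline that dissection improves upon: it recovers an *n*-bit key pair in roughly *T* = 2 · 2^{n/2} time using *S* = 2^{n/2} memory (product *TS* ≈ 2^{n}), instead of *T* = 2^{n} for exhaustive search. The 8-bit toy cipher, the key values, and all function names below are inventions for this illustration, not constructions from the paper.

```python
def toy_encrypt(key, block):
    # A trivial invertible 8-bit round: XOR with the key, then rotate left by 3.
    x = (block ^ key) & 0xFF
    return ((x << 3) | (x >> 5)) & 0xFF

def toy_decrypt(key, block):
    # Inverse round: rotate right by 3, then XOR with the key.
    x = ((block >> 3) | (block << 5)) & 0xFF
    return (x ^ key) & 0xFF

def double_encrypt(k1, k2, block):
    # Double encryption with a 16-bit key (k1, k2): the target of the attack.
    return toy_encrypt(k2, toy_encrypt(k1, block))

def mitm_attack(pairs):
    (p1, c1), (p2, c2) = pairs
    # Encryption side: precompute a table of all 2^{n/2} middle values
    # (this is the space cost, S = 2^{n/2}).
    table = {toy_encrypt(k1, p1): k1 for k1 in range(256)}
    # Decryption side: meet in the middle (time T ~ 2 * 2^{n/2}, not 2^n),
    # filtering false positives with a second plaintext/ciphertext pair.
    survivors = []
    for k2 in range(256):
        mid = toy_decrypt(k2, c1)
        if mid in table:
            k1 = table[mid]
            if double_encrypt(k1, k2, p2) == c2:
                survivors.append((k1, k2))
    return survivors

secret = (0x3A, 0xC5)
pairs = [(p, double_encrypt(*secret, p)) for p in (0x42, 0x13)]
print(secret in mitm_attack(pairs))  # True: the real key pair survives
```

Note that the second plaintext/ciphertext pair is needed to weed out false matches; in a real attack a few extra pairs reduce the number of surviving wrong keys to nearly zero, and the dissection paradigm of this paper achieves a strictly smaller *TS* product than this baseline.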