The theory of modern computing predates by a few years the modern computer itself. In 1936, while studying for his Ph.D. at Princeton University, the British mathematician Alan M. Turing devised an idealized machine that, by executing a series of instructions and storing data, could perform any imaginable computational algorithm. Further work by Turing and others pointed to an intimate connection between computability and calculability in a more conventional sense: If the solution to a problem can be written in the form of a finite mathematical recipe, then it is computable on a Turing machine.
But among computable problems, some (as George Orwell might have put it) are more computable than others. A Turing machine can avail itself of a limitless amount of memory and take as long as it needs to produce an answer. Real computers, on the other hand, have finite storage, and theoretical computability isn’t of much practical comfort for an algorithm that takes a long time—longer than the age of the universe, say—to produce an answer. Encryption methods that rely on the difficulty of finding the prime factors of a "key"—a very large number—depend on precisely that point. One guaranteed factorization algorithm is the brute-force strategy of testing smaller numbers in sequence until you find an exact divisor, but the size of that task increases exponentially with the number of digits in the key. For large keys, factorization is theoretically feasible but impossible in practice.
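To make that scaling concrete, here is a minimal sketch of the brute-force strategy in Python; the function name and the example number are our own, used purely for illustration.

```python
# Trial division: the guaranteed but brute-force factoring strategy.
# Only divisors up to sqrt(n) need testing, but even so the number of
# candidates grows roughly tenfold for every two extra digits in n,
# which is why the method is hopeless for real cryptographic keys.

def trial_division(n: int) -> int:
    """Return the smallest nontrivial factor of n, or n if n is prime."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

print(trial_division(8051))   # 83, since 8051 = 83 * 97
```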
Complexity theory arose in the 1960s as a way to classify the hardness of problems. Easy problems belong to complexity class P, for polynomial time, meaning that algorithms exist to produce an answer in a time that rises no faster than some power of the size of the input. Many everyday computational tasks, such as list sorting and optimization by linear programming, belong to P. A wide variety of more demanding problems do not seem to fall into P, however, but belong to a class labeled NP. The distinguishing characteristic of these problems is that the validity of a proposed solution can be checked in polynomial time, even though no polynomial-time algorithm is known for generating that solution in the first place. Factorization falls into this class; if you’re presented with two numbers purporting to be the factors of a larger number, it’s easy to check the truth of the assertion.
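The asymmetry is easy to see in code. Checking a claimed factorization takes a single multiplication, polynomial in the number of digits, no matter how long the factors took to discover. A small illustrative sketch:

```python
# Verifying a proposed factorization is cheap even when finding one
# is not: one big-integer multiplication plus two range checks.

def verify_factorization(n: int, p: int, q: int) -> bool:
    return 1 < p < n and 1 < q < n and p * q == n

print(verify_factorization(8051, 83, 97))   # True
print(verify_factorization(8051, 83, 98))   # False
```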
An alternative definition of NP problems is that they can be solved in polynomial time on what’s called a nondeterministic Turing machine. This hypothetical machine branches to different computational paths at every step, giving rise to an exponentially growing tree of computations. If one path through that tree arrives at the desired result, the machine has solved the problem. For example, a nondeterministic Turing machine could perform factorization in polynomial time by branching so as to test all of the exponentially many possible divisors at once.
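A deterministic computer can only simulate that branching by walking the tree one path at a time, which is where the exponential cost reappears. A toy simulation (our own construction, purely illustrative):

```python
# Simulating the nondeterministic "guess a divisor" machine. At each
# level the machine branches on one bit of a candidate divisor; the
# nondeterministic machine follows all 2**bits paths simultaneously,
# but this deterministic simulation must try them one by one.

def branch(n: int, bits: int, guess: int = 0, depth: int = 0):
    if depth == bits:                       # one complete path
        if 1 < guess < n and n % guess == 0:
            return guess                    # this path accepts
        return None
    for bit in (0, 1):                      # the branch point
        found = branch(n, bits, (guess << 1) | bit, depth + 1)
        if found is not None:
            return found
    return None

print(branch(8051, 14))   # 83 (14 bits suffice, since 2**14 > 8051)
```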
Even within NP, though, there are degrees of difficulty. The most recalcitrant form a subclass named NP-complete, the salient property of which is that if you could find an efficient (that is, polynomial time) solution to any NP-complete problem, you would in effect have found an efficient solution to all NP problems. That’s because there are efficient algorithms that can turn any NP problem into a particular instance of an NP-complete problem. A general solution to an NP-complete problem therefore automatically yields solutions to every NP problem that maps onto it, whereas a solution to one particular NP problem covers only the corresponding instances of the NP-complete problem.
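Here is what such a translation looks like in miniature: graph 3-coloring, a problem in NP, reduced in polynomial time to Boolean satisfiability, the archetypal NP-complete problem. The tiny brute-force satisfiability check at the end exists only to exercise the reduction; it is not, of course, an efficient solver.

```python
# Reduce 3-coloring to SAT: one Boolean variable (v, c) per
# vertex/color pair, meaning "vertex v gets color c".
from itertools import product

def coloring_to_sat(edges, n_vertices):
    clauses = []
    for v in range(n_vertices):
        # each vertex gets at least one of the three colors
        clauses.append([((v, c), True) for c in range(3)])
    for u, v in edges:
        for c in range(3):
            # adjacent vertices never share a color
            clauses.append([((u, c), False), ((v, c), False)])
    return clauses

def brute_force_sat(clauses, variables):
    for bits in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        if all(any(assignment[var] == want for var, want in clause)
               for clause in clauses):
            return assignment
    return None

edges = [(0, 1), (1, 2), (0, 2)]   # a triangle: 3-colorable
clauses = coloring_to_sat(edges, 3)
variables = sorted({var for clause in clauses for var, _ in clause})
print(brute_force_sat(clauses, variables) is not None)   # True
```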
As his "new favorite example" of an NP-complete problem, Avi Wigderson, a professor in the school of mathematics at the Institute of Advanced Study in Princeton, NJ, offers the familiar Sudoku puzzle. That might come as a surprise to anyone who has solved countless newspaper Sudokus in the time it takes to drink a mug or two of coffee, but Wigderson emphasizes that he is not talking about just the standard 9×9 grids. Sudokus can be constructed on any doubly square grid—16×16, 25×25, and so on—and the crucial point is that although many Sudokus may be easy to solve, no algorithm exists to solve an arbitrarily large Sudoku in polynomial time.
Showing that a problem is NP-complete establishes that no known algorithm can solve it in polynomial time. But does that mean no such algorithm exists, or merely that no one has invented one yet? Is the class NP, in other words, truly different from P? So fundamental is this question that it has the distinction of being one of the seven Millennium Prize Problems whose solution the Clay Mathematics Institute in Cambridge, MA, has deemed worthy of a million dollars. The majority opinion among mathematicians and computer scientists is that NP is not identical to P, says Sanjeev Arora, a professor of computer science at Princeton University, which would mean that NP problems are genuinely hard and that in all likelihood no easy method for factorization exists. However, no one has yet produced a proof.
Computational Complexity
Complexity theory has become immensely sophisticated, to the point that there are now close to 500 distinct complexity classes, distinguished by mathematical niceties of the methods that will solve them. This classification is far more than an abstract exercise, however. Understanding a task’s computational complexity can provide a novel perspective on real-world processes in which a task shows up.
One "hard" problem not in NP is the computation of so-called Nash equilibria, which arise in game theory and economics. A Nash equilibrium exists where each participant in a multi-player game is using an optimal strategy, taking into account the strategies used by the other players. That is, no single player can make a unilateral change that will improve his or her outcome. It’s recently been proved that the computation of Nash equilibria is complete for its class, known as PPAD, which means that a way to compute them efficiently would also solve other hard problems.
Nash equilibria crop up in the study of distributed decision-making, where parts of a system act as independent agents that determine the behavior of the system as a whole. As such, they are of great interest to economists, but their computational intractability raises the question of whether and how real markets actually compute Nash equilibria, or whether the types of Nash equilibria that arise in economics are of a simpler and more tractable kind.
These notions of hardness rest on the assumption that the Turing machine is a universal model for all methods of computation. That seemed like a safe bet until quantum computing came along. The bits in a quantum computer, known as qubits, can exist in superpositions of many states at once, as the strange rules of quantum mechanics allow. An operation on qubits in superposition can, in effect, perform many computations at once, and in 1994, MIT mathematics professor Peter Shor made use of this property to devise a quantum computer algorithm for factoring numbers in polynomial time.
It remains uncertain whether it is feasible to build quantum computers of sufficient size to do interesting factorizations. Moreover, Wigderson says, a quantum computer is not a nondeterministic Turing machine, because purposeful manipulations of qubits are not equivalent to unrestricted exponential branching. Shor’s factorization algorithm, which relies on certain properties of prime numbers in order to select the desired answer from the multiplicity of superposed computations, may be a special case. "There’s no hint that quantum computers can solve any NP-complete problem," Wigderson adds.
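The shape of Shor's algorithm can be seen even without the quantum part. Factoring N reduces, classically, to finding the order of a random number a modulo N; only that order-finding step requires the quantum computer. In the sketch below, which is our own illustration, a brute-force loop stands in for the quantum subroutine and is, naturally, exponentially slow.

```python
# Classical scaffolding of Shor's algorithm, for an odd composite n.
# find_order is the step a quantum computer performs in polynomial
# time; the brute-force version here is an illustrative stand-in.
from math import gcd
from random import randrange

def find_order(a, n):
    """Smallest r with a**r % n == 1 (a must be coprime with n)."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n):
    while True:
        a = randrange(2, n)
        d = gcd(a, n)
        if d > 1:
            return d                 # lucky: a shares a factor with n
        r = find_order(a, n)
        if r % 2 == 0:
            y = pow(a, r // 2, n)    # a square root of 1 modulo n
            if y != n - 1:           # skip the trivial root -1
                return gcd(y - 1, n) # nontrivial factor of n

print(shor_classical(8051))         # 83 or 97
```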
Beyond its significance for computation in the formal sense, Wigderson says that complexity theory sheds light on our intuitive idea of what it means to call a problem hard. He suggests that the characteristics of an NP problem—a solution cannot be obtained by any routine method, but when found is easily checked—are reminiscent of what happens when a scientist hits on a theoretical innovation that explains some mystifying experimental fact, or when a sculptor suddenly grasps how to complete a work of art. Both experience an "aha!" moment that tells them they’ve found what they want, even if they can’t explain where it came from.
Wigderson regards this analogy as more than a clever metaphor. If the brain is a computer—a fantastically complex computer, but a computer nonetheless—then leaps of intuition, creativity, and aesthetic judgments must ultimately be the results of reasonably efficient computations. Understanding these mysterious processes in algorithmic terms remains a far-off goal, but computer scientists, neuroscientists, and evolutionary biologists are beginning to use the notions of computational complexity and intractability to understand how humans and other animals process information and make decisions.
For example, Adi Livnat, a Miller Postdoctoral Fellow at the University of California at Berkeley, and Nicholas Pippenger, a mathematics professor at Harvey Mudd College, have developed a model that uses a sort of inverse of the Nash equilibrium to define conflict in neural systems trying to deal with warring impulses. An animal that has to endure electric shocks in order to obtain food, for instance, must balance desire for food against fear of physical harm. If these two urges are regarded as separate agents within the animal’s brain, conflict arises, Livnat says, because each agent tells the other, "I want you to do something else."
Evolutionary theorists have argued that the brain ought to develop systems to resolve warring impulses in an orderly manner, yet the fact that humans and other animals can be rendered helpless by indecision suggests this isn’t so. Livnat and Pippenger mathematically demonstrated that conflicting agents in the brain, each independently pursuing its own goal, can nevertheless contrive to produce a beneficial compromise. Crucial to their model was that computing resources were limited: in that case, it turns out that internal conflict can produce an adequate solution where a perfect solution is beyond the system’s computational capacity.
In general, however, understanding neural systems from a computational perspective is a formidable task. As Luis von Ahn, a professor of computer science at Carnegie Mellon University, points out, many of the tasks that computers can’t do are so easy for humans that we don’t think of them as requiring mental effort—recognizing the letter A no matter what style it is written in, for example, or telling whether a picture contains a cat.
As an empirical way to explore the types of computation that are involved in processes such as image perception, researchers have built neural networks or other adaptable computer systems that can "learn" from a set of training examples. While these efforts can produce improvements in task performance, understanding that improvement in direct algorithmic terms is difficult. Training tends to produce systems that are "completely incomprehensible" in algorithmic terms, says von Ahn. Knowing their structure doesn’t mean you know what they’re doing.
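A toy version of the problem von Ahn describes fits in a dozen lines. The single artificial neuron below learns the logical AND function from examples via the classic perceptron rule; what it ends up "knowing" is a handful of numbers, with nothing that reads like a procedure. Real trained networks differ mainly in having millions of such numbers. (The example is ours, for illustration.)

```python
# One neuron learns AND from examples via the perceptron update rule.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = [0.0, 0.0], 0.0

for _ in range(20):                  # a few passes over the examples
    for (x1, x2), target in examples:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out           # 0 if correct, else +1 or -1
        w[0] += err * x1             # nudge weights toward the target
        w[1] += err * x2
        b += err

print(w, b)   # [2.0, 1.0] -2.0: a working AND gate, but just numbers
```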
It’s far from clear, adds neuroscientist Larry Abbott of Columbia University, that knowing how computers cope with a task like pattern recognition would necessarily help in understanding how the brain does it. Abbott doesn’t doubt that the human brain works in an essentially algorithmic way, but "the hardware is very different." Compared to a computer, he says, the brain is "slow and unreliable," yet it seems far more internally active in that it constantly tries out different ways of processing information rather than sticking to a preordained routine.
Although some philosophers and other researchers disagree, computer scientists and neuroscientists almost universally believe the brain does what it does by performing extraordinarily elaborate computations. Millions of years of evolution have trained neural systems in a more or less ad hoc fashion to perform numerous tasks. Neuroscientists are trying to tease out the essential features of these systems by comparing them to computational models and algorithms. The day might come, says Abbott, when it is possible to understand an organism’s behavior in terms of its neural circuitry.