Architecture and Hardware

Embracing Noise or Why Computer Scientists Should Stop Worrying and Learn to Love the Errors

Geeky Ventures Founder Greg Linden
When people talk about differences between computers and the brain, they often emphasize the massively parallel processing of the brain.  There are roughly a hundred billion neurons in the brain, each firing at a relatively slow rate compared to the just a few, but very fast, processors in a modern computer.
There is another very important difference between the brain and computers: tolerance of error.  Computers are derailed if even a couple of bits are flipped during a computation.  Computers expect everything to be perfect.  Error recovery is an afterthought, a protection against what is assumed to be the rare event of a data read error.
By contrast, our wetware embraces error.  The brain is a cacophony, a battle of competing patterns, from which understanding emerges.  It is a system built on noise, one that expects constant error and recovery from error.  The brain abandons precision and thrives on approximation.
There are examples of embracing noise and error in computing.  Machine translation has made great strides by building likely translation rules from patterns in large amounts of data, then expecting, welcoming, and correcting frequent errors in the translated text.  Recommender systems do not assume they are perfect, only that they might be helpful, and expect to be judged only by how useful their lists of movies or books you might like are compared to the alternatives.  Web search spreads a query across hundreds or thousands of machines and simply ignores those that fail to respond quickly, relying on getting a good enough answer out quickly rather than a complete answer out slowly.
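The web search pattern above can be sketched as a scatter-gather with a deadline.  This is a minimal Python illustration, not any particular search engine's implementation; `query_shard`, the shard count, and the timeout are all hypothetical, and one shard is deliberately made slow to play the straggler.

```python
from concurrent.futures import ThreadPoolExecutor, wait
import time

def query_shard(shard_id, query):
    """Simulated per-shard search.  Shard 3 models a slow machine."""
    if shard_id == 3:
        time.sleep(1.0)  # hypothetical straggler: too slow to wait for
    return [f"shard{shard_id}:{query}"]

def scatter_gather(query, num_shards=5, timeout=0.3):
    """Fan the query out to every shard; keep whatever answers in time."""
    pool = ThreadPoolExecutor(max_workers=num_shards)
    futures = [pool.submit(query_shard, s, query) for s in range(num_shards)]
    done, not_done = wait(futures, timeout=timeout)
    pool.shutdown(wait=False)  # do not hold the response for stragglers
    results = []
    for f in done:
        results.extend(f.result())
    # Shards still in not_done are simply ignored: a good-enough
    # answer now beats a complete answer later.
    return results

hits = scatter_gather("noise")
```

Here four of the five shards answer within the deadline and the straggler's results are silently dropped, which is exactly the trade being described: partial results on time rather than complete results late.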
Often we find estimates are good enough.  Recent work on data mining in large clusters, such as Hadoop, has found orders-of-magnitude speedups can be gained if only an approximate answer — such as ±1% with 95% probability — is required.  Some computer security researchers are embracing the idea that classifying malware is not black or white but a gray area, and are starting to use techniques such as slowing suspicious processes rather than expecting to be able to identify and kill all wrongdoers.
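The "approximate answer with a probability bound" idea can be shown in miniature with plain sampling.  This sketch is not tied to Hadoop or any approximate-query system; it just estimates a mean from a random sample of the data instead of a full scan, and reports a 95% confidence half-width using the standard normal approximation.

```python
import random
import statistics

def approximate_mean(data, sample_size=10_000, z=1.96):
    """Estimate the mean from a random sample instead of a full scan.

    Returns (estimate, half_width): the true mean lies within
    estimate +/- half_width with roughly 95% probability (z = 1.96).
    """
    sample = random.sample(data, min(sample_size, len(data)))
    mean = statistics.fmean(sample)
    stderr = statistics.stdev(sample) / len(sample) ** 0.5
    return mean, z * stderr

# A synthetic million-row "table" with mean 100: scanning 1% of it
# gives an answer good to a fraction of a percent.
random.seed(0)
data = [random.gauss(100, 15) for _ in range(1_000_000)]
est, half_width = approximate_mean(data)
```

The speedup comes from touching 10,000 rows instead of a million; the cost is an error bar, which, as the examples above suggest, is often a perfectly acceptable price.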
Of the lessons we take from biological systems, one should be that wetware is sloppy.  And that is okay.  Precision is not required in everything, or even most things.  Failures are best handled by expecting them all the time and dealing with them routinely, not by treating them as exceptions.  We should stop worrying and learn to love the noise.