Near the end of January 2014 I was privileged to participate in a conference organized by the Internet Society's chapter on Interplanetary Communication (www.ipnsig.org). The primary topic of discussion was the challenge of deep space communication, where the speed of light becomes an issue. Even within the solar system, we are confronted by one-way transmission delays measured in minutes to hours; a one-way signal to Pluto takes several hours. In the course of the day, we learned about protocols that deal with variable delay and disruption for both deep space and even terrestrial communication. In the latter case, the issues are more about loss of connectivity, periodic or random disruption, and uncertainty brought about by store-and-forward operation in which each hop may take an arbitrarily long time.
We learned about application experiments to provide the Sami reindeer herders with communication in the far north using data mules (all-terrain vehicles) that carry information, drop it off in town, pick up any new information, and carry it to other towns in the area. Sounds a bit like USENET and UUCP, doesn't it?
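The data-mule arrangement is classic store-and-forward: a carrier holds messages for an arbitrary time, drops off anything addressed to the node it visits, and picks up whatever is waiting to go elsewhere. A minimal sketch of the idea (the `Town` and `Mule` classes and the town names are invented for illustration, not part of the actual Sami experiment):

```python
# Toy store-and-forward "data mule". A mule carries (destination, payload)
# pairs between towns; delivery happens whenever the mule happens to visit.

class Town:
    def __init__(self, name):
        self.name = name
        self.outbox = []   # messages waiting for a mule to pick up
        self.inbox = []    # messages delivered to this town

    def post(self, dest, payload):
        self.outbox.append((dest, payload))

class Mule:
    def __init__(self):
        self.cargo = []

    def visit(self, town):
        # Drop off anything addressed to this town...
        for item in list(self.cargo):
            dest, payload = item
            if dest == town.name:
                town.inbox.append(payload)
                self.cargo.remove(item)
        # ...then pick up everything waiting to go elsewhere.
        self.cargo.extend(town.outbox)
        town.outbox.clear()

a, b = Town("Kautokeino"), Town("Karasjok")
a.post("Karasjok", "herd location update")
mule = Mule()
mule.visit(a)   # pick up in the first town
mule.visit(b)   # deliver in the second
```

The resemblance to UUCP is real: there too, each hop stored a message locally and forwarded it opportunistically, with no end-to-end connectivity assumed.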
One of the participants was an invited guest speaker and noted science fiction writer, David Brin, whose many works have been a source of inspiration and unexpected challenge to me and many readers around the world. In his talk, Brin reminded us about the famous Drake Equation, which is not about ducks but about estimates of intelligent civilizations in our galaxy (see http://en.wikipedia.org/wiki/Drake_equation).
There are a lot of parameters, most of which we do not really know how to set. The output of the equation might range from one to a very large number, depending on how the parameter values are chosen. Of course, one could argue the lower bound should be 0 (zero), given the uncertainty about whether Earth's civilization(s) shows signs of intelligence. I have always thought the famous Search for Extraterrestrial Intelligence (SETI; http://www.seti.org/) was started because we had not found any intelligence here on Earth!
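The Drake Equation itself is just a product of seven factors, N = R* · fp · ne · fl · fi · fc · L. A tiny sketch shows how wildly the output swings with the parameters (the values below are illustrative assumptions only, not estimates anyone endorses):

```python
# Drake equation: N = R* * fp * ne * fl * fi * fc * L
# R*      - rate of star formation in the galaxy (stars/year)
# fp      - fraction of stars with planets
# ne      - habitable planets per star with planets
# fl      - fraction of those that develop life
# fi      - fraction of those that develop intelligence
# fc      - fraction of those that emit detectable signals
# lifetime- years such a civilization keeps signaling (L)
def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Generous guesses: thousands of civilizations.
optimistic = drake(7, 0.5, 2, 0.5, 0.5, 0.5, 10_000)    # 8750.0
# Stingy guesses: effectively none.
pessimistic = drake(1, 0.2, 1, 0.05, 0.01, 0.01, 1_000)  # 0.001
```

Nothing in the arithmetic is hard; the difficulty is that several of the fractions are, at present, pure guesswork.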
There is the famous Fermi Paradox, often posed alongside the Drake Equation: if there are intelligent civilizations in our galaxy, why have we not yet detected any evidence of them? Brin had a particularly scary answer to Fermi's question. What if we are the ones who are supposed to light the galaxy? What if our species is destined to spread outward from Earth to populate the galaxy? I have to say, that question ran some chills up and down my spine. My first thought was "What if we don't last long enough to develop the capacity to achieve that goal?" What if we just blow the whole mission, mess up the planet, and destine our species to oblivion? Holy Moley, what if he is right about that?
I cannot speak for anyone else, but I think I would think somewhat differently about a lot of things. I would be thinking more long-term and be worried about the sustainability of our planet and the species that inhabit it. I would wonder what we should be developing to fulfill this mission. What technologies do we need to expand beyond our planet and our solar system? How should we prepare ourselves for such an ultimate goal?
There is another side to this, of course. What if, in the process of preparing ourselves to send some of our species to other systems, we encounter a similarly inclined species? How would we communicate with them? What common experience would inform our attempts to communicate? It even occurred to me this question might well be asked in contemplating communicating with our distant descendants. What Rosetta stone should we prepare? What would we want them to know about us, assuming they might struggle with this same titanic question? We might even begin by exploring our ability to communicate with other apparently sentient species here on Earth: dolphins, hominids, elephants, and others.
Can our growing skill in the use of computers help us here? Is artificial intelligence an important milestone on the path toward populating the rest of the galaxy? Are our descendants destined to be silicon analogues of human beings?
That is what I like about David Brin and science fiction in general: a lot of it makes you really think.
Vinton G. Cerf, ACM PRESIDENT
The Digital Library is published by the Association for Computing Machinery. Copyright © 2014 ACM, Inc.
I must admit, Dr. Cerf, that when I saw the title of this column my immediate reaction was - in the immortal words of Pogo - "We have met the enemy and he is us!"
I suspect that even without the challenge posed by Mr. Brin, our goal as a species - regardless of whether there are others in the universe or not - is to be a bright light. The alternative is to be what we are - and that leads to a dismal and arguably doomed future. It should make no difference whether or not there are other intelligent life forms anywhere else.
"Silicon analogies" are not human beings. They may be the product of our minds, at least until we have succeeded in programming them to be intelligent enough to "procreate" themselves but they are not human. We may one day be able to write programs that will allow a computer to write sonnets or love songs - do you really believe we will ever be able to write the program that conveys sentiment/feeling/concern? Those are the qualities that make us human.
And if intelligence is going to be measured by how closely the computer can mimic human intelligence, then we are in trouble all over again.
For a significant sharpening of the Fermi paradox using known facts about rocket energy efficiency, the density of intergalactic space dust, etc., see this paper:
"What would we want them to know about us, assuming they might struggle with this same titanic question?"
The Arecibo message is perhaps a starting point.
Bruce Cohen's points have captured the attention of many over the years and likely millennia, setting aside specific questions about computers and intelligence. A case can be made that biological forms are in some sense biochemical "computers," so intelligence of the human sort need not be confined to biochemical beings. It is interesting to note that the recent Nobel Prize in chemistry went to three scientists whose keystone work was done using computational modeling rather than "wet laboratories." I don't know about you, dear readers, but I feel a bit humble about making any absolute statements about the potential of artificial intelligence, however it is derived. At the least, it is arguable that even present-day computers can do things that we biochemical beings cannot, so there may be a powerful partnership to look forward to.
We have - I mean, our literature has - always assumed that we are not the first; that there are more advanced versions of ourselves out there...maybe some on their way here. How will they treat us? How will we treat them? Look at our own history, which is rife with examples.
The odds are that if they reach us, we can expect to be conquered at worst, or subjugated and exploited at best. The extremes range from being totally wiped out, the way we disinfect a house, to a Prime Directive of non-interference until we mature.
I have an answer for Dr. Fermi and his seeming paradox "Where is everybody?"
The answer is called Jones' First Law. Back in the 1980s, I created a set of laws in the manner of Asimov and Clarke. The First Law states: "Given a problem to solve or a decision to make, mankind invariably chooses the 'Easy Way' out, even though it is not always the right way." The 'Easy Way' can be qualified by rating ease of use, effort expended achieving a goal, efficiency, effectiveness, and efficacy. The law can be expressed as a binomial expression with probability parameters for each variable.