One hundred and fifty years ago, if you wanted a piece of information you most likely had to travel some distance to get it. Perhaps it was held in your own library, or perhaps a library nearby. The more obscure the information, the harder it would be to retrieve. You might have to travel to a large city or even to another country. You could write a letter to someone who you thought might have the information you needed. Retrieving information could take quite a bit of time, and might not be possible at all.
Fifty years ago it became easier. Information could be retrieved via telegram or teletype. You could ask someone on the phone. You could watch a TV show or listen to radio and hope that the information you desired would be broadcast. Libraries expanded all over the globe, providing easier access.
Ten years ago you could type in your request and have access to global information nearly instantaneously. And a few years later that technology could sit in your pocket and you could access it on the train. The only true limitation today is the noise involved in how that massive amount of information is sorted and processed.
How long will it take for us to have access to the Internet in our brains?
Singularity, in its current popular form, describes the point where exponential technology growth becomes explosive, transforming our entire society into something new. Technology grew steadily throughout the twentieth century, but look at how much has changed in just the last 20 years. How long will it take for our technology to surpass human levels of intelligence and start building new intelligence of its own?
Will the human era end and a machine era begin? Is the process hurtling forward faster than we can comprehend? Could we see the rise of transhuman machine-organic hybrids or entirely machine-based intelligence in the coming decades? Many futurists point in this direction. Technological singularity was proposed in one form by the mathematician I.J. Good:
“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”
Science fiction writer Vernor Vinge refined the concept in a 1993 essay entitled “The Coming Technological Singularity.” Vinge focuses on the point at which technology is growing so quickly that we can no longer predict what comes next, and the future becomes a world very different from the one we know today.
One prominent author of singularity speculation is Ray Kurzweil, who has built a following around the idea with a series of books over the last decade, including the 2005 volume The Singularity is Near: When Humans Transcend Biology.
Kurzweil has wrapped the concept into his health obsession. Will advances in technology lead to the extension of human life? Of course, they already have: medical science is developing new technologies as we speak. Machine technology acts in concert with the human body to keep people alive. Today a pacemaker seems like a rather rudimentary device. What will the future hold? How far will this go? Kurzweil argues that it will ultimately transform human life as we know it.
Singularity is an important consideration in the search for extraterrestrial intelligence. Many SETI researchers and physicists expect that if we ever contact an extraterrestrial civilization, it will be represented by machine intelligence. For that matter, the civilization may itself be a machine-based intelligence. Is it possible that this transition from organic intelligence to machine intelligence is a natural progression for intelligent life? If so, what would visiting alien machines consider when evaluating our civilization? I have talked about the advent of the Internet as a milestone for human development. As viewed by an extraterrestrial, it could be seen as a sign that we are ready for First Contact. Perhaps, though, the real development would be the singularity, the point at which our civilization moves from organic life to machine-based intelligence. That transition could be the marker at which a machine-based extraterrestrial would be ready to contact life on Earth.
A machine intelligence based First Contact scenario could go many different ways. Perhaps an alien civilization merely uses machine technology to do the exploring? It makes sense to use machine probes: they can travel the vast expanses of space without needing to sustain organic life. They could replicate and cover a much wider area. We already use artificial intelligence in our exploration of the solar system. The Opportunity rover on Mars has been updated with firmware called AEGIS, which stands for “Autonomous Exploration for Gathering Increased Science”.
It’s a form of elementary artificial intelligence. We have been using such programs to direct space exploration probes for more than a decade. If an alien civilization used such a probe, what would its primary objective be? Could we engage in real dialogue? What would a probe be able to tell us about its parent civilization?
Machine intelligence of a higher order involves beings inhabiting machine-based bodies. This could be a mind inhabiting a robot body of some sort, or it could be an entirely new form of life in which organic and machine life have merged.
Kurzweil is optimistic that transcending biological limitations in favor of machine-based life will have a positive impact on our society and even our planet. He envisions a world where many of the world’s problems are solved thanks to the explosion of technology. One can also imagine the downside of such a transformation. At what point do we risk losing our soul? At what point do we cease to be human? And are these steps we wish to take?
Communication with an extraterrestrial machine-based intelligence raises many concerns. Would we be able to fully understand the motives or actions of such an entity? Would the concept be too radical for the majority of humans to accept? Would we feel threatened? One could say that any contact with an extraterrestrial civilization raises similar issues. However, I think that as humans we tend to put more trust in what we consider “natural” than in the machines we have created. This may be the archaic thinking of an old world, or it may be a very real issue that we need to confront with First Contact.
Kurzweil doesn’t seem very concerned about alien intelligence, instead proposing that human intelligence will propagate, creating an awakening of the universe. But what if such a wave of change is already coming at us from an alien civilization?