Artificial intelligence (AI) could be a big issue in extraterrestrial
First Contact. The first aliens we meet may be mechanical creations controlled
by artificial intelligence. Thus far on this blog I have considered primarily
the alien side of artificial intelligence, but what of human AI developments?
Tad Friend has an excellent piece in The New Yorker that explores worries over
the rise of human-designed AI.
Researchers use three primary terms to describe human AI:
- Artificial Narrow Intelligence (ANI) is the term used to describe technology such as Apple’s Siri and self-driving cars. We have this now, in varying degrees of sophistication.
- Artificial General Intelligence (AGI) is defined as intelligence at the level of a human. It is the goal of many companies and institutions currently working in the field. It is generally agreed that AGI has not yet been achieved, although the boundaries are being pushed daily.
- The ultimate level of AI is called Artificial Superintelligence (ASI): intelligence beyond the capabilities of even the most gifted humans.
Defining AI based on human intelligence could be tough in
the future, especially if we find ways to boost our own intelligence
artificially. One could argue that your smartphone is a first step toward that
intelligence boost. What happens when your smartphone is directly connected to
your brain? Of course, AI researchers would say that access to information is
not intelligence. Intelligence, in the narrow definition, is the ability to solve
problems. Some would argue that a true test is the ability to understand
shadings of meaning in language and use imagination to solve problems. Our
decision to base AI levels on a comparison to human intelligence is perhaps a
symptom of our anthropocentric thinking. However, it is also the only measure
we currently have. Other definitions are not human-based but scale-based: weak
AI versus strong AI. This, of course, creates its own problems as technology
progresses. What is strong AI one day could seem weak a few years later.
There is further debate: pioneering AI researcher Judea Pearl suggests that we
rate AI not by reasoning by association (simply looking for correlations in
data) but by causal reasoning, asking how relationships would change if there
is an intervention. Kevin Hartnett has a story explaining this in the Atlantic
Monthly.
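To make that distinction concrete, here is a minimal Python sketch of my own (a toy example, not drawn from Pearl's or Hartnett's work): low atmospheric pressure causes both a low barometer reading and storms. Association notes that a low reading predicts storms; causal reasoning asks what happens if we intervene and force the needle down ourselves.

```python
import random

def sample(intervene_barometer=None):
    """One draw from a toy world: low pressure causes both a low barometer
    reading and storms; the barometer itself causes nothing."""
    low_pressure = random.random() < 0.2
    barometer_low = random.random() < (0.95 if low_pressure else 0.05)
    if intervene_barometer is not None:
        barometer_low = intervene_barometer  # the do() operation: set the dial by hand
    storm = random.random() < (0.8 if low_pressure else 0.05)
    return barometer_low, storm

def p_storm_given_barometer_low(n=200_000):
    """Association: among runs where we merely observe a low reading, how often does it storm?"""
    draws = [storm for bar, storm in (sample() for _ in range(n)) if bar]
    return sum(draws) / len(draws)

def p_storm_given_do_barometer_low(n=200_000):
    """Intervention: force the reading low, then see how often it storms."""
    draws = [storm for _, storm in (sample(intervene_barometer=True) for _ in range(n))]
    return sum(draws) / n

if __name__ == "__main__":
    print("P(storm | barometer low)     ~", round(p_storm_given_barometer_low(), 2))    # roughly 0.67
    print("P(storm | do(barometer low)) ~", round(p_storm_given_do_barometer_low(), 2))  # roughly 0.20
```

An AI that only learns associations would treat those two numbers as the same thing; an AI that reasons causally, in Pearl's sense, would not.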
So, let’s get back to the aliens. I have said before that
there could be two basic types of alien AI machines exploring the universe:
Biologically Originated Intelligence (BOI) and Artificially Originated
Intelligence (AOI). The difference is simple: did biological beings create the
intelligent robots cruising through space or did other artificial intelligence
create those mechanical explorers? The answer has big implications for
humanity. We worry about the rise of AGI and ultimately ASI. Will humans become
extinct? Will we morph into increasingly mechanical beings? Will ASI decide to
get rid of us or perhaps leave us behind to explore the universe while we struggle
here on Earth? Those questions are far-fetched considering our current level of
technology. But the concerns they would raise in First Contact could have a direct
impact on our relationship with alien AI. We could well understand a
sophisticated alien probe controlled by biological creatures. However, a probe
with AGI or ASI capabilities would be a concern. There would be an inherent
threat involved in any alien machine visiting our solar system. The worries
would increase in direct proportion to how different that visiting life form
appeared to be from us. A big question could be the relationship between the
original alien biological creatures and their created AI. Do they exist
together in harmony? Or did the AI grow to supplant the biological beings? If
the latter is the case, it would likely create a great deal of concern among
humans. We could find ourselves with some major issues to consider, ranging
from what kind of contact we would want to have with alien AI to how much
further we want to go with the development of human created AI. One could
imagine quite a bit of angst on the part of humans. Certainly it would be a
tough way to start a relationship.
And perhaps that is the reason aliens have not said hello
yet: they are in fact advanced AI and don’t know if we can handle the idea or
the threat. Alien AI might be better off waiting until we are further along on
the evolutionary scale, if that is indeed where it leads.
I think the best message to humans under such a First
Contact scenario would be this: we don’t have to follow the historical path of
aliens. We are early enough in our AI development to choose a different road,
perhaps with more closely controlled AI. Before freaking out, we should
carefully study alien history. It could show that the biological creatures
moved willingly, over time, to increasingly mechanical based bodies. Perhaps AI
merely assisted the biological intelligence until the two became
indistinguishable? We would certainly want to request a timeline of alien
history as part of our initial contact.
In the meantime, we need to keep using that human imagination. We can create
more fictional stories that explore these issues; fiction is our best way to
conceptualize such matters. One can scoff at books and movies as mere
entertainment, but when the ideas behind such stories have weight and are
thoughtful, they may be our best way of considering how we want to proceed
with AI.
Photo by Riccardo Annandale on Unsplash