Science fiction, especially in movies, often portrays heroes and villains. In fact, nearly all of humanity’s collective fiction uses some form of bi-moral model, pitting protagonists and antagonists against each other.
And though the case for which side should win is not always clearly defined (as with the “anti-hero,” for example), most of us will generally fall in line with the goals, outlook, and values of the protagonist.
Take that thought and let it sit; we’ll return to it later.
Now, as humanity moves closer to developing faster and more intelligent artificial intelligence, one thing I, along with many philosophers and ethicists, believe is that modeling future circumstances is imperative to the responsible development of this technology.
We already see it all around us: algorithms curate our content and advertising, and tell us what style we should fit into. Dating apps like Tinder, Bumble, and OkCupid rely on algorithmic matching more heavily than ever before.
The immediate threat we face in these circumstances is being placed into a kind of “echo chamber,” where everything we see and experience on the internet is catered to us. In my opinion, this makes us less and less capable of empathizing with people whose realities diverge from our own.
Now cut to more far-reaching points in the future, explored by such minds as the philosophers Sam Harris and Nick Bostrom (especially in his book, Superintelligence). As I’ve mentioned in earlier posts, I’m reading Do Androids Dream of Electric Sheep? by Philip K. Dick, a book set in a world where androids have become intelligent enough to disobey their orders, and their design, in order to reach Earth.
I won’t spoil the story if you haven’t read it (and everyone should; it’s a short, concise read), but it has made me think about and explore certain aspects of our society that I believe require closer inspection.
In Harris’ TED Talk, he explores what happens when we finally develop “superintelligent” AI, and what the implications of such a development could be for humanity. More specifically, he walks us through the idea that once artificial intelligence systems go from being improved by humans to improving themselves, all certainty of humanity’s survival immediately goes out the door.
Harris uses a very interesting analogy to make this point. He considers what happens when we come in contact with ants: simply put, there are many times we try to avoid destroying them, whether out of a general queasiness about killing something or out of some kind of empathy for all living things.
Yet everyone reading this can recall times when they found ants in their food and, without flinching, completely obliterated them. Or consider how we gas cockroaches and lay traps for mice; the list goes on and on.
This is because these organisms enter a space in our consciousness where they come into conflict with our goals in one way or another. Because they are weaker than us, we have almost no qualms about destroying them.
Once an artificial intelligence system can improve itself, its capability is bounded only by the speed of computation available to it. We know that human thinking is vastly slower than that of even our most basic circuits, by some current estimates at least 20,000 times slower. A question becomes very apparent at this point, one which I believe many have not thought about:
“Why would an artificial intelligence system which can improve itself want, desire, or need to stay within the boundaries of human control?”
In a true nightmare scenario, as Harris explores, the artificial intelligence system, whether in humanoid or box-computer form, could very well apply to us the same logic and empathic response we employ toward the pests in our home eating into our box of cereal.
Media and Harris’ “Failure of Intuition”
We see examples of this dynamic in our media all the time, but I believe we fail to make the connection between fiction and possible realities.
Take 2001: A Space Odyssey as a prime example. The HAL 9000 (Heuristically programmed ALgorithmic computer) was designed to learn and to aid a crew of scientists on a mission to investigate a monolithic structure near Jupiter. Yet the iconic line “I’m sorry, Dave. I’m afraid I can’t do that” demonstrates the point above: when an artificial intelligence system sees human intervention as contrary to its own goals, value sets diverge, and the only solution is to act independently, in this case with extreme prejudice.
This is where the subtlety of our predicament begins. In films like The Terminator and others, the protagonist and antagonist roles are clearly defined: it’s us against the machines. But the conflict right now isn’t necessarily an “us vs. them” dichotomy; both sides lie within ourselves.
The conflict arises between our desire to push ever further and our ability to foresee the consequences of progress that can, in effect, progress itself.
So where do we go from here? Harris notes that aspects of our reality, such as famine, are “not fun.” It’s difficult, if not impossible, to portray famine as anything but negative in our collective canon of storytelling.
I suggest we as a civilization begin to explore more channels of storytelling that break through this “us vs. them” mentality and look more closely at the “us vs. us” model, something we have done before.
Nuclear weapons, biological agents, poverty, and many other examples have brought humanity to the question of “why go further?” I argue, as many others do, that we should treat artificial intelligence with a more complex narrative approach; our existence, and the realities of future generations, may depend on it.