The terminators from Skynet, the agents of the Matrix, the Decepticons… Hollywood has done a good job portraying artificial intelligence (AI) as an existential threat to the human race. The scary thing is that this idea may not be purely science fiction. In fact, many leading technologists today share the concern that at some point in the not-too-distant future, humankind could be beholden to super-intelligent computer overlords. That is a scary thing to think about, and even if the worst does not come to pass, AI will certainly impact everyone’s life in some form or another. So let’s look at some history of AI, its current state, and its potential risks and possible outcomes.
When I was younger I was fascinated by the story of the chess-playing Turk. The Turk was a turban-wearing mechanical figure (I imagined it looked like Zoltar from the movie Big) built onto a large cabinet topped with a chess set back in the 1770s. This mechanical Turk was not only able to move chess pieces on the board, it actually beat most of the players who challenged it. In fact, the Turk counted Benjamin Franklin and Napoleon Bonaparte among its many victims. During its tours of Europe and the United States, most observers knew there had to be some sort of trick, but no one could accurately describe how it worked. It turned out there was a chess master hiding in the cabinet, operating the machine, but the deception was so good that people began to wonder… what if the Turk is real? While a very early example of man’s contemplation of the possibility of intelligent machines, the Turk was by no means the first. It simply shows us that the idea of artificial intelligence goes back a very long time.
Fast forward to May 1997, when IBM’s Deep Blue defeats Garry Kasparov to become the first computer program to win a match against a reigning chess champion. Deep Blue’s victory was the realization of what the Turk had been masquerading as 220 years earlier. This time, however, there was no one hiding in the box. Computers have only gotten stronger since then and have conquered numerous other arenas formerly thought to be the domain of human thought and cognition.
2011 – IBM’s Watson defeats former champion Ken Jennings in Jeopardy!, showing that computers can be programmed to understand many complex idiosyncrasies of human speech and knowledge.
2016 – Google’s AlphaGo engine defeats Lee Sedol to become the first computer program to beat a top-rated Go player. The ancient Chinese game of Go was considered one of the final realms of human dominance due to its abstract strategies and its exponentially higher number of possible moves and game variations.
It is predicted that within 10 years (some say within 5) a computer will be able to pass the Turing test. Alan Turing (most recently portrayed by Benedict Cumberbatch in The Imitation Game) was a mathematician, code-breaker, and pioneering computer scientist who also happened to be one of the leading thinkers on AI. His test was simple: a machine could be said to be intelligent if a human interrogator could not tell it apart from a human being in conversation. At the moment, no one would ever mistake Siri for a human being, but over time, as Siri records millions more requests and learns more about us and our quirks, that may well change.
Artificial intelligence is something that every major technology firm is working on right now, with the most popular of these efforts coming in the form of voice-activated assistants on our smartphones and computers. Siri, Alexa, Cortana, and “Hey Google”… all of these are efforts to make our lives easier by recognizing requests and providing appropriate and helpful responses. Over the next few years these will continue to improve in accuracy, scope, and utility.
It is not hard to imagine that before long, an assistant like Siri could recognize symptoms or changes in behavior and suggest a medical diagnosis without any prompting from you at all. Developments like these are largely positive, and many more capabilities of this type are on the horizon.
Today it seems as if the machines are taking over even the more mundane aspects of human life – more self-checkout lanes at the grocery store, the advent of so-called “robo-advisors” giving software-generated investment advice, and smart meter technology automatically controlling and monitoring the energy usage of homes are all examples. So what are the reasonable conclusions to draw from this invasion of robots into our lives?
Honestly, it should be for the better, but there are certainly risks.
In his book Superintelligence, Oxford professor Nick Bostrom argues that a self-learning computer intellect could take off so fast that humans would not be able to adjust. If safeguards are not in place at the outset, it could spell disaster. Even a seemingly innocuous program designed to, for instance, optimize factory efficiency and output could pose a threat if it became super-intelligent. If such a program decided that accomplishing its mission required amassing resources at all costs, and if destroying the human race made those resources easier to amass, we would be in trouble. If this sounds like the mad ravings of just one geek, consider that many other serious, intelligent technologists share the concern. Stephen Hawking, Elon Musk, and Bill Gates have all cited the rise of artificial intelligence as one of the most serious and immediate potential threats to human existence.
The main risks associated with AI are as follows:
Machine intelligence has no moral compass (unless one is programmed in), so accomplishing a programmed mission would supersede any obligation to human safety. Even with a moral protocol in place, there is nothing to say it could not be hacked or patched to circumvent its programming. Suppose an AI has a program that prevents it from harming humans; a patch could be installed that makes exceptions for terrorists or other bad actors. At what point does a super-intelligent AI decide to write its own patches and determine for itself who is a bad actor?
We have already seen that a machine learning by interacting with society can have seriously negative outcomes. Microsoft’s chat-bot experiment is a good example. The chat-bot, named Tay, made its debut on Twitter, and within 24 hours it had turned into a racist jerk. The bad news is that this was not a glitch in the system; it was merely a reflection of society as it is.
Even if strong safeguards are built into an AI’s moral protocol, there is still uncertainty in how those safeguards would be executed. A super-intelligent AI may be able to learn faster than a human, but that does not mean it will reason like a human (in fact, we should assume as a base case that it will not). So what would seem like sound safeguards to us, such as a programming imperative to act in the best interests of the greatest number of humans, could be interpreted by the AI in a way we would find utterly unacceptable, such as killing all infants with a potential predisposition to violence.
While some people view this type of super-intelligent machine as somewhat inevitable, I am not so sure. In the movie Transcendence, Johnny Depp’s character downloads his brain into a computer. One question the characters ask the machine is, “Are you self-aware?” To which the machine gives a vague response that the programmers recognize as pre-programmed. This question leads to a very important point. We are able to design machines that can “learn” from vast troves of data, but at what point does such a program become self-aware or develop a consciousness? None of the literature on artificial intelligence that I have read has done a good job explaining that leap from self-improving algorithm to fully self-conscious intellect. So when and how does that lightbulb go off?
So far, AI does what we tell it to do. We tell the program to play chess, and it plays chess. We tell it to aggregate medical records data to help physicians make better diagnoses, and it does that. Until I see a convincing argument that the ability to learn can automatically lead to self-awareness, consciousness, and an ability and willingness to make decisions of its own, my fear of artificial intelligence as an existential threat is fairly low. Then again, I am a money manager and writer, not a computer programmer. I am still merely a highly interested, if somewhat ignorant, observer.
Facebook founder Mark Zuckerberg is currently working on an artificial intelligence to serve as a home and office assistant, “like Jarvis from Iron Man,” as he describes it. Let’s hope these efforts do not turn into Ultron.
Until next time…..
“I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. I mean with artificial intelligence we’re summoning the demon.” – Elon Musk
Disclaimers: All photos are the property of their owners. The cover photo is a promotional photo from Terminator Genisys. This blog has no affiliation with any movie company, entertainment company, or with Wikipedia.