“Alexa, I want the truth.”
The AI era has arrived.
For those who don’t know, Alexa is the voice assistant behind Amazon’s Echo, a hands-free speaker you control with your voice, and she is getting smarter every day. Some will tell you this marks the onset of an AI [artificial intelligence] led revolution. The demand for engineers and coders has reached an all-time high as companies look for new ways to innovate in the realm of artificial intelligence.
In 1965, Gordon Moore introduced a hypothesis that would set the tempo for the modern digital revolution. From careful observation of a developing trend, Moore theorized that computational power would increase exponentially while its relative cost fell. In other words, Moore’s law claims that the overall processing power of computers doubles every two years. His observation became the golden rule for the tech industry and a starting point for innovation within the applied sciences. For fifty years the industry followed his rule; in the past five years or so, however, the tables have turned, and we are starting to witness a change in the landscape of the digital environment.
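Moore’s doubling rule is just compound growth. A minimal sketch (normalized units, illustrative numbers only) shows how fifty years of two-year doublings stack up:

```python
def moores_law(initial=1.0, years=0, doubling_period=2):
    """Project relative processing power, assuming it doubles
    every `doubling_period` years (Moore's rule of thumb)."""
    return initial * 2 ** (years / doubling_period)

# Fifty years at a two-year doubling period: 25 doublings in all.
growth = moores_law(initial=1.0, years=50)
print(growth)  # 2**25, i.e. roughly a 33.5-million-fold increase
```

That compounding is why even a “mere” three-generation jump, like the one discussed next, is such a big deal.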
Technological giants such as Amazon, Facebook, Google, and Microsoft are the forerunners of a cyber revolution marked by rapid technological growth. For instance, just last week, Google opened up about its Tensor Processing Unit (or TPU), the server chip it uses in-house to perform AI computing workloads more efficiently. Google claims the chip advances machine-learning capability by three generations of Moore’s law, roughly equivalent to fast-forwarding technology seven years into the future.
Further, we find ourselves in the age of self-driving cars. Issues during test runs of these vehicles have drawn attention to the need for self-correction mechanisms. In 2015, Google’s driverless car ran into a surprising safety predicament: humans. Human error on the road is tough to deal with when the car has been programmed to be a stickler for the rules. In 2009, for instance, the car was unable to make it through a four-way stop because its sensors waited for the other cars to stop completely before letting it proceed through the intersection. The human drivers, however, kept failing to come to a complete stop, immobilizing the Google vehicle. Some have called the car ‘too safe’, a case that accentuates the incompatibility between humans and machines.
These situations introduced an inherent need for self-correction mechanisms. Driverless cars need to rely on feedback loops for self-correction, adapting to road conditions and learning the unique behavior of the drivers around them. A video released by MIT earlier this year shows how self-correction technology is coming to life. An industrial robot picks up either spools of wire or cans of spray paint and drops each item into the corresponding bucket. When the robot starts to make the wrong distinction, it pauses, self-corrects, and drops the item into the appropriate container. The corrections are triggered by an observer wearing an EEG cap who merely notices that something is off.
The EEG cap measures forty-eight different signals from the human brain, many of which are noisy and difficult to interpret. One signal, however, known as the ‘error potential’, is relatively easy to detect: the brain emits it strongly when the user notices that something is wrong. Incorporating the error potential is quite different from the usual paradigm used today, i.e. asking the human to program the machine in the machine’s own language. What we see here are programmers getting robots to adhere to human signals rather than having humans conform to the robot’s language. A robot performs some basic task under human supervision, and the supervisor can override the task without having to code a correction or stop the machine by physical means.
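The supervision loop described above can be sketched in a few lines. Everything here is a simplification: the threshold, the noise model, and the two-bucket task are hypothetical stand-ins for illustration, not MIT’s actual pipeline.

```python
import random

ERROR_POTENTIAL_THRESHOLD = 0.7  # hypothetical detection threshold

def read_error_potential(choice, correct_bin):
    """Stand-in for the EEG signal: the error potential spikes
    when the observer sees the robot heading for the wrong bucket."""
    base = 0.9 if choice != correct_bin else 0.1
    return base + random.uniform(0.0, 0.25)  # simulated sensor noise

def sort_item(item_bin, classifier):
    """The robot guesses first; if the observer's error potential
    crosses the threshold, the robot flips to the other bucket."""
    choice = classifier(item_bin)
    if read_error_potential(choice, item_bin) > ERROR_POTENTIAL_THRESHOLD:
        choice = "paint" if choice == "wire" else "wire"  # self-correct
    return choice

# A deliberately unreliable classifier, to exercise the correction path:
flaky = lambda _: random.choice(["wire", "paint"])
results = [sort_item("wire", flaky) for _ in range(20)]
```

Because the simulated noise stays below the threshold gap, every wrong guess is caught and flipped; real EEG signals are far messier, which is exactly why only the error potential, the one loud signal, is practical to use.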
It may be an exciting period, but hidden behind all the technological buzz is a hefty elephant in the room that is publicly neglected. When machines are able to respond to us through a feedback loop of self-correction rather than us responding to them, we are giving up our control over them. This is what feeds the fears of Elon Musk, Stephen Hawking, and Bill Gates, all of whom have voiced concerns about artificial intelligence technology.
In 2015, the three signed an open letter on artificial intelligence calling for research on the societal impacts of AI. While society can reap great rewards from AI, it must be careful to avoid potential pitfalls, such as creating something unmanageable. The letter, titled ‘Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter’, details research priorities for AI and documents its inherent vulnerabilities.
By 2014, physicist Stephen Hawking and magnate Elon Musk had publicly voiced the opinion that superhuman artificial intelligence may provide innumerable benefits but could spell the demise of the human race if exercised irresponsibly. Both Hawking and Musk sit on the scientific advisory board of the Future of Life Institute, an organization committed to mitigating existential threats facing humanity. The institute circulated the letter within the AI research community in early 2015, and it was soon after released to the public.
The purpose of the letter was to identify the positive and negative impacts of AI research and development. The challenges that arise are separated into verification [“Did I build the system right?”], validation [“Did I build the right system?”], security, and control [“I built the wrong system, can I fix it?”]. The concerns implicated in the advancement of AI technology can also be classified as short-term and long-term.
Short-term concerns center on autonomous vehicles, including civilian drones and driverless cars. During an emergency, for instance, a self-driving car may have to decide between a small chance of a serious accident and a large risk of a minor one. Other concerns relate to lethal intelligent autonomous weapons, and extend to privacy as AI becomes increasingly able to interpret large volumes of surveillance data. Another hot topic of discussion is how best to manage the economic impact of jobs displaced by AI.
Long-term concerns echo the words of Microsoft’s research director, Eric Horvitz, and may be referred to as the ‘control problem’: the possibility of the emergence of a hazardous superintelligence and the occurrence of an ‘intelligence explosion’.
I know what you’re thinking: this sounds like something out of a Star Wars movie, a hypothetical with as much chance of happening as aliens taking over or the earth crashing into the sun. Sure, it may just be hypothetical, but let’s break it down a little further to get a better understanding of what all this sci-fi-sounding nonsense really means.
It all boils down to what is known as the technological singularity: the hypothesis that the creation of artificial superintelligence will suddenly set off runaway technological growth, resulting in inexplicable changes within human civilization. According to this hypothesis (and Wikipedia), an upgradable technological agent, such as a computer running software based on artificial general intelligence, would enter a runaway reaction of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly. The outcome is an intelligence explosion: a powerful superintelligence that would surpass all human intelligence.
The singularity is primarily manifested in two ways. The first is superintelligence, a hypothetical agent whose intellect far surpasses that of the brightest and most gifted human minds in practically every field, from scientific creativity to general wisdom to social skills. The second is an intelligence explosion, which is a function of superintelligence: the possible outcome of humanity building artificial general intelligence (AGI) capable of recursive self-improvement, leading to the rapid emergence of artificial superintelligence (ASI), the limits of which are unknown. Eventually, the recursion of self-improvement cycles would spawn a mechanical intelligence that is better at developing its own internal functions. It can thus rewrite or modify itself by changing its own coding instructions.
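The recursive self-improvement idea can be illustrated with a toy model. The growth rates here are arbitrary assumptions chosen only to show the runaway shape: each cycle the agent boosts its capability, and the boost itself grows because the agent has become better at improving itself.

```python
def explosion(level=1.0, gain=0.1, cycles=10):
    """Toy model of recursive self-improvement: the gain itself
    grows each cycle because the agent rewrites its own improver."""
    history = [level]
    for _ in range(cycles):
        level += level * gain   # self-improvement step
        gain *= 1.5             # the improver improves (assumed rate)
        history.append(level)
    return history

trajectory = explosion()
```

The trajectory is strictly increasing and accelerating; the point of the hypothesis is that, unlike Moore’s steady doubling, nothing inside the loop caps the rate.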
Algorithms of this self-improving sort are grouped under the term ‘machine learning’, which we saw earlier with Google’s TPU server chip. Since the TPU represents a fast-forward of three generations, it also qualifies as rapid technological advancement. Together, these technologies meet the criteria for an intelligence explosion, paving the way for the inadvertent emergence of artificial superintelligence. We are walking a dangerous road with AI, and it is important to understand the risks involved.
So, go ahead, if you happen to have an Amazon Echo, tell Alexa you want the truth and see what she has to say. Keep in mind, she’s smarter than she looks and she’s getting smarter every day.
“Alexa, I want the truth.”
“You can’t handle the truth.”