Imagine a world where human-like robots have coffee together, conversing in a dialect vastly different from anything they were originally programmed to say. Though this image seems both ludicrous and daunting, it is important to remember that there are always two sides to every story.
Artificial intelligence shares a root with the word artifice, which means a cunning or clever device. When we think of the word artificial, the definition shifts slightly from artifice. While artifice describes the technology itself, artificial provides the context that humans control it. A bit counterintuitive, huh? This is the exact conundrum we face today when discussing artificial intelligence: humans control it by building and programming it, but the machinery eventually starts to control us.
Before I delve into the ethical complications and general benefits surrounding artificial intelligence, we must first define what artificial intelligence is. As described by the European Parliament, the concept of artificial intelligence dates back to the 1950s, when John McCarthy, at a conference with other scientists, sought to make machines intelligent. From that conference, artificial intelligence, or AI, was defined as the “capability of a computer program to perform tasks or reasoning processes.”
Though you may not readily recognize it, aspects of artificial intelligence are woven into consumer products such as Siri, self-driving cars, and even auto-correct. At the moment, researchers are primarily designing narrow, or weak, artificial intelligence: technology that handles a single specific task. The long-term goal for researchers is to develop general artificial intelligence, or general AI, which extends to machines that can outperform humans at every cognitive level. Scared or fascinated yet?
I will start with those of you who are fascinated. Artificial intelligence has honestly been designed to make our lives easier and more efficient. The benefits of this type of technology allow us to take large amounts of data and analyze it in seconds. This brings me to IBM’s well-renowned machine: Watson. It has the ability to take vast amounts of information, some of it even ambiguous, and use it to solve many of the world’s problems. The system contains a database of contracts and disclosures, but within a protective and private framework. An article in the MIT Technology Review relates the power Watson has. Paul Tang, a physician himself, was with his wife during her knee surgery. Afterward, he sought a final check-up with one of the surgeons about what to expect post-surgery, but the answers he received were vague. Of course, this is understandable because individuals are physiologically wired differently, but this restriction in human intellect is where AI comes in. Dr. Tang is one of the many people involved with the future of Watson. During his wife’s recovery, Watson relayed information about how long it would take for her to walk again without pain, because the machine had data on other patients similar to her in body type and situation. Furthermore, it can analyze images and tissue samples to evaluate the best treatments for a patient.
Though companies like IBM have the best intentions for moving AI forward, some machines are assuming a more negative role. According to the Future of Life Institute, there are reports of how AI could cause horrors on a mass scale. If programmed to kill, AI could cause mass casualties if mismanaged or abused. On the other hand, these AI machines may be difficult to shut off once they reach a cognition level higher than ours, so we may not be able to control them either. More recently, Facebook terminated its latest project involving human-like bots. Its researchers had designed programs, which they called chatbots, to hold a conversation with each other in English. The creators wanted the chatbots to negotiate with each other, and gradually the bots began to communicate in a language only they understood. The bots were not explicitly instructed to use comprehensible English, which gave them the liberty to develop their own language. How could the researchers tell whether the change was just a malfunction? Apparently, there were rules in the bots’ speech, such as stressing their own names during negotiation, and some of the negotiations were even carried out, so this could not simply be classified as a glitch. This is an example of the unpromising results AI can produce when out of human control. Though this situation was fairly small, imagine it happening on a larger, more violent scale. In some ways, such machines could potentially take control of the human race.
There are always two sides to every discussion, but there is only one truth: “With great power comes great responsibility.” While Spider-Man uses the phrase in a more self-identifying context, there is also an identity issue we face when it comes to artificial intelligence. What measures will we take to achieve efficiency and greater intellect? Why do we have to develop something smarter than ourselves? Are we not smart enough already, having developed it ourselves? Is artificial intelligence just a cover-up for what human beings truly idealize: perfection? In the recent public dispute between Elon Musk and Mark Zuckerberg, both extended their views on the progression of artificial intelligence. Zuckerberg sided with its rapid progression, while Musk focused more on the implications behind it. Whether you are on Zuckerberg’s side or Musk’s side, or if you just do not care, understand that artificial intelligence is our present and our future. Be aware of its advantages and risks, and try to form your own opinion about what direction this technology and vision should take.