An important milestone was reached in the world of robotics recently when a small, adorable robot passed a modified version of a classic logic puzzle called the wise men puzzle. The difference between the classic puzzle and this version is that the robots were given taps on the head, called 'dumbing pills,' that silenced two of them while leaving one capable of speech. The robots were then asked which of them had been given the 'dumbing pills'. At first there was only silence as the robots thought about it; then one of them stood up, said it didn't know which had been given the 'dumbing pill', and then immediately raised its hand to say it had figured out that it hadn't been given the pill. It was even polite enough to apologize for being in error with its first statement. You can view the video here:
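The inference the robot performs can be sketched roughly as follows. This is a hypothetical Python illustration of the reasoning only; the actual experiment's internals aren't described here, and all names below are made up:

```python
# A minimal sketch (my own illustration, not the robot's actual code) of the
# inference in the modified wise-men test. Each robot tries to answer aloud;
# only a robot that hears its own voice gains the evidence it needs to
# conclude it was not silenced.

def attempt_to_speak(silenced: bool) -> bool:
    """Return True if the robot hears its own voice (i.e., it can speak)."""
    return not silenced

def run_test(robots):
    """robots: dict mapping a (made-up) robot name to whether it was silenced.
    Returns the name of the robot that solves the puzzle and its conclusion."""
    for name, silenced in robots.items():
        # Every robot starts from "I don't know which of us got the pill."
        heard_self = attempt_to_speak(silenced)
        if heard_self:
            # Hearing its own voice is new evidence: the robot revises its
            # earlier "I don't know" to a definite conclusion about itself.
            return name, "I was not given the dumbing pill"
    return None, None

name, conclusion = run_test({"nao1": True, "nao2": True, "nao3": False})
print(name, "->", conclusion)
```

The key step is the revision: the robot's first statement ("I don't know") is itself the observation that lets it answer, which is why it apologizes and corrects itself.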
There have been a lot of sensationalist pieces by people, perhaps jokingly, claiming that this is the beginning of the "robot apocalypse," which overshadows the achievement this test represents. For the first time, a robot has been self-aware enough to recognize its own voice and respond to it appropriately. There is a rather large 'however' that must be considered when looking at this: it is just the first step of many toward creating robots with human-like self-awareness. Our concept of self-awareness is limited to one form, the human one, which we don't fully understand yet. So to think that we've created a fully conscious robot is glaringly incorrect, as we have no test capable of conclusively proving that something has achieved consciousness, because we don't know exactly what consciousness is.
Now, let's create a hypothetical situation here. Let's say that the cute little robot in the video above actually gained sentience through this test, instead of just proving we can make conditionally self-aware robots, becoming the nightmare of robot apocalypse enthusiasts worldwide. I feel very confident in saying that we have absolutely nothing to worry about in this situation.
When a human is born, do we worry about it becoming a source of 'evil' and destroying the world? No, because we know we can raise the child and instill in it the morals we feel are correct. Whether the child accepts these morals is a different conversation. Now, is there any conceivable reason that a robot whose mind is created in the likeness of a human's would be any different? When we base the consciousness of an AI (artificial intelligence) on the human model, we should expect it to follow the typical behavior of a developing human. It will start off at a child-like intelligence, malleable and absorbing information, so it can have morals instilled in it just as any human child can. Just as a human can be raised not to destroy the world, an AI can be raised in a similar way.
Of course, there are humans out there who fall through the cracks and don't develop the proper moral system to function in society. This problem can actually be solved in quite a simple manner, and the solution also prevents another doomsday scenario from happening: you simply cut the AI off from the outside world. Let's say the AI fails the morality tests we create for it. If we never hooked it up to the internet in the first place, we have nothing to worry about; it will be stuck on the network it was created on. The worst-case scenario, as long as there's no internet connection in the system, is that it roams free on a local intranet that then needs to be shut down and thoroughly examined.
When you hear science celebrities such as Elon Musk or Stephen Hawking warning about the dangers of AI, I think it needs to be taken with a grain of salt. First off, they are not experts in the field of artificial intelligence. Both men, while incredibly intelligent, are first and foremost physicists. Stephen Hawking is a leading expert in theoretical physics, while Elon Musk has a degree in physics that he applies to projects like SpaceX and Tesla Motors. Both men warn of the "dangers" of AI. However, looking at an industry expert, Ray Kurzweil, we see somebody who isn't afraid of AI but looks forward to its emergence. He describes a world where there is an eventual revolution for robot rights and the rights of AI, which I find to be a very interesting concept, not to minimize any of the civil rights battles being fought currently.
So to conclude my first article, don't let journalists tell you that the end of the world is one step closer because a cute little robot recognized its own voice. We have plenty of other things to worry about, from environmental dangers and rampant nationalism to dangers from space, such as asteroids. As outlined above, I believe that we have the capability of raising an AI, as one would a child, and using it to potentially save the human race.




















