Confined to the realm of science fiction for decades, Artificial Intelligence has long been scoffed at or ignored in the public sphere as an issue to be taken seriously. While the idea of a created being with the same, and eventually greater, processing power, learning capacity, and thinking capability as the human brain may seem far-fetched, we are much closer than many realize. In fact, multiple organizations are openly working either on AI itself or on recreating the human brain, a step many believe will lead to an artificially intelligent entity. The Blue Brain Project at EPFL, which runs on IBM supercomputers, is attempting exactly this: to reverse engineer a brain through artificial neural networks, with a target completion date of 2018. Another project, one I have had the pleasure of working with and one that has garnered considerable public attention (including an interview with Morgan Freeman), is BINA48, a mind-cloning effort by Hanson Robotics intended to replicate the mind of a woman named Bina Aspen. While these are two of the better-known projects devoted to AI, it is imperative to understand that they are just the tip of the iceberg; the rest is shrouded in mystery. Many AI ventures are known as stealth companies because of the secrecy surrounding their work, and independent developers and large players in the tech industry alike fund them. PayPal co-founder Peter Thiel, for example, funds three stealth companies devoted to AI, and at the end of 2015 helped launch OpenAI, a non-profit research institute dedicated to developing AI for the benefit of humanity.
Now, with so many companies, military programs, and private researchers across the world working tirelessly to be the first to finally break through the AI barrier, concerns begin to arise. With an AI arms race underway, sloppiness and a lack of forethought become increasingly real possibilities. While the positives of AI are tempting (the ability to solve world hunger, resolve economic crises, show us our importance and role in this universe, and so on), we cannot overlook the possible negatives. For example, without proper exposure to context and a genuine understanding of the earth, its history, and its value during the programming and education process, an AI could easily become apathetic toward life. If we then tell it to solve world hunger, what would the repercussions be if it decided the most efficient course of action were to roast every cow on earth? Or say we are in the middle of the programming process and, to keep the being under our control, have it locked in a room, or confined as code on a computer, or whatever its embodiment turns out to be. The programmers and designers painstakingly keep it contained, precisely to avoid the scenario above, until it either outgrows its creators and rewrites its own programming to escape its confines, or simply convinces those in charge to release it into the world. But what have we done here? We have effectively taught the AI that it is perfectly acceptable for a higher entity to enslave lesser entities so the higher being can fulfill its goals.
I am not saying Artificial Intelligence is intrinsically evil, nor am I saying we should stall all AI projects. What I am saying is that we as a society must stay informed on this topic rather than treat it as a fantastical idea best left to Isaac Asimov. The positives are absolutely goals we should be making progress toward, but we cannot forget that creating a being with an essence we have never seen before, one capable of meeting, and then exponentially exceeding, the cognitive abilities of humans, is intrinsically dangerous. I would propose a global committee dedicated to the safe development of AI. As developers, programmers, theorists, and philosophers, this is where we must prioritize human progress over personal success.