Science fiction films, stretching from "2001: A Space Odyssey" and "Blade Runner" to more recent entries like "Her" and this year’s "Ex Machina," each take their turn painting a portrait of artificial intelligence that reflects the hopes, anxieties and biases of their era. Portrayals range from the psychotic HAL 9000 to "Ex Machina’s" Ava, bent on escaping her grim prison, to "Her’s" disembodied voice falling in love with the film’s lead.
Despite the plethora of depictions on film, actual experts can’t seem to agree on the future of artificial intelligence. The debate centers on whether an intelligent machine could pose a threat to mankind, now that the emergence of this technology is no longer confined to abstract science fiction.
Very intelligent people have argued fervently on both sides. Stephen Hawking’s stance has sparked concern about the advent of A.I. He is quoted as saying, “The development of full artificial intelligence could spell the end of the human race.”
Physicist Max Tegmark falls somewhere in the middle, having said, “On the one hand, it could potentially solve most of our problems, even mortality. On the other hand, it could destroy life as we know it and everything we care about.”
Others dismiss the idea that A.I. poses a serious threat. Among them is astrophysicist Neil deGrasse Tyson, who believes we shouldn’t worry unless A.I. is given emotions.
An article from Entrepreneur attempts to lay out the reasons why A.I. is nothing to fear. Author Tim Oates claims there are four criteria of intelligence that, if met, would warrant concern, but that are improbable. For A.I. to endanger humanity, it would need to be capable of self-distinction, desire something incompatible with our existence, have a plan that would result in our deaths, and have the power to enact that plan.
However, Oates’s assessment assumes that a potential A.I. would harbor malicious intent toward people. Realistically, all an intelligent robot would need to slip out of our control is competence and needs that conflict with our own.
If, like humans, intelligent robots continue to improve their cognitive capacities, those capacities will surely surpass our own. In his book Superintelligence, Nick Bostrom argues that the projected risks of A.I. gone wrong are reason enough to pause and think about the potential consequences. He writes,
It is an open question whether the consequences would be for the better or the worse. The potential upside is clearly enormous; but the downside includes existential risk. Humanity’s future might one day depend on the initial conditions we create, in particular on whether we successfully design the system (e.g., the seed AI’s goal architecture) in such a way as to make it “human friendly” — in the best possible interpretation of that term.
Although the topic of A.I. can seem remote, even outlandish to raise in everyday conversation, it may not be as far away as we think. The discussion of A.I. is now relevant science, and humanity’s future may well depend on it.