Recently, the physicist Stephen Hawking was quoted widely in the popular media. His comments came in response to audience questions during his first lecture of the 2016 Reith Lectures, organized by the BBC. The questions concerned the possibility of humanity causing its own destruction, which makes an apt lead-in to the second part of my series on how science is trying to save humanity.
Hawking has voiced concerns about various technologies humanity has developed, or may develop in the future, that could cause our destruction. Included on the list are artificial intelligence, genetically engineered biological weapons, global warming, and nuclear weapons.
The period from the present day to somewhere between 100 and 1,000 years from now is when most scientists agree humanity is at the greatest risk of self-destruction. All of the risks above are valid concerns, and it is important for humanity as a whole to be mindful of the dangers they pose.
The scientific and engineering community must be able to critically analyze the possible risks associated with any future technologies in development, and must also consider the long-term implications of these emerging technologies. This process of risk management and mitigation is a difficult prospect. A pertinent example is the Manhattan Project.
During World War II, the race to develop the atomic bomb was initially sparked by the belief that the Nazis were developing one themselves, a weapon that could have quickly turned the tide of the war against the Allied Forces. The Nazis abandoned their effort after a short while, dismissing the underlying physics as "a Jewish science." Nonetheless, the American project moved forward as a way to end the war.
A key figure in the project was J. Robert Oppenheimer. Oppenheimer joined shortly after the project began and was chosen to direct the laboratory at Los Alamos. He was instrumental in the effort to create the weapon and stayed with it until its completion. It is widely accepted that he strongly supported the development of the atomic bomb until he saw images of the damage and human casualties caused by the bombs dropped on Japan.
After the project ended, he served on a nuclear advisory committee for the United States government. After arguing against nuclear proliferation, he was accused of being a communist and stripped of his security clearance. His opposition to proliferation stemmed from (warranted) fears of sparking a nuclear arms race with the Soviet Union, the vastly greater power of hydrogen bombs, and the potential human cost of a nuclear war.
Oppenheimer, in some sense, felt guilty for not foreseeing the long-term implications of developing a nuclear bomb. The Cold War, at its most tense moments, came very close to sparking an all-out nuclear exchange, which could have caused the annihilation of humanity. Though the threat of nuclear war still exists, it is not as great a threat now as it was then.
The most pertinent threats to humanity's survival are the emerging technologies I mentioned earlier, along with global warming. Artificial intelligence is a credible threat to our survival, and the issue is inherent in the idea itself. How do you know when you have created a true form of artificial intelligence rather than a computer with clever programming? The danger lies in creating something you don't realize is true artificial intelligence. It is entirely possible that an artificially intelligent form of life could quickly outpace the intelligence of even the smartest human being and bring about our destruction.
Genetically altered viruses could spread across the globe faster than any "damage control" medical response could contain them. Though bans on their use and development exist, not all countries have signed the relevant treaties, and such weapons could easily be developed in secret.
Global warming is another pertinent threat, one that results from our long-term impact on the planet rather than from an emerging technology. Though alternative energy sources are beginning to take hold, some fear this is not enough to overcome the damage we have already done.
Hawking has been vocal about these issues in the past, and for good reason. As he recommends, the key is to be mindful of the potential dangers that various technologies may pose. But these dangers cannot be allowed to completely stand in the way of scientific advancement. Some technologies, like artificial intelligence, could also revolutionize the way we live, provided the proper precautions are taken.
Though there is inherent danger in advancement, the absence of progress could very well be a death sentence too. We must be mindful of the dangers while still striving to better our world and advance our technology; complacency may kill, but so may stagnation. Though the coming years will undoubtedly be trying for the human race, the conclusion is that our survival is very possible. It is not, however, assured.