Stephen Hawking, Elon Musk, and a host of others--notably Steve Wozniak (co-founder of Apple), Google DeepMind CEO Demis Hassabis, Professor Noam Chomsky, and Google Director of Research Peter Norvig--have signed a letter calling for a ban on AI warfare, specifically weapons operated by autonomous AIs. The letter was presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina. It particularly warned against countries engaging in an AI arms race, the result of which could spell the end of the human race.
What is telling is that the signatories maintain that such technologies are years away from deployment, not decades as the general public assumes, as the following excerpt from the letter itself makes clear:
"Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms."
This should give us tremendous pause, and we should watch the development of these systems with keen eyes and ears. We know that Elon Musk has maintained for some time that AI poses a tremendous risk to the human race, while Stephen Hawking has claimed that AI could spell the end of humankind.
Where things get murky is when many thought leaders in this area push for a higher level of human consciousness and human/machine evolution, called the Singularity. The warnings thus become the platform upon which embedded biotechnologies become acceptable--technologies designed to enhance the human brain so that we stay intellectually on par with AIs. The problem with these technologies is that they will foist greater forms of social control onto us--a tenfold greater set of capacities than your iPhone, only embedded in your brain.
So while we should get behind Musk, Hawking, and the others in warning against the dangers of advanced AI military technologies, we should be cautious about fully adopting biotechnologies that radically call into question what it means to be human while providing no real advantage over advanced AI.
These are all part of understanding the risks we live with. The more vigilant we are, the more prepared we are, and the better able we are to make thoughtful decisions. The world is being designed by itself; our technologies are taking on a life of their own. We must take such warnings seriously.