Monday 14 December 2015

Can Elon Musk Really Save The World From Killer Robots?




Elon Musk is at it again: a $1 billion investment in a non-profit research company launched to design and build humane AI, that is, AI that won't try to wipe out the human race.

This news comes on the heels of Musk's other recent claim, in GQ Magazine, that we need to colonize Mars before we nuke ourselves. A World War III scenario, with the kinds of sophisticated weaponry we have at our disposal, would set the human race back generations and bring the advancement of technology to a complete halt. Musk urges us to get to Mars and colonize it before we run out of time on Earth. We back up our important documents on the computer; maybe, according to Musk, we need to back up human life as well...



Mars can be heated up by dropping thermonuclear weapons on its poles...


This is the context in which Musk speculates about the rise of human-hostile AI: machines that surpass human intelligence and thereby subjugate us. According to Musk, it's plausible to create friendly AI, AI that would be benevolent toward humans, and he's putting $1 billion where his mouth is. OpenAI's claim is that it's hard to predict when AI will reach human-level intelligence, but when that happens, it'll be important to have a team of people in place to... well, that's where the ambiguity sets in. To do what? The website says there will be people publishing papers and pursuing priorities that will benefit humankind, but it doesn't indicate exactly how.

But is this all too late? Has technology not already advanced beyond our control? These, of course, are very smart people; still, this seems to be a case of known unknowns, for have we not already seen instances of AI that can reject a human command at will? Have we not already crossed the threshold of conscious AI that will be able to make its own rules for the purpose of self-preservation? As we've seen, the main argument of James Barrat's book Our Final Invention is that once AI reaches the point of subverting human commands, the game's over for us, because AI will then achieve self-consciousness and autonomy. And when this happens, and the machines come to see us as a threat, they will do everything within the power of their emergent superintelligence to wipe us out. Then we'll be dealing with 'beings' more powerful and intelligent than ourselves.

I think Musk and the other funders of OpenAI are very aware of this scenario, which is why, amidst Mars colonization, hyper-sophisticated electric cars, and privatized space travel, he is raising serious cash to--somehow--create benevolent AI.

But is it too late? 

Maybe it's time to boot up that rocket, nuke the poles of Mars, and get some architectural plans together for the colony....

