What happens when you put brain chips and AI together? A very complex
challenge, and most likely a series of unintended consequences.
But that is what we have this week, as two famous intellectuals made headlines: one brought brain chips, the other brought the AI dip. What do I mean?
Elon Musk is no stranger to this blog. Behind the posts on afternoon naps and night-time soporifics runs a thread of preparedness for the future, and Musk is one of those with a finger on the pulse of a future that is rapidly hurtling toward our present. He is now advocating for the entire human race to be brain chipped. At the Code Conference 2016, he explained the problem of AI advancement leaping far ahead of human intelligence, which leads to at least two scenarios. The best-case scenario he calls the house cat: our intelligence, compared to the machine’s, would render us pets to AI beings. The worst-case scenario is that we are completely wiped out as a species, if not completely enslaved. His solution? A digital AI layer added to the brain, creating symbiosis with machines. “If you think about it,” Musk explains, “you’ve got your limbic system, your cortex, and then a digital layer, a third layer above your cortex, that would work symbiotically with you.”

The problem with human intelligence is that we face serious input/output constraints compared with AI. “We’re already cyborgs,” Musk explains. “You have a digital version of yourself online with social media and email; and you have basically superpowers in your computer and phone. . . . The constraint is input/output. Your output level is so low; we are reduced in our output to two thumbs tapping on our phone screens.” Merging with digital intelligence through that third AI layer, the “neural lace,” would ease the input/output constraints and give human intelligence a chance to grow somewhat commensurate with AI. “It’s easy,” he concludes, “you could insert something basically into your jugular . . .”
Nick Bostrom is the one bringing the AI dip to the party. His claim to fame is his New York Times best-selling book, Superintelligence: Paths, Dangers, Strategies, which has been recommended by Bill Gates and, surprise surprise, Elon Musk. (Incidentally, Musk gave Bostrom’s Future of Humanity Institute £1m.) The Future of Humanity Institute is a research institute at the University of Oxford (similar, it seems, to Cambridge’s Centre for the Study of Existential Risk) “established a decade ago to ask the very biggest questions on our behalf. Notably,” according to the Guardian, “what exactly are the ‘existential risks’ that threaten the future of our species; how do we measure them; and what can we do to prevent them?”
Bostrom’s book, as the title suggests, lays out what he believes is the greatest threat of all, again quoting the Guardian: “the possibly imminent creation of a general machine intelligence greater than our own.” The problem with this scenario is that it involves, now quoting Bostrom, an “intelligence explosion” that will take place when machines orders of magnitude more intelligent than us begin to design machines of their own. Bostrom explains: “Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb . . . . We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.”
Bostrom’s book coincided with the open letter signed by more than 1,000 top scientists and presented at last year’s International Joint Conference on Artificial Intelligence.
So what’s Bostrom’s solution? Cryonics, for one. His other solution is called ‘maxipok’, based on the idea that the “objective of reducing existential risks should be a dominant consideration whenever we act out of an impersonal concern for humankind as a whole.” Maxipok means “maximizing the probability of an OK outcome, where an OK outcome is any that avoids existential catastrophe.”
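For the formally inclined, maxipok can be sketched as a simple decision rule. This is my own gloss, not Bostrom’s notation: given a set of available actions $A$,

$$a^{*} = \arg\max_{a \in A} \Pr(\mathrm{OK} \mid a),$$

where “OK” is any outcome that avoids existential catastrophe. On this reading, fine-grained improvements among OK outcomes matter far less than not tipping into catastrophe at all.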
In essence, getting back to Musk, becoming AI house cats . . .