We are living in a world that is moving very quickly toward a future we have little idea how to cope with, manage, or even prepare for. Many of us are too locked into our 8-to-6 routines, our sports entertainment, and various forms of consumption to take the time to read and try to comprehend where we're heading as a civilization--but so much change is happening, for many of us, not before our eyes but rather under our noses.
Hugo de Garis believes the Artilect War (aka Terminator) is highly plausible.
Terminator Genisys is the fifth film in the Terminator series, laying out a scenario of man versus machine, or what brain scientist Hugo de Garis has academically called the Artilect War: a scenario that, for him, presents a dystopian view of artificial intelligence (AI) in which machines go to war with humans and, as if the latter were fighting gods, destroy them. This scenario is not implausible, nor is it the only one; and it is not entertained merely by fringe academics. In 2012, the University of Cambridge launched the Centre for the Study of Existential Risk, which brings together researchers from all over the world to study and create plans to avoid extreme technological risks that threaten to destroy our entire civilization.
Bill Joy has long voiced concern about whether certain kinds of scientific questions should be pursued at all, and he shares Elon Musk's concern about the destructive potential of AI.
And just this week, Elon Musk and the Future of Life Institute awarded $7M to 37 research groups to create solutions that keep AI working for the benefit of humanity. According to the Future of Life Institute, "The program launches as an increasing number of high-profile figures including Bill Gates, Elon Musk and Stephen Hawking voice concerns about the possibility of powerful AI systems having unintended, or even potentially disastrous, consequences." Here's how the projects break down:
- Three projects developing techniques for AI systems to learn what humans prefer from observing our behavior, including projects at UC Berkeley and Oxford University
- A project by Benja Fallenstein at the Machine Intelligence Research Institute on how to keep the interests of superintelligent systems aligned with human values
- A project led by Manuela Veloso from Carnegie Mellon University on making AI systems explain their decisions to humans
- A study by Michael Webb of Stanford University on how to keep the economic impacts of AI beneficial
- A project headed by Heather Roff studying how to keep AI-driven weapons under “meaningful human control”
- A new Oxford-Cambridge research center for studying AI-relevant policy
The future of AI is upon us. Many people believe it's still so far off that we needn't worry about it. However, people like Musk, Gates, and Hawking are no intellectual slouches; if they're voicing concern and creating contexts for researchers to prepare now, then there must be a real level of risk and urgency.
Interview with Elon Musk: Artificial intelligence is humankind's greatest threat.
The Terminator scenario is one among a complex scribble of myriad others. What some worry about is that the Terminator movie is taken as the only scenario, which skews popular opinion and even truncates research. The other unintended consequence of the movie is that people take it as mere fiction, and do not see it as an important 21st-century myth that projects a plausible scenario onto our future. Moreover, some believe that movies like Terminator are actually used to preempt the real thing.
The truth of the matter is, we should as a populace be watchful of where our technology is taking us, and work with those who are seeking to put in place responsible policies for the careful development of these technologies. However, it could very well be the case that we've already put the future in motion and will be unable to control its unintended consequences--the technology is already emerging, and there is little we can do to stop it. Musk and others realize this, and are making attempts to do something about it. Let's hope it's not too late...