Sleep Better At Night Knowing Skynet Is Being Studied


Any good geek knows that “Terminator” isn’t just a great sci-fi story; it’s also a cautionary tale about computers with too much intelligence running amok. It’s all fun and games until Twitter and Tumblr get ahold of the nuclear codes. Instead of letting this fear live purely in the “what if” realm, researchers at the University of Cambridge have formed the “Cambridge Project for Existential Risk” to analyze how to prevent a robot uprising.

From their website:

Many scientists are concerned that developments in human technology may soon pose new, extinction-level risks to our species as a whole. Such dangers have been suggested from progress in AI, from developments in biotechnology and artificial life, from nanotechnology, and from possible extreme effects of anthropogenic climate change. The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake. (For a brief introduction to the issues in the case of AI, with links to further reading, see this recent online article by Huw Price and Jaan Tallinn.)

The Cambridge Project for Existential Risk — a joint initiative between a philosopher, a scientist, and a software entrepreneur — begins with the conviction that these issues require a great deal more scientific investigation than they presently receive. Our aim is to establish within the University of Cambridge a multidisciplinary research centre dedicated to the study and mitigation of risks of this kind. We are convinced that there is nowhere on the planet better suited to house such a centre. Our goal is to steer a small fraction of Cambridge’s great intellectual resources, and of the reputation built on its past and present scientific pre-eminence, to the task of ensuring that our own species has a long-term future. (In the process, we hope to make it a little more certain that we humans will be around to celebrate the University’s own millennium, now less than two centuries hence.)

This sounds amazing for a few reasons. One, it will help build a code of ethics for artificial intelligence. Even if you don’t think there is a serious risk of robots nuking us tomorrow, there are plenty of other knotty issues, and they can’t all be boiled down to Isaac Asimov’s Three Laws of Robotics. Two, this is why we have philosophers; at the intersection of science, ethics, computers, and the modern world, you want a philosopher there to mediate the discussion. Philosophy is all about understanding right action and our place in the world, which makes it incredibly important in deciding questions of artificial intelligence. Three, where was this project when I was majoring in philosophy???

If this subject has caught your attention as much as it has caught mine, check out the Centre for the Study of Existential Risk. And be sure to read their article at The Conversation, which lays out the main debates around artificial intelligence and offers resources for further reading. I know what I will be browsing this weekend!

Via the BBC

 



About the Author

Zek
Zek has been a gadget fiend for a long time, going back to their first PDA (a Palm M100). They quickly went from researching which PDA to buy to following tech news closely and keeping up with the latest and greatest stuff. They love writing about ebooks because they combine their two favorite activities: reading anything and everything, and talking about fun new tech toys. What could be better?