I’ve met Eliezer Yudkowsky
a number of times at various conferences. I once joked with him
about his search for “friendly AI”. He has often talked about the
possible rapid emergence of a super-intelligence, and how we will want
to be involved to ensure our survival.
In my mind, if there is a super-intelligence that emerges, and it
chooses to neglect humans or to allow for our extinction, then isn’t
that the “super-intelligent” thing to do? C’mon Eliezer, it’s a
super-intelligence … it only does super-intelligent things! If
it thinks that humans are irrelevant … well, it's super-intelligent, so
we must be! 😉
Asimov’s Three Laws of Robotics unsafe?
The Singularity Institute for Artificial Intelligence today launched
its “3 Laws Unsafe” Web site — timed for the July 16 release of the
f… [KurzweilAI.net Accelerating Intelligence News]