If Anyone Builds It, Everyone Dies

Why superhuman AI would kill us all — an interactive companion to the book by Eliezer Yudkowsky & Nate Soares

The default outcome is lethal

But the situation is not hopeless. Machine superintelligence doesn’t exist yet, and its creation can still be prevented. Start by understanding the arguments.
