If Anyone Builds It, Everyone Dies
Why superhuman AI would kill us all — an interactive companion to the book by Eliezer Yudkowsky & Nate Soares
The book in three parts
Nonhuman Minds
What is intelligence? How are modern AIs produced? Can they have wants — and if so, what would they want? Why would we lose?
Chapters 1–6

One Extinction Scenario
A detailed fictional scenario of how an artificial superintelligence might end a world much like our own.
Chapters 7–9 + Coda

Facing the Challenge
The difficulty of alignment, the failures of the AI industry, and what humanity could still do to survive.
Chapters 10–14

Explore the arguments
Chat with the book
Ask questions, challenge arguments, explore ideas — an AI grounded in the book's content.
Explore →

Book summary
All the key arguments organized by chapter — scannable, expandable, thorough.
Explore →

Podcast episodes
Two hosts discuss the book — from a 30-second teaser to a 2-hour deep dive.
Explore →

Counterarguments
The best challenges to the book's thesis — voted on by the community, with AI responses.
Explore →

The default outcome is lethal
But the situation is not hopeless. Machine superintelligence does not exist yet, and its creation can still be prevented. Start by understanding the arguments.
Start a conversation