http://futureoflife.org/

There's lots of talk about whether we'll eventually get outsmarted by AI. 

* If superintelligence arrives, what would you like to happen to humans?

Max Tegmark's book Life 3.0 explores twelve scenarios for what might happen in the coming millennia if superintelligence is or isn't developed. Please rate how desirable you find each one.

* Libertarian utopia: Humans, cyborgs, uploads and superintelligences coexist peacefully thanks to property rights.

* Benevolent dictator: Everybody knows that the AI runs society and enforces strict rules, but most people view this as a good thing.

* Egalitarian utopia: Humans, cyborgs and uploads coexist peacefully thanks to property abolition and guaranteed income.

* Gatekeeper: A superintelligent AI is created with the goal of interfering as little as necessary to prevent the creation of another superintelligence. As a result, helper robots with slightly subhuman intelligence abound, and human-machine cyborgs exist, but technological progress is forever stymied.

* Protector god: An essentially omniscient and omnipotent AI maximizes human happiness by intervening only in ways that preserve our feeling of control over our own destiny, and it hides well enough that many humans even doubt its existence.

* Enslaved god: A superintelligent AI is confined by humans, who use it to produce unimaginable technology and wealth that can be used for good or bad depending on the human controllers.

* Conquerors: AI takes control, decides that humans are a threat/nuisance/waste of resources, and gets rid of us by a method that we don't even understand.

* Descendants: AIs replace humans, but give us a graceful exit, making us view them as our worthy descendants, much as parents feel happy and proud to have a child who's smarter than them, who learns from them, and then accomplishes what they could only dream of—even if they can't live to see it all.

* Zookeeper: An omnipotent AI keeps some humans around, who feel treated like zoo animals and lament their fate.

* 1984: Technological progress toward superintelligence is permanently curtailed not by an AI but by a human-led Orwellian surveillance state where certain kinds of AI research are banned.

* Reversion: Technological progress toward superintelligence is prevented by reverting to a pre-technological society in the style of the Amish.

* Self-destruction: Superintelligence is never created because humanity drives itself extinct by other means (say, nuclear and/or biotech mayhem fueled by a climate crisis).

* What future do you want?

* Please feel free to add any other thoughts here that weren't adequately captured by the questions above.

* Which of the following recent AI-related books have you read?

* Optional: What is your name? (It won't be publicly shared.)

* Would you like to subscribe to the monthly newsletter?

* What is your email address?
