This project aims to map the range of alternative future scenarios arising from the development of advanced artificial intelligence. The goal is to cover the full spectrum of risks, from minimal to existential, and to highlight structural risks that receive less attention.

Rather than a conventional survey, this exercise asks you to rate each “condition” (an element of a possible scenario) on its potential impact on society and its plausibility. The results will feed a general morphological analysis (GMA) model to outline the scenario space, identify relationships between dimensions, and present alternative futures and governance strategies. A second, more detailed iteration with domain experts will refine the space.

The research is for my MS thesis, which focuses on AI risk and governance. I intend to seek publication and disseminate the results to appropriate audiences.

It shouldn't take more than 10 minutes. All questions are multiple choice (drop-down menus), and responses will remain completely anonymous. For impact question clusters that are all positive, all negative, or all neutral, please leave the answer at neutral or pick the "most" or "least" of the group.

Thank you very much for your participation! 
(General timeframe: 2045–2100, though ultimately timeline-agnostic)

If you'd like further details on methods, definitions, or purpose, see below.

Details on measurement, assumptions, definitions, and purpose
How values are measured:

1) Impact. Impact ratings identify which "condition" of each dimension could have the most positive or negative outcome for civilization, on a scale from "high positive" through no change to "high negative." Because "positive" and "negative" impact carry normative aspects, for question clusters that are entirely negative, positive, or neutral, please choose the most or least of the group.
2) Likelihood. For likelihood, think in terms of "plausibility" rather than strict probability, as these are highly uncertain conditions (very unlikely = 5–20%, unlikely = 20–40%, even chance = 40–60%, likely = 60–80%, very likely = 80–95%). For example, given your domain knowledge, do you believe this condition is "very likely," "likely," "even chance," or "very unlikely" to occur?
Assumptions:
AI will continue to develop and receive investment, and there will be limited economic disruption or other global catastrophes. For several questions on capability, race dynamics, developer, and location, the assumption is that transformational AI will happen (or has happened).
"Advanced AI" will be defined as equivalent to "transformational AI" or "high-level machine intelligence" (HLMI). I'm defining this as a cluster of capabilities on a spectrum from transformational AI systems (perhaps not at human-level generality) to human-level AGI and superintelligence. 
Note on methods:
The data will be used in a novel morphological model (general morphological analysis, or GMA) to outline the scenario space and map relationships and influence between each dimension and condition. GMA has not been applied to AI risk in previous studies, so this should be a valuable addition to the field.
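To illustrate the method (this is a sketch of GMA in general, not the thesis model itself), a morphological field can be treated as the Cartesian product of each dimension's conditions, pruned by pairwise cross-consistency judgments. The dimension names, conditions, and exclusions below are hypothetical examples, not the survey's actual dimensions.

```python
from itertools import product

# Hypothetical dimensions and conditions (illustrative only,
# not the actual dimensions used in this survey).
dimensions = {
    "capability": ["narrow", "transformational", "AGI"],
    "developer": ["private lab", "state", "open collective"],
    "governance": ["none", "national", "international"],
}

# Pairs of conditions judged incompatible in a (hypothetical)
# cross-consistency assessment.
inconsistent = {
    ("AGI", "none"),  # e.g., AGI with no governance ruled implausible
}

def scenario_space(dims, exclusions):
    """Enumerate all configurations, dropping any that contain
    an excluded pair of conditions (in either order)."""
    names = list(dims)
    for combo in product(*dims.values()):
        pairs = {(a, b) for a in combo for b in combo if a != b}
        if not (pairs & exclusions):
            yield dict(zip(names, combo))

scenarios = list(scenario_space(dimensions, inconsistent))
print(len(scenarios))  # 27 raw combinations minus the 3 excluded ones → 24
```

The cross-consistency step is what keeps GMA tractable: each pairwise exclusion removes a whole slice of the raw product, so the surviving configurations are the internally consistent scenarios worth analyzing.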

* 17. How familiar are you with AI safety or existential risk?

* 18. Do you currently work, or have you previously worked, in AI safety?

* 19. How familiar are you with AI governance?

* 20. Please leave any comments or suggestions. Thank you!