The question, as I understand it: how credible do researchers consider the threat of a strong AI emerging and immediately doing bad things, uncontrolled? I'm not sure if that's specifically asking "given strong AI will happen, how likely are bad things to happen immediately?" This post is mostly about just the "strong AI" premise.
I agree with the others that, in the near term, very few AI people see strong AI as a credible risk. But looking decades into the future, a large majority do think it's a significant risk. However, I think (1) they tend to be very uncertain, even confused, about issues they perceive as decades away, and (2) they're unsure how much sense it makes to start preparing for strong AI now (as folks like Elon Musk, MIRI, and OpenAI advocate).
I'm aware of several surveys that try to gauge "expert opinion" on the risk of strong AI, such as this one, which says:
Spoiler :
Abstract: There is, in some quarters, concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development, and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
Note that it does try to address the nefarious superintelligence part of your question. And here's a good figure from the survey:
Spoiler :

And a figure from a newer survey, which found similar results:
Spoiler: shoutout to that confidence interval shading they made with Python

The light gray lines show the results from questions about specific jobs (e.g., truck driver, surgeon).
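As an aside, if anyone wants to make that kind of plot themselves, the shading is presumably matplotlib's fill_between (or something like it). A minimal sketch with made-up numbers (a toy S-curve, not the survey's actual data):

Code:
# Confidence-interval shading with matplotlib's fill_between.
# The data here are made up (a toy S-curve), NOT the survey's numbers.
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(2016, 2116)                    # hypothetical forecast horizon
median = 1 / (1 + np.exp(-(years - 2060) / 12))  # toy aggregate P(strong AI by year)
lower = np.clip(median - 0.2, 0, 1)              # toy lower bound of the interval
upper = np.clip(median + 0.2, 0, 1)              # toy upper bound of the interval

plt.plot(years, median, color="tab:red", label="aggregate forecast (toy)")
plt.fill_between(years, lower, upper, color="tab:red", alpha=0.2,
                 label="interval (toy)")
plt.xlabel("Year")
plt.ylabel("Probability of strong AI")
plt.legend()
plt.show()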
It's worth pointing out, though, that the second survey found that asking the same question about strong AI with different wordings and framings produces very different results in aggregate, with timeframes varying by many decades.
Editorializing a bit here: I'm doubtful "AI expert opinion" on things 50 years out is even very meaningful. The typical AI PhD is an expert in several very specialized domains and isn't equipped to make good guesses about where the field will be in 5 years, let alone 50. Not that I am; pretty much no one is. And I'm sure they're fully aware it's hard to take a stab at questions about strong AI. Still, ~1/3 of them will at least try when asked, probably with a shrug and some mild confusion or discomfort.
One more point: it seems to me the "top experts" lean bearish on strong AI. At the risk of cherry-picking, Yann LeCun, Andrew Ng, and Yoshua Bengio have all been quite skeptical that we'll have strong AI before sometime around the end of the century. That said, LeCun at least is sympathetic toward the idea of preparing for strong AI now.
In short, I think the consensus agrees with John McCarthy, who used to joke that we'll have human-level AI "somewhere between 5 and 500 years from now."