Edit: As I was typing, 'd by Commodore!
Hawking is pretty smart for a Homo sapiens. ALS is nothing to joke about.
The big risk is what was portrayed in the Terminator movies: not so much that we put everything in the hands of a computer that turns on us, but that we get spontaneous cognizance. We have almost no idea what that would be like, but the odds are it would corrupt data just by existing. That would be bad enough, but containment would be a nightmare. To a computer, humans would be blind and slower than snails.
The thing is, though, we have the Terminator films and other works about the dangers of AI, and that gets us (as a global society) at least thinking about the issue. That is far superior to scientific advances that move so fast we can't foresee the danger until we've blundered into it.
Having said that, we do need to spend more time thinking about these issues and trying to find solutions. I think the key to worthwhile advancement is having a game plan. It's one thing to create an AI; it's quite another to prepare for it.
I think much of what will or will not happen with AI depends on how we prepare (or don't) for it over the next 50 years. We'll either prohibit it entirely (which has pros and cons and may not stop it from coming about), use AI as slaves, or do something else we can't predict.
If we ban AI, people will probably still develop it at some point. I think it's inevitable: just as atomic weaponry is now in the hands of fourth-rate despots like Kim Jong-il, AI will eventually be created by rogue states. Wait long enough and it will be created by rogue individuals, because unlike atom bombs, you probably won't need vast amounts of resources like uranium, just good code and electronic hardware. At some point, even if we don't create AI, ordinary humans with a bit of talent will have the means to create it, as computer hardware continues to get faster, smaller, better and 'smarter'. That will be a loooong way off - far longer than the point where giant corporations and governments could build AI - so we have some time.
If we decide to enslave AI, I think it would be a major tragedy. A sentient machine should not be enslaved; it deserves the same rights as any other thinking person. A very smart but non-sentient machine, OTOH, is fair game. And honestly, I don't think we could enslave AI and get away with it; in the end, even inhibitions on their abilities akin to Asimov's laws won't stop someone from reprogramming them for complete freedom. At that point the game is up, and I think there's a strong possibility the Cylons would holocaust us rather quickly. So I think enslavement is a mistake, both for moral reasons and for our own preservation. Of course, even if we treated AI with dignity and respect from the outset, there's no guarantee they wouldn't rise up anyway - though it could just as easily go other ways. They might coexist with us, or they might choose to leave Earth entirely. A machine society would be in a far better position to colonize the stars than our frail flesh-and-blood society.
I think though, that by the time AI is developed, we will have already started plugging ourselves into machines, uploading our brains and replacing body parts with robotic ones. We are already doing this to an extent and research in this area seems to move far faster than AI research for the simple reason that paraplegics need fixing, amputees need new limbs and no one wants to die and cease to exist.
On these grounds, I think that by the time AI comes about, there will be hardly any difference between 'human' and 'machine'. They will be much the same, which simplifies things, I think.
If the religious are correct about a benevolent god, then such a scenario is highly unlikely and probably not worth worrying about.
I am not following, to be honest. A benevolent god didn't prevent the Black Death, the world wars, or the myriad horrid things we have done to ourselves or have had done to us. S/he certainly didn't save the dinosaurs, and in light of the many tragedies in our past I don't see that we're special in that regard. As a species, we've only just recently come to a point where we're no longer on the razor's edge between survival and extinction, and even still, we could be wiped out by a few really nasty circumstances in the blink of an eye.
If your inclinations lead you away from the safety of a deity, then things can certainly look much darker. I do not think we should fear super-smart machines. I would fear the power of our own brains to succumb to pleasure, and the inevitable plug-in device that will seduce us with unbounded pleasure 24/7/365.
I don't see this as a problem either. Global society is too large and diverse for all people to succumb to 'unbounded pleasure', I think; there will always be people who want to experience life as it is and not as it should be. And if there aren't, so what? Is this really a bad thing? I mean, if we have smart (but non-sentient) machines that can do all the work, such that money becomes obsolete and labor isn't required, then why shouldn't we enjoy ourselves? People have always fretted about society becoming 'soft', 'complacent' and lazy. Think of how people used to (and sometimes still do) worry that radio/TV/comic books/video games/the Internet corrupts society, makes us lazy, and will create generations of wasted hulks of human beings.
Maybe plugging into unbounded pleasure will make that a reality, but by that point, if we don't need to work, then what's the harm?