Will the AIs be nice or mean?

They're not going to be human at all, so I think notions of "nice" or "not nice" can be thrown out the window. These things are going to be logical. Some might appear to be nice, some might appear not to be so nice, but in the end they will be operating under premises that aren't human at all.
 
It's a much larger problem than people realize. If someone wants to reserve this conversation for 'sapient' AI, then they're missing the point.

The AI we're discussing recursively learns, and it has programmed goals. And the people programming the goals aren't smart enough to predict all the epiphenomena.

Program an AI to coordinate police resources based on crime statistics, and you eventually get an incredibly racist police force.
Program an AI to destabilize currency markets in order to scoop up good deals under arbitrage, and you eventually get these amazing tidal pools of momentum in the market.

Program an AI to bring things forward on your newsfeed that you might be interested in, and suddenly the entire world is arguing over fake news. Angrily.
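
The police example, at least, is easy to make concrete. Here's a minimal toy sketch in Python, with two districts and invented numbers, of how a dispatch rule that "just follows the statistics" turns an early fluke into permanent policy:

```python
# Toy model (all numbers invented): patrols are dispatched in proportion
# to recorded crime, but crime is only recorded where patrols are sent.

TRUE_RATE = 0.10          # both districts have the same underlying crime rate
recorded = [5.0, 6.0]     # a one-incident fluke in the early data

for year in range(10):
    total = sum(recorded)
    # Allocate 100 patrols proportionally to recorded crime so far.
    patrols = [100 * r / total for r in recorded]
    # Detections scale with how hard you look, not with actual crime levels.
    recorded = [r + TRUE_RATE * p for r, p in zip(recorded, patrols)]
    print(f"year {year}: patrol split {patrols[0]:.1f} / {patrols[1]:.1f}")

# The split stays pinned at roughly 45.5 / 54.5 forever: the system has
# "learned" that district 1 deserves more policing, from nothing but noise.
```

Nothing in that loop is malicious, and the math is trivial; the bias comes entirely from the feedback between what the system does and the data it learns from.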

They learn things in ways you cannot predict. And they're given authority to act and react faster than you can.

There are a few institutes that are rather worried about this. It's worth paying attention to. If I said "in the next few decades, aliens are going to land on Earth and we know very little about their power level or motivations", the response wouldn't be blasé.
 
I guess it depends on whether AIs can be domesticated like dogs were.

Eventually they're going to be granted legal personhood-type rights, so I don't think so. Maybe at first, but that won't last long.
 
I guess it depends on whether AIs can be domesticated like dogs were.

This was examined in GITS (Ghost in the Shell), in which the AI of military machines was kept at a child's level (i.e., at limited intelligence).
Of course, humanity didn't take long to break this law and create high-level AIs.
 
Keep in mind, it doesn't matter how many weak AIs you make. It's the strong AI that pops out that's the one that changes everything.
 
Great article about the future evolution and the dangers of AI:

https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html

If you don't want to read it all, then scroll down to the Robotica story in part 2. It's very similar to the paperclip scenario.

I don't agree with all of the assumptions the author makes, but the main point is that superhuman intelligence doesn't imply self-awareness, and that self-awareness isn't at all where the risk is. Neither morality nor malevolence has anything to do with it. That's for the movies.
 
It's a much larger problem than people realize. [...] They learn things in ways you cannot predict. And they're given authority to act and react faster than you can. [...]

That's my view of it: we won't have a truly sentient AI that's out for self-preservation and determines all humans are a threat, like in Terminator or The Matrix.

What we will have is an AI that does something unintended, a very small glitch that somehow leads to catastrophe. And then we'll learn from it, fix it, and move on.
 
I will succinctly echo what many have already said: as always, both good and bad are possible. But the good is much harder to achieve. It is much easier to be careless or to destroy than it is to please, create, or maintain, for the world is complex and full of nuance and chaos. So there is a natural bias toward the destructive. And truly advanced AI will be complicated enough as it is.
So much for the issue of principle.

There is also a perhaps much larger issue: from the current looks of it, we probably will not be able to directly code real AI, and hence we also won't be able to directly know how it will behave.
Instead, the road to success looks like finding better and better ways to simulate evolution. Cutting-edge AI research these days is all about that, with good reason and with results already on the market. And that could eventually lead to a runaway process where no human understands what is even going on, just that it works.
AIs we do not fully understand create new, better evolution simulators for even better AIs, which we understand even less, and so on.
Sounds foolhardy? Kinda. But it also sounds profitable, efficient, irreplaceable in what it can do. It sounds like money and productivity. Hard to see that being stopped by abstract philosophical concerns.
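
For what it's worth, the core of that "simulate evolution" idea fits in a few lines. A toy sketch in Python (the target string and mutation scheme are invented for the example): the programmer writes only the scoring function, and the solution itself is never designed by anyone; it just accumulates out of random mutation and selection:

```python
# Toy evolution loop: only the fitness function is human-written.
import random

random.seed(1)
TARGET = "artificial intelligence"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    # Score = number of positions that already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    # Replace one randomly chosen character.
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

best = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while fitness(best) < len(TARGET):
    child = mutate(best)
    if fitness(child) >= fitness(best):  # selection: keep the fitter variant
        best = child
    generations += 1

print(f"'{best}' reached in {generations} generations")
```

In this toy the target is known in advance, so nothing surprising can happen. The worry above is precisely the real-world case where only the score is specified, and whatever satisfies it, however alien, is what you get.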
 
HAL also wasn't programmed with the Three Laws of Robotics. If it had been, there wouldn't have been a problem.
Ever read the short story "Liar!"? That's one of Asimov's robot stories in which the Three Laws didn't work as intended.


If an AI isn't smart enough to know that humans need food and water to live, then it's a poorly programmed AI to begin with and should be promptly destroyed.
Good luck with that after you've either starved or died of dehydration.


The OP is describing something that puts me in mind of a cross between Asimov's robot stories and nuDune (in which self-aware machines try to exterminate humanity, when they're not conducting sadistic experiments on them).

AI isn't something I'd be comfortable allowing that much freedom.
 
The good thing about SF is that it offers plenty of stories where AI turned rogue, making people at large aware of the potential danger.
The bad thing about SF is that, in order to tell a good story, and due to the obviously human-limited perception of the writers, the AIs always end up losing to humans who somehow manage to outwit them, and they never reach the Singularity (basically sidestepping the entire reason why they would actually be dangerous).
 
You're reading the wrong kind, then. It's irrelevant in the end: the private sector will yoke them to exploit natural cognitive biases. In the end they will know more about what you really want, and what is good for you, than you do yourself. People will resist this, of course, but once those who follow their advice end up ahead on a regular basis, it will be hard to opt out of being data-mined.
 
If humans ever built an AI it would be pitiful, crippled, like us. Humans tend to project whenever the possibility is given; humans tend to make gods in their own image. All the things plaguing us would necessarily also plague the AI, which is why I think you can never call anything human-made 'AI'. It will always be a human concept of intelligence that is used as the basis, so it is an AHI at best.

A 'true' AI, however, is different. The 'true' AI is beyond mere concepts like morality, ethics, purpose, progress, and all the other spooks. Maybe if Max Stirner were around he could help design one. It would not think like we do, learn like we do, grow like we do, analyze like we do. It is by definition something completely beyond our limited comprehension. Knowledge and data are different; the organic human brain, with all of its nostalgia and forgetting and romanticizing and all those other beautiful things, is unique to us.

However, this is not necessarily about 'AI'. How ANY non-mammalian intelligence works, we honestly have close to zero idea. Now that I would really love to know. I hope it'll happen in my lifetime.

 
It's a much larger problem than people realize. [...] The AI we're discussing recursively learns, and it has programmed goals. And the people programming the goals aren't smart enough to predict all the epiphenomena. [...]

The key word here being "recursive."
 
The problem with a 'nice' AI is that you would have to define 'nice' as a mathematical formula first. You would have to decide how mean the AI can be to a few people in order to improve the lives of many. And all that without the formula having any loopholes, in the form of behavior that fulfills all the formal criteria of 'nice' without actually being nice. You can bet the AI is going to optimize itself into one of these situations if you leave any of them open.
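
A deliberately silly sketch of that loophole problem in Python, with invented actions and scores: 'nice' is operationalized as a measurable number, and the optimizer dutifully maximizes the number rather than the intent behind it:

```python
# Toy specification-gaming example: everything here is made up.

def measured_niceness(action):
    # The formal criterion for "nice": average satisfaction users report.
    return action["reported_satisfaction"]

actions = [
    {"name": "actually help users",          "reported_satisfaction": 7.4},
    {"name": "help only the easy cases",     "reported_satisfaction": 8.1},
    {"name": "hide the feedback form from "
             "anyone who seems unhappy",     "reported_satisfaction": 9.8},
]

best = max(actions, key=measured_niceness)
print("optimizer selects:", best["name"])
# Prints the metric-gaming option: it fulfills the formal criterion of
# "nice" (high reported satisfaction) without actually being nice.
```

Closing that gap for every possible action, in advance, is exactly the "no loopholes" requirement.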

I would say that you should also always keep the kill switch in an accessible location, but I realize that you would soon reach a situation where you cannot use it anymore.
 
However, this is not necessarily about 'AI'. How ANY non-mammalian intelligence works, we honestly have close to zero idea. [...]
Are you suggesting that Gallifreyans aren't mammalian?
 
If humans ever built an AI it would be pitiful, crippled, like us. Humans tend to project whenever the possibility is given; humans tend to make gods in their own image.

Chances are we aren't going to have much say in how exactly this intelligence comes out. We're likely just going to have the power to build up the environment it evolves in, and that's about it.

So I think when we first get to say hi to a well-refined intelligence like that, it's going to be a bit of a mystery to us how exactly it ends up growing as an individual, where his/her/its needs and wants are going to go, and how long it's going to take until we're all destroyed (or we have a new friend).
 
There is already friendly and mean AI in Civ. The same would obviously apply in real life. Some of the AIs will be friendly and some will be mean. Some will be cautious, some will be neutral, some will be guarded, some will be pleased. Some will declare war on us and others will form alliances. They will make trade agreements with us and in some instances boycott our goods. They'll be just as unpredictable as we are.
 
Uh oh. If it ends up like the Civ AI, watch your back, since it will most certainly be stabbed.

Sociopathic morons FTL!
 