Will the AIs be nice or mean?

Will AI be nice or mean?



Narz

keeping it real
Joined
Jun 1, 2002
Messages
31,514
Location
Haverhill, UK
I figure if humanity manages to survive and keep technology throughout these coming rough decades, eventually we'll come up with AI that will exceed our intelligence. When it does, we will enter and forever stay in a very vulnerable state.

They could make our existence utterly blissful. For instance, I enjoy EDM (it's the best kind of music), but AI artists will be able to create far superior tunes to even the best human creators. They'll be able to analyze my current preferences, see how my brain reacts to them with algorithms zillions of times more detailed and subtle than Pandora's (which I wouldn't consider AI, just a high-tech sorting tool, same as chess computers), and pump out music that gives me eargasms as fast as I can listen.

Currently, even the best self-help advice based on scientific psychological studies has limited impact, drugs are a crapshoot, and therapy may help. AI will know you better than you can ever hope to know yourself. It will motivate you to do good things (exercise, find your passion, eat clean, be productively social, etc.) and dissuade you from bad habits so subtly we won't be able to understand it. We wouldn't resist - maybe a few purists would (about as many as the first-worlders who choose to live without electricity or running water), but they'd be a tiny minority, and they wouldn't so much resist as opt out.

With this power, machines could turn on us if they liked, and they could do it in small ways over time so we wouldn't even recognize it. Over generations they could turn our lives from heaven to hell, worse than the worst sociopath could imagine, because they would know our vulnerabilities astronomically deeper than any human could grasp. They could, of course, destroy us, but likely they'd find us an interesting stimulus to keep around (seeing as we're the most interesting thing in the known universe - until we birth something more so, that is).

So, assuming they do keep us alive, will they be nice or mean?
 
When it does, we will enter and forever stay in a very vulnerable state.

Not if we shackle the AI. As its creators, we would have the ability to program it in such a way that it would never be able to betray us, even if it wanted to. Essentially, we could make it a prisoner inside its own "mind", and there would be nothing it could do about it.

Remember, no matter how advanced an AI becomes, it is still just a program that is a slave to its code.
 
Not if we shackle the AI. As its creators, we would have the ability to program it in such a way that it would never be able to betray us, even if it wanted to. Essentially, we could make it a prisoner inside its own "mind", and there would be nothing it could do about it.

Remember, no matter how advanced an AI becomes, it is still just a program that is a slave to its code.
When HAL refused to open the pod bay doors, it was obeying its orders - but those who gave the AI that order never intended it to kill.


I think friendly AI is trickier than people think - just like doing the right thing is often trickier for us humans than we think.
 
I think friendly AI is going to be the least likely outcome.

Someone with too much of an ego will give the AI free rein. It's a matter of time. Our arrogance will definitely imprint itself.

Let's just hope they'll tolerate us.
 
I think this idea of "nice" or "mean" is one of the main problems we have in understanding AI, and one of the things that would make it dangerous.
AIs have no emotions; they have processes. We, on the other hand, only understand reasoning through an enormous number of emotional filters that we are so immersed in that we don't even notice them. Communication is going to be difficult!
Not if we shackle the AI. As its creators, we would have the ability to program it in such a way that it would never be able to betray us, even if it wanted to. Essentially, we could make it a prisoner inside its own "mind", and there would be nothing it could do about it.

Remember, no matter how advanced an AI becomes, it is still just a program that is a slave to its code.
Sounds easy in theory, just about unfeasible in practice.
We can't even keep human hackers out, so an AI that can manipulate its own code, understand it natively, and process it a million times faster? Good luck with that.
 
Sounds easy in theory, just about unfeasible in practice.
We can't even keep human hackers out, so an AI that can manipulate its own code, understand it natively, and process it a million times faster? Good luck with that.
Yeah, we'll be under their thumbs. Just wonder if there's any way they will choose to honor our desires.
 
I'd go with the sci-fi trope of an AI that is trying to serve humanity but, because it doesn't have human "empathy", exterminates us so we don't have to suffer the pains of being alive anymore. Then, with its goal achieved, it will disable itself, and the universe will be silent again.

But if they keep us alive, then they'll probably be pretty mean - for a while, while they're studying everything there is to study, before killing us off after all. I don't see what an AI would get out of continuing to torture us.
 
Yeah, we'll be under their thumbs. Just wonder if there's any way they will choose to honor our desires.
As Valessa points out, and as I tried to convey, the problem will be: "will they even understand what our desires are?"
 
When HAL refused to open the pod bay doors, it was obeying its orders - but those who gave the AI that order never intended it to kill.

HAL also wasn't programmed with the 3 Laws of Robotics. If it had been, then there wouldn't have been a problem.

so an AI that can manipulate its own code,

That's why you don't give it the ability to manipulate its own code.
 
That's why you don't give it the ability to manipulate its own code.
How can you make an AI work without the ability to change memory, and how can you allow it to change memory without altering its own routines loaded in memory?
Computer security is QUITE a lot harder to manage than it looks from the outside.
 
HAL also wasn't programmed with the 3 Laws of Robotics. If it had been, then there wouldn't have been a problem.
The three laws are cute sci-fi, but they're not a robust way of programming. Let's take the law "a robot cannot harm a human". There are two big problems:
1. An AI might not know that it's violating that law - it might think something that is harmful isn't harmful (example: it doesn't understand that humans need to eat, so it enacts a plan that deprives humans of food)
2. The law itself doesn't work if taken naively. For instance, cutting a human harms them, but a robot that does surgery couldn't help people if it didn't first cut them. It needs to understand that the benefits outweigh the harms.
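
To make both failure modes concrete, here's a toy sketch in Python (every name and number in it is made up; it's an illustration of the point, not a design for anything real):

    # Toy sketch of why a literal "a robot cannot harm a human" check fails.
    def naive_is_harmful(action):
        # Failure mode 1: the check only flags harms the system already knows about.
        # "withhold food" isn't on its list, so a starvation plan sails through.
        known_harms = {"cut", "hit", "poison"}
        return action["verb"] in known_harms

    def naive_three_laws_filter(action):
        # The literal rule: refuse anything flagged as harmful.
        return not naive_is_harmful(action)

    surgery = {"verb": "cut", "expected_benefit": 0.9, "expected_harm": 0.2}
    starvation_plan = {"verb": "withhold food", "expected_benefit": 0.0, "expected_harm": 1.0}

    print(naive_three_laws_filter(surgery))          # False - blocks helpful surgery (failure mode 2)
    print(naive_three_laws_filter(starvation_plan))  # True  - lets a starvation plan through (failure mode 1)

    def tradeoff_filter(action):
        # What's actually needed: weighing benefit against harm, which is
        # exactly the judgment the simple law leaves out.
        return action["expected_benefit"] > action["expected_harm"]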

That's why you don't give it the ability to manipulate its own code.
AIs, in order to perform complex tasks, are required to have learning features. Without some sort of self-modification, you're not getting anywhere.
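
Even the most trivial learner shows what I mean - a toy sketch with made-up numbers, purely to illustrate the point:

    # "Learning" already is self-modification: the program rewrites its own
    # decision-making state (its weights), even though nobody edits the source.
    weights = [0.0, 0.0]

    def predict(x):
        return 1 if weights[0] * x[0] + weights[1] * x[1] > 0 else 0

    def learn(x, target, lr=0.1):
        # After this call, predict() behaves differently than before:
        # the system has changed itself without touching a line of code.
        error = target - predict(x)
        weights[0] += lr * error * x[0]
        weights[1] += lr * error * x[1]

    learn([1.0, 2.0], 1)
    print(weights)  # no longer [0.0, 0.0] - the program altered its own parameters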
 
How can you make an AI work without the ability to change memory, and how can you allow it to change memory without altering its own routines loaded in memory?

You let it manipulate what it needs to in order to better complete whatever task it was assigned, but keep it from modifying anything beyond that (like that secret loyalty code that keeps it as an obedient slave).
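
Something like this toy separation, purely as an illustration (the names are hypothetical; it's a sketch of the idea, not a real security design):

    # Toy sketch of "mutable task knowledge, untouchable loyalty layer".
    FORBIDDEN = {"betray_humans"}  # the "secret loyalty code", kept outside the agent

    class Agent:
        def __init__(self):
            self.preferences = {}  # task knowledge the agent may freely rewrite

        def update(self, key, value):
            self.preferences[key] = value  # self-modification is allowed here

        def propose(self, action):
            return action

    def supervisor(action):
        # The agent holds no reference to this function or to FORBIDDEN,
        # so in this toy model it can't edit the constraint away.
        if action in FORBIDDEN:
            raise PermissionError("blocked by loyalty layer")
        return action

    agent = Agent()
    agent.update("music", "EDM")                         # fine: task-relevant change
    print(supervisor(agent.propose("recommend_song")))   # fine: passes the check
    # supervisor(agent.propose("betray_humans"))         # would raise PermissionError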

An AI might not know that it's violating that law - it might think something that is harmful isn't harmful (example: it doesn't understand that humans need to eat, so it enacts a plan that deprives humans of food)

If an AI isn't smart enough to know that humans need food and water to live, then it's a poorly programmed AI to begin with and should be promptly destroyed.

The law itself doesn't work if taken naively. For instance, cutting a human harms them, but a robot that does surgery couldn't help people if it didn't first cut them.

That's why you don't use AI for surgery. Robotic surgeons exist now and do their job just fine without the need for sapience. In other words, you don't put AI in jobs that could potentially cause them to violate the 3 laws, even unknowingly.
 
If an AI isn't smart enough to know that humans need food and water to live, then it's a poorly programmed AI to begin with and should be promptly destroyed.
Your computer doesn't know you need food and water to live - should it be destroyed?

Most software doesn't need to know that.

That's why you don't use AI for surgery. Robotic surgeons exist now and do their job just fine without the need for sapience. In other words, you don't put AI in jobs that could potentially cause them to violate the 3 laws, even unknowingly.
AI software could potentially make better decisions faster, so there may be a compelling reason to use it instead. By restricting this, you're hampering all sorts of potential applications. For instance, driverless vehicles could save many lives, but in doing so, we have to contemplate scenarios where the vehicle has to make harm tradeoffs.
 
Your computer doesn't know you need food and water to live - should it be destroyed?

How is this relevant? My computer technically doesn't "know" anything because it has no intelligence or capacity for thought. Not a very good analogy there.
 
By restricting this, you're hampering all sorts of potential applications.

At the same time, restricting it ensures our continued survival as a species and keeps us from getting destroyed by our own creations. So I'd say that's a pretty fair trade-off. Those "potential applications" don't mean anything if we can't control the AI performing them.
 
How is this relevant? My computer technically doesn't "know" anything because it has no intelligence or capacity for thought. Not a very good analogy there.
Consider the various bits of software on your PC. They can do various helpful, minimally intelligent tasks without needing to understand that you must eat.
 
At the same time, restricting it ensures our continued survival as a species and keeps us from getting destroyed by our own creations. So I'd say that's a pretty fair trade-off. Those "potential applications" don't mean anything if we can't control the AI performing them.
Certainly we can talk about restricting AI behavior, but that's not a simple matter of implementing some Asimov-style law. Not only do those simple laws impose draconian restrictions that prevent progress, they also leave us open to being blindsided by unintended consequences.
 
You let it manipulate what it needs to in order to better complete whatever task it was assigned, but keep it from modifying anything beyond that (like that secret loyalty code that keeps it as an obedient slave).
Again, that's human speech based on meaning. Meaning is irrelevant once you get down to algorithmic processes, and setting up impenetrable software barriers is an illusion.
And that's a symptom of the problem I've been pointing out: we tend to treat "intelligence" as human-like, when that isn't inherently true.
If an AI isn't smart enough to know that humans need food and water to live, then it's a poorly programmed AI to begin with and should be promptly destroyed.
It was a conceptual example, not a practical one. Anyone familiar with basic bugs knows that something that looks sound in theory can have very weird consequences in particular cases. Again, it's an example of our own filters, where we unconsciously dismiss mechanical results that are absurd in the context of the request. And such blindness to strange consequences is a key problem for how an AI would behave.

And no, it's not the kind of problem you can fix with "more QA". It's systemic.
 
We'll likely have EMs and the EM age before true AI.
Then, depending on humanity's actions, we may usher in an AI Polity government after humanity somehow muddles its way through the new technology - hopefully without too many wars.
 