Artificial Intelligence, friend or foe?

Glassfan

I'm reading a couple of books on AI and I'd like to hear some of your opinions. One interesting quote I'll relate:

"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." - Eliezer Yudkowsky, Machine Intelligence Research Institute.

I'm reading Our Final Invention: Artificial Intelligence and the End of the Human Era, by James Barrat, and
Superintelligence: Paths, Dangers, Strategies, by Nick Bostrom.

Humans have come to dominate the planet by having a higher intelligence than other life forms. So what happens when an entity emerges with superior intellect to our own?
 
We'll find out when we blunder into contact with aliens. I'm not sure what com-pu-ters have to do with intelligence though. Perhaps you should elaborate on that.
 
Hello 327. A fellow night-owl?

I didn't mention computers above, but yes, "artificial" here means other than organic, i.e., machine intelligence. Certainly the research being done around the world on AI today involves increasingly sophisticated and powerful computers. Supercomputers. Banks and clusters of supercomputers.

It's interesting you mention aliens. I've always suspected that the long-term failure of SETI had to do with wasteful, energy-inefficient biological civilizations giving way to hyper-efficient machine intelligences.

This then addresses the OP - friend or foe? Will AI supplant us? What happens when humans become the second most intelligent species on this planet?
 
The only intelligence in AI is human intelligence. I honestly think it's a complete misnomer. Deep Blue isn't any more intelligent than a calculator. It just computes more and, apparently, faster.
 
Yes, the definition of intelligence is part of the discussion.

artificial intelligence, noun; the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. - The New Oxford American Dictionary.

Research into AI by various entities has ranged from improving the computing power of existing systems (chips and code) to whole brain emulation. And while game-playing algorithms (one-trick ponies) get some headlines, the truth is that basic AIs control Wall Street, airline reservations, robotic factories, Google Maps (driving), medical diagnostics, speech and face recognition systems, etc., etc. Apparently, the world robot population now exceeds 10 million and is growing exponentially. That is to say, the technological landscape is now forming for the advent of the singularity.

Multiple entities in this country (Silicon Valley) are working on AI, and so is the military. DARPA has funded, and continues to fund, research in this area. China and Europe also.

Back in the 90s when Vernor Vinge wrote his famous essay, Technological Singularity, this vast infrastructure was in its infancy. Now it's become fully developed, and it's only a matter of time before Deep Blue becomes Deep Thought.

And yes, in terms of "misnomer", there are certainly pedants who debate the name of the thing, and miss the point.
 
'Translation between languages'... that worked out well. Now, you see, that is actually a task that requires intelligence. And computers are very, very bad at it.

Was that ad hoc enough?
 
AI is a tool. That's what it is, for now. But it's a tool that will be owned by a private consortium of people. We live in an electronic world, but with human reaction times.

So, the concern is whether a private consortium will eventually build an AI that's quote-unquote "dangerous" to people, in that it so drastically alters the nature of our economies as to cause widespread suffering.

The AI itself is amoral. It's a tool. But it will be made by people who're either exercising their morality, or failing to engage in due diligence on long-term consequences.

I happen to think there's a real risk. The effective reach of a true AI can very quickly spiral out of control, or out of predictability.
 
Humans have come to dominate the planet by having a higher intelligence than other life forms. So what happens when an entity emerges with superior intellect to our own?

The answer is: It's complicated.
There is little doubt that the laws of physics leave room for beings that are superior to humans. Rewrite the human genetic code and the RNA transcription mechanism, et voilà: you have a quasi-human that is immune to all virus attacks, because the viral RNA would not be translated anymore. Make a factory that produces androids with human-like capabilities and they will "outbreed" humans within a decade. Even if it turned out that it's basically impossible to beat human intelligence in a fair competition, upgrades to the human are certainly doable. So intelligence is perhaps not the thing to worry about.

Anyways, superhuman intelligence.
It is almost impossible to predict what amount of intelligence is actually good for you. To be intelligent means to be large and to consume a lot of energy. A smart human outsources most of his important decisions to experts, because it's convenient and sometimes because those decisions are beyond his available intellectual resources. Also, you don't need to be Einstein to understand Einstein. So I would argue that it's not even decidable whether an "optimized" human would be smarter or dumber than a current human. In the future there will likely be intellectual hierarchies and ecosystems where the capability to solve a problem better than anyone else gives you an ecological niche.
 
AI is a tool. That's what it is, for now. But it's a tool that will be owned by a private consortium of people. We live in an electronic world, but with human reaction times.

I agree that AI is a tool. It can only do what someone equips it to do. And I believe we are still a long way from a self-modifying AI that can equip itself to do new things. But I disagree that AI is owned by a private consortium of people. There are plenty of algorithms that are public, which anyone can use. What is not public, and what usually limits the real-world applicability of AI, is the data. Success will not depend on who makes the best AI, but on who can accumulate the best data.

I happen to think there's a real risk. The effective reach of a true AI can very quickly spiral out of control, or out of predictability.

I would go further and say that the line between a computer program and an AI is predictability. If you can predict what it is going to do (without executing the AI code), it is not an AI. So I would say that spiraling out of predictability is a feature, not a bug. It only gets dangerous if it gets out of control. Fortunately, it is very easy to put safeguards into an AI so that it will never take a particular action. The full responsibility for the actions of an AI is still in the hands of the human controllers.
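
To make concrete what I mean by a safeguard, here is a rough sketch in Python (the action names and the forbidden list are invented for illustration): whatever the model proposes, an ordinary, predictable piece of code gets the final say.

[CODE=python]
# Hypothetical sketch of a hard safeguard around an AI's proposed actions.
# The model may propose anything; plain, predictable code decides what runs.

FORBIDDEN_ACTIONS = {"delete_database", "disable_safety_interlock"}

def propose_action(flags):
    """Stand-in for the AI's decision; in reality this would be a learned model."""
    return "delete_database" if "low_disk" in flags else "archive_old_logs"

def execute_safely(flags):
    action = propose_action(flags)
    if action in FORBIDDEN_ACTIONS:
        # The veto itself is not learned, so it stays predictable.
        return f"blocked: {action}"
    return f"executed: {action}"

print(execute_safely({"low_disk"}))  # blocked: delete_database
print(execute_safely({"all_ok"}))    # executed: archive_old_logs
[/CODE]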
 
I'm not quite sure where this unpredictability would come from - except from human input (see Civ 6 and its AI bugs). An AI that would be unpredictable is useless. It's supposed to do what it's programmed to do. If it doesn't, it's malfunctioning or bugged. By the way, a learning AI is already in use by NASA. A self-repairing AI would be even more useful, but we're not quite there yet. None of which affects predictability, however.
 
The unpredictability comes from the fact that it's been programmed to execute too many permutations and combinations for a team of builders to totally understand the consequences once real-world inputs are fed into the AI.

And I'll disagree that the risky AI is not privately owned. Market trading algorithms, for example, operate at speeds that basically make them autonomous in human timeframes. And those trading algorithms can and will be privately owned. They're designed with a goal in mind - make the owner more money. They're not 'designed' to care about the aggregate. The owners individually assume that 'the system' can handle any combination of trades that they perform. Sure, there are many public pieces of software. But are they robust enough to protect from AI? Why think so?

But, a lot of our data is privately owned. When it comes to our data, we're the product and not the customer. How much of the market value of the various social apps is embedded in the data they have regarding us?
 
I'm reading a couple of books on AI and I'd like to hear some of your opinions. One interesting quote I'll relate:

"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." - Eliezer Yudkowsky, Machine Intelligence Research Institute.

I'm reading Our Final Invention: Artificial Intelligence and the End of the Human Era, by James Barrat, and
Superintelligence: Paths, Dangers, Strategies, by Nick Bostrom.

Humans have come to dominate the planet by having a higher intelligence than other life forms. So what happens when an entity emerges with superior intellect to our own?

Personally I doubt we will have a non-DNA-tied "AI", that is, a being whose "intelligence"/intelligence isn't tied to DNA parts (which work by and large outside our scope and knowledge). Basically, if it is a machine/non-DNA, it won't be more than an X-times-more-powerful version of a current computer, and current computers have an intelligence of zero.
Remember, machines don't form a context of a state. Much like a stone doesn't form a context so as to go into free fall; if you pick it up and let it go, it will fall due to natural laws, but won't have an experience of anything going on. Likewise with a computer: it has imposed "laws" (code, and ties to hardware) to run stuff, which is what it does, without any intelligence or experiencing.
 
I'm not quite sure where this unpredictability would come from - except from human input (see Civ 6 and its AI bugs). An AI that would be unpredictable is useless. It's supposed to do what it's programmed to do. If it doesn't, it's malfunctioning or bugged. By the way, a learning AI is already in use by NASA. A self-repairing AI would be even more useful, but we're not quite there yet. None of which affects predictability, however.

The decision tree of a sufficiently advanced AI is opaque. Although the builders can tune the outcome in aggregate, they cannot predict what an AI will do in any particular case. And for any individual decision it is at least impractical to find out why the machine took the decision it took. At my new job, we are letting software make decisions with real-world impact based on a machine-learning algorithm (it does not have to be NASA; there are plenty of public algorithms available - if you have the data). There are obvious cases where one look at the data tells you what the decision should be. But for the cases at the edge, we do not understand why the decision went one way in one case and the other way in the next.
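
To give a feel for what I mean by cases at the edge, here is a toy sketch with entirely synthetic data (scikit-learn, not our actual setup): the classifier is confident about most cases, but for a handful it is nearly undecided, and those are the ones nobody can explain afterwards.

[CODE=python]
# Toy sketch: train a classifier on synthetic data and surface the "edge"
# cases where the model is nearly undecided. Not our production system.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Probability of the positive class for held-out cases.
proba = model.predict_proba(X_test)[:, 1]

# Edge cases: predicted probability close to 0.5. Explaining why one of
# these tips one way and a near-identical one tips the other way is hard.
edge = np.abs(proba - 0.5) < 0.05
print(f"{edge.sum()} of {len(X_test)} held-out cases sit near the decision boundary")
[/CODE]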


And I'll disagree that the risky AI is not privately owned. Market trading algorithms, for example, operate at speeds that basically make them autonomous in human timeframes. And those trading algorithms can and will be privately owned. They're designed with a goal in mind - make the owner more money. They're not 'designed' to care about the aggregate. The owners individually assume that 'the system' can handle any combination of trades that they perform. Sure, there are many public pieces of software. But are they robust enough to protect from AI? Why think so?

I admit that there are privately owned algorithms. But these are usually limited to applications with a clearly defined rule set - essentially games, for which you can generate almost perfect data and have a clear definition of success. High-frequency trading is one of these games, which is indeed linked to real risk. However, it is somewhat easy to safeguard (which I believe is done already) by implementing speed bumps that prevent trading if the results are too extreme. These bring the decisions back to human time frames.
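
A speed bump of that kind can be completely mundane code sitting between the algorithm and the exchange. A hypothetical sketch (the threshold and names are made up):

[CODE=python]
# Hypothetical "speed bump": orders from a fast trading algorithm are held
# back whenever the proposed price strays too far from a reference price,
# pulling the decision back into human time frames.

MAX_DEVIATION = 0.05  # block orders more than 5% away from the reference price

def vet_order(proposed_price: float, reference_price: float) -> str:
    deviation = abs(proposed_price - reference_price) / reference_price
    if deviation > MAX_DEVIATION:
        return "held for human review"
    return "sent to exchange"

print(vet_order(proposed_price=94.0, reference_price=100.0))   # held for human review
print(vet_order(proposed_price=100.5, reference_price=100.0))  # sent to exchange
[/CODE]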

But for most applications, what is really missing is not the algorithm, but the data. If you wanted to have a system that cares about the aggregate, what would you use as a success indicator? The problem is that most of the indicators are much too broad and depend on a lot of parameters outside your model, such that your result will not work at all. Even if you find a way to improve the indicators you feed into the algorithm, you might just improve those indicators and not the aggregate wealth. The discrepancy might be immediately obvious in another parameter, which your AI does not care about, because it was not told to.

The danger of AI is not so much in the algorithm, but in the data. If you feed a learning AI a dataset that discriminates based on certain characteristics of people, the AI will learn to discriminate as well. And unlike humans, who are able to self-reflect or who just die after some time, the AI might keep doing that forever. This might result in a self-fulfilling prophecy: people from a certain ethnic or cultural background are not given a chance, because the data clearly shows that they will not succeed, and they do not succeed because they are not given a chance. Even if you are aware of that problem, it is quite hard to prevent leakage of such information to the algorithm, because AI algorithms are very good at identifying such information through proxies, like names or locations that people of a certain background share.
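
Here is a toy illustration of that proxy effect (all data synthetic, numbers invented): the sensitive attribute is deliberately left out of the training features, but a correlated proxy - think postcode - remains, and the model reproduces the biased pattern anyway.

[CODE=python]
# Toy sketch of proxy leakage: the sensitive attribute is excluded from the
# features, but a correlated proxy remains, so the model reconstructs the
# biased historical outcomes anyway. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                                 # sensitive attribute
postcode = np.where(rng.random(n) < 0.9, group, 1 - group)    # proxy, ~90% correlated
skill = rng.normal(size=n)

# Historical outcomes are biased in favour of group 0, regardless of skill.
outcome = (skill + 1.5 * (group == 0) + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

X = np.column_stack([skill, postcode])    # note: 'group' itself is NOT a feature
model = LogisticRegression().fit(X, outcome)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted positive rate {pred[group == g].mean():.2f}")
[/CODE]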

But, a lot of our data is privately owned. When it comes to our data, we're the product and not the customer. How much of the market value of the various social apps is embedded in the data they have regarding us?

I would say: almost all of it.
 
Here's a short recent article;

How worried should we be about artificial intelligence? I asked 17 experts.

"The transition to machine superintelligence is a very grave matter, and we should take seriously the possibility that things could go radically wrong. This should motivate having some top talent in mathematics and computer science research the problems of AI safety and AI control."Nick Bostrom, director of the Future of Humanity Institute, Oxford University

One issue of note. Once the singularity occurs, and serious human-level AI (and beyond) is achieved, won't the ACLU (Amnesty International, FIDH, Anti-Slavery International, etc.) sue for emancipation? Doesn't holding an intelligent being in bondage or control constitute slavery - outlawed by the 13th Amendment in the US as well as the UN's Universal Declaration of Human Rights?
 
Inevitable, really;

Real-life Robocops will soon replace human police


The first robot police officer will be on patrol in the wealthy United Arab Emirates city by May this year, Dubai Police have confirmed.

Members of the public will be able to report crimes to the multilingual police robot using a touchscreen on its chest.

 
I wonder when it will be vandalized by an actual criminal.

The decision tree of a sufficiently advanced AI is opaque. Although the builders can tune the outcome in aggregate, they cannot predict what an AI will do in any particular case. And for any individual decision it is at least impractical to find out why the machine took the decision it took. At my new job, we are letting software make decisions with real-world impact based on a machine-learning algorithm (it does not have to be NASA; there are plenty of public algorithms available - if you have the data). There are obvious cases where one look at the data tells you what the decision should be. But for the cases at the edge, we do not understand why the decision went one way in one case and the other way in the next.

That's very informative. But the fact that you cannot tell why an AI takes a particular decision doesn't imply it's unpredictable. Predictability is about probabilities, not individual decisions. In short, it's about a pattern, not individual cases. I'm sure that AI can use highly complex algorithms. That is, however, a simulation of how intelligence (or decision making) works. The point is, an AI cannot exceed its parameters. Now, you might argue, neither can we. But then we've entered the realm of philosophy, since everything has its parameters. Psychologically, however, decision making doesn't really take place 'in the mind'. When we think we are making a decision, the decision has actually already been made. It would be interesting to see how that can be simulated.
 
That's very informative. But the fact that you cannot tell why an AI takes a particular decision doesn't imply it's unpredictable. Predictability is about probabilities, not individual decisions. In short, it's about a pattern, not individual cases. I'm sure that AI can use highly complex algorithms. That is, however, a simulation of how intelligence (or decision making) works. The point is, an AI cannot exceed its parameters. Now, you might argue, neither can we. But then we've entered the realm of philosophy, since everything has its parameters. Psychologically, however, decision making doesn't really take place 'in the mind'. When we think we are making a decision, the decision has actually already been made. It would be interesting to see how that can be simulated.

For most of the cases you should know the probabilities, yes. But the problems come with the rare cases, where you have trouble estimating the probability because your data is not sufficient. There will always be cases where you have no idea what your AI will do, because that situation was not covered by your training data. You can put limits on that, and for cases with mostly financial impact this is sufficient, because you can calculate whether the decisions will make you money or not. For example, if you know that the AI makes errors in less than 10% of the cases, but these errors do not cost as much as the benefits from using the AI, you would still use it. But for values which are much harder to quantify this is much more problematic: What error rate would you accept for an AI driving your car, where an error could potentially kill you?
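
The 10% example is just expected-value arithmetic; written out (with invented numbers) it looks like this, and the car case is exactly where the "cost per error" line stops making sense:

[CODE=python]
# Back-of-the-envelope check: is an AI with a known error rate worth using?
# All figures are invented for illustration.

error_rate = 0.10           # AI is wrong in 10% of cases
benefit_per_correct = 5.0   # value gained on each correct decision
cost_per_error = 30.0       # value lost on each wrong decision

expected_value = (1 - error_rate) * benefit_per_correct - error_rate * cost_per_error
print(f"expected value per case: {expected_value:+.2f}")
# 0.9 * 5 - 0.1 * 30 = +1.50, so on average the AI still pays off.
# For a self-driving car there is no sensible number to put into
# cost_per_error, which is the whole problem.
[/CODE]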

An interesting observation is that people tend to hold a machine to a much higher standard than humans. We accept errors from human decisions, but we are much more cautious of machines making the same errors.
 
Now, I am (in general) much more worried about a hypothetical future AI than any modern AI. But the damage they can cause should not be underestimated.

Consider: a sorting algorithm that 'brought forward' news articles and posts that were predicted to interest people ended up with an amazing intellectual divide across a very angry populace. People had fundamentally different information, information that was intended to incense, and it certainly did.
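
The mechanism itself is trivially simple, which is part of what makes the outcome so striking. A toy sketch (items and scores invented):

[CODE=python]
# Toy sketch of engagement-driven feed sorting: each item gets a predicted
# "interest" score for a given user and the feed shows the highest scorers
# first. Nothing in this objective cares whether the resulting mix is sane.

articles = [
    {"title": "Calm policy analysis", "predicted_interest": 0.21},
    {"title": "Outrage piece A",      "predicted_interest": 0.87},
    {"title": "Local news",           "predicted_interest": 0.34},
    {"title": "Outrage piece B",      "predicted_interest": 0.91},
]

feed = sorted(articles, key=lambda a: a["predicted_interest"], reverse=True)
for item in feed:
    print(item["title"])
[/CODE]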
 
Now, I am (in general) much more worried about a hypothetical future AI than any modern AI. But the damage they can cause should not be underestimated.

I tend to be more pragmatic, because we have enough to worry about right now. It is kind of hard to predict how far AI will go. We do not even have much of an idea how far the hardware it runs on can go.

Consider: a sorting algorithm that 'brought forward' news articles and posts that were predicted to interest people ended up with an amazing intellectual divide across a very angry populace. People had fundamentally different information, information that was intended to incense, and it certainly did.

In my opinion the effect of the sorting algorithm is overrated. If the machine is not doing it for them, people will just manually filter and select media that does not stress their cognitive dissonance too much. We have been self-organizing into filter bubbles all along; it is just that now we are actually able to get a glimpse inside other bubbles and recognize the gap.

What has changed is the amount of effort that has to be made to reach a wide audience. Previously, you had to make investments to do that, and as a result you would want a return on those investments and would think about how to use the opportunity. With the ease with which you can now reach a large part of the world has come a lack of responsibility for what is written. The printing press has brought an upheaval of power structures; the internet is doing the same.
 
For most of the cases you should know the probabilities, yes. But the problems come with the rare cases, where you have trouble estimating the probability because your data is not sufficient. There will always be cases where you have no idea what your AI will do, because that situation was not covered by your training data. You can put limits on that, and for cases with mostly financial impact this is sufficient, because you can calculate whether the decisions will make you money or not. For example, if you know that the AI makes errors in less than 10% of the cases, but these errors do not cost as much as the benefits from using the AI, you would still use it. But for values which are much harder to quantify this is much more problematic: What error rate would you accept for an AI driving your car, where an error could potentially kill you?

An interesting observation is that people tend to hold a machine to a much higher standard than humans. We accept errors from human decisions, but we are much more cautious of machines making the same errors.

That might have something to do with this peculiar notion of 'artificial intelligence'. I'm getting the impression that machine errors are indeed judged differently from human ones. Which is somewhat odd, since who made the machines? But we're talking parameters here, not philosophy. Driving a car can already potentially kill you (or someone else), but it's not the same as flying a commercial plane (which is done 95% by computer). The difference, of course, is that the car is usually driven individually. And we, being humans, like to take responsibility. As well we should.

But I'm not sure about your argument here as it relates to AI. It's not about individual predictability, but overall predictability. In many cases it's simply more convenient to have a machine do what a human could do (or used to do). I think that's, basically, the whole argument. Not much intelligence involved there, just a matter of economics.

The printing press has brought an upheaval of power structures; the internet is doing the same.

That seems a bit of an overstatement - in both cases. But I'd agree an extra layer was added. Much of what the internet does now was already covered by the yellow press.
 