The AI Thread

How credible do researchers consider the threat of a strong AI emerging and immediately doing bad things, uncontrolled?
I'm not sure whether the question is specifically "given that strong AI will happen, how likely is it that bad things immediately happen?" This post is mostly about just the "strong AI" premise.

I agree with the others that in the near term, very few AI people see strong AI as a credible risk. But looking decades into the future, a large majority do think it's a significant risk. However, I think (1) they tend to be very uncertain, even confused, about issues they perceive as decades away, and (2) they're unsure how much sense it makes to start preparing for strong AI now (as some folks like Elon Musk, MIRI, and OpenAI argue we should).

I'm aware of several surveys that try to gauge "expert opinion" on the risk of strong AI, such as this one, which says:
Abstract: There is, in some quarters, concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development, and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be 'bad' or 'extremely bad' for humanity.

Note that it does try to address the nefarious superintelligence part of your question. And here's a good figure from the survey:
[figure from the survey omitted]

And a figure from a newer survey, which found similar results:
[figure from the newer survey omitted; it showed forecast timelines with confidence-interval shading, made with Python]

The light gray lines show the results for questions about specific jobs (e.g., truck driver, surgeon).

But it's worth pointing out that the second survey found that if you ask the same question about strong AI with different wordings and framings, you get very different aggregate results, with the timeframes varying by many decades.

Editorializing a bit here: I'm doubtful that "AI expert opinion" on things 50 years from now is even very meaningful. I think the typical AI PhD is an expert in several very specialized domains and isn't equipped to make good guesses about where the field will be in 5 years, let alone 50. Not that I am. Pretty much no one is. And I'm sure they're fully aware how hard it is to take a stab at questions about strong AI, but ~1/3 of them will at least try when asked... probably with a shrug and some mild confusion or discomfort.

One more point: it seems to me the "top experts" lean bearish on strong AI. At the risk of cherry picking, Yann LeCun, Andrew Ng, and Yoshua Bengio are all quite skeptical that we'll have strong AI before sometime around the end of the century. That being said, LeCun at least is sympathetic towards the idea of preparing for strong AI now.

In short, I think the consensus agrees with John McCarthy, who used to joke that we'll have human-level AI sometime in the next 5 to 500 years.
 
AI doesn't have the limitations of human intelligence (skull size, energy consumption, communication bandwidth, etc.); we can give it unlimited space and megawatts of power if needed. So the problem is that a self-improving AI can theoretically become orders of magnitude smarter than humans. Maybe we can initially program it to have human-like morals, ethics, and emotions, but once it becomes much smarter than us, it will be hard to control. But we are still decades or hundreds of years away from that point.
What I am trying to say is that those unlimited processing and memory resources will be directly available to your brain soon enough. We were talking about how you have to train an algorithm to learn - imagine, instead of doing that with code and trial and error, you could just naturally interface with a machine and tell it to 'figure this specific thing out' and off it goes. At that point, you as a person will be functionally indistinguishable from a strong AI. Sure, your brain may not be as fast as an AI supercomputer cluster, but you'll have direct control over those supercomputer clusters, so there won't be much difference.

This is all based on my theory that our ability to interface with machines will advance more rapidly than our ability to build AI. I could be wrong, but given the natural market that exists for machine-human interfaces, I expect more resources will go toward solving that problem than toward AI.
 
Change of subject - @Truthy

What do you think of the notion of AI emerging out of the drone industry rather than out of academic efforts?
 
I think it could happen, because there is a compelling need and economic case for smarter navigation and hazard-avoidance software for drones. Drone makers can either build more and more elaborate navigation routines or pursue a more general intelligence that can handle a rapidly changing world as the drone flies around. Or maybe at some point the navigation routines become so complex that general intelligence emerges from them?

I imagine that navigation and goal-seeking were primary evolutionary forces behind intelligence in living things, and that general intelligence may emerge when the underlying systems become elaborate enough. This may be true even if the individual underlying parts were designed for specific tasks rather than to support general intelligence.
 
Change of subject - @Truthy

What do you think of the notion of AI emerging out of the drone industry rather than out of academic efforts?
I haven't heard that theory before. Do you have a sense of why that would be more likely than "academic efforts"*? Edit: see the post above this one.

*It's impossible to separate academia and the tech industry when it comes to AI. So to me the question sounds like academia + tech industry vs. the DoD + intelligence community.
 
I came up with the theory, but I assume it is not novel. The basic reasoning is that you can make a ton of money if you develop truly autonomous drones. That economic incentive will push the field forward as companies invest in their own R&D. Academia will be heavily involved, for sure, but a lot of the work will be proprietary and not broadly shared for much longer than if, say, the big breakthroughs came directly out of MIT. I also expanded on the concept a bit more in a post right above yours.
 
I came up with the theory, but I assume it is not novel. The basic reasoning is that you can make a ton of money if you develop truly autonomous drones. That economic incentive will push the field forward as companies invest in their own R&D. Academia will be heavily involved, for sure, but a lot of the work will be proprietary and not broadly shared for much longer than if, say, the big breakthroughs came directly out of MIT. I also expanded on the concept a bit more in a post right above yours.
Except you can't, and you won't. The way to truly make loads of money is to make drones that are just dependent enough that you can control them and just autonomous enough that you don't need to do so in routine situations. Anything else is just begging for a lawsuit when a drone oversteps in a non-routine situation, and generally not attractive to potential customers because of this. And that's before you get all sorts of loons breathing down your neck about the ethical issues of enslaving something sentient. Loons that, in this day and age, would actually win. *spits*

The perfect AI from a business perspective is not a mechanical man but an Asimov robot. Something easily controlled and constrained and yet flexible enough to replace man in situations when liability permits it. And something most certainly well clear of the dreaded human rights line.
 
Anything else is just begging for a lawsuit when a drone oversteps in a non-routine situation, and generally not attractive to potential customers because of this.
This is overblown. If this were true, you wouldn't see autonomous cars being developed. We already have robust liability systems in place. While autonomous systems do create challenges for those systems, this is one area where regulators are at least attempting to keep pace with the market and provide a regulatory framework and liability protections to foster development.
 
What I am trying to say is that those unlimited processing and memory resources will be directly available to your brain soon enough. We were talking about how you have to train an algorithm to learn - imagine, instead of doing that with code and trial and error, you could just naturally interface with a machine and tell it to 'figure this specific thing out' and off it goes. At that point, you as a person will be functionally indistinguishable from a strong AI. Sure, your brain may not be as fast as an AI supercomputer cluster, but you'll have direct control over those supercomputer clusters, so there won't be much difference.

This is all based on my theory that our ability to interface with machines will advance more rapidly than our ability to build AI. I could be wrong, but given the natural market that exists for machine-human interfaces, I expect more resources will go toward solving that problem than toward AI.

I'm not that sure about human-computer interfaces working as AI. The issue of how we communicate does not change. Right now we use programming languages as the agreed language of choice between human and computer. If we develop a connection between the human brain and a computer, you still need a common way to communicate. Given the different ways each human communicates, even in thought, we would need an agreed language/protocol for this. If thinking thoughts were as complex as writing a programming language, it would not be fun or, more importantly, sufficiently fast.

If we develop a computer complex enough to work, respond intelligently, and solve problems according to human thoughts, we already have an AI.

But if we want Matrix-movie-style memory or knowledge transfer, that does not seem to be possible, based on how human brains store information. Human thought patterns and memory seem to work intertwined. We don't store memories cleanly like a computer does: new information gets stored in memory and also affects the processing/decision-making circuits. I could be wrong though.
 
This is overblown. If this were true, you wouldn't see autonomous cars being developed. We already have robust liability systems in place. While autonomous systems do create challenges for those systems, this is one area where regulators are at least attempting to keep pace with the market and provide a regulatory framework and liability protections to foster development.
The legal problems around autonomous vehicles are not yet resolved at all. And those are just liable to kill people. That's easy. Who do you sue when your factory AI goes on strike demanding human rights?
 
If we develop a computer complex enough to work, respond intelligently, and solve problems according to human thoughts, we already have an AI.
I do not think this is necessarily true.

The legal problems around autonomous vehicles are not yet resolved at all.
Right but at least there is proactive movement to build out that legal framework in advance of self-driving cars and not after them. This is a very positive model we should continue following for related AI developments.
 
What kind of AI stuff have you used, if you don't mind sharing?

I think you know the general background for this stuff, but here's a short wrap-up for everyone else (although it's not that important):
AI (or machine learning) is basically divided into two parts:
- supervised learning, where you tell the machine what you're giving it, and it can then predict that thing. That means you feed the computer 100 labeled cat pictures, and afterwards it can decide whether a picture has a cat in it. This includes, e.g., the currently very hip deep learning, but also the not-so-hip random forest (and a ton of other things).
- unsupervised learning, where you don't tell the machine what you're giving it, and it tells you what it sees in the data. That means you feed it 100 cat pictures, and it will tell you that all the pictures contain 2 eyes, 1 mouth, 2 ears, etc. This includes various techniques like PCA, t-SNE, and all kinds of clustering.
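
To make the two branches a bit more concrete, here's a minimal sketch using scikit-learn (my own choice of library, nothing the thread prescribes) on made-up data; the "cat" labels come from an arbitrary rule I invented for the demo:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy "pictures": 200 samples with 10 numeric features each (made up).
X = rng.normal(size=(200, 10))

# Supervised: we also hand over labels (1 = "cat", 0 = "no cat"), here from
# an arbitrary made-up rule, and the model learns to predict them.
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("predicted labels:", clf.predict(X[:5]))

# Unsupervised: no labels at all; the algorithm only describes structure,
# e.g. PCA finds the main directions of variation in the data.
pca = PCA(n_components=2).fit(X)
print("variance explained:", pca.explained_variance_ratio_.round(2))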

In my work we use clustering (sometimes, if the data is complicated) to detect patterns in gene expression data (e.g., over time, X genes go up, Y genes go down, Z genes have a wave-type pattern, etc.).
So this is technically AI, although not what everyone is talking about.
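
To show roughly what that looks like, here's a rough sketch on completely made-up expression profiles (a real pipeline obviously involves normalization, replicates, and so on); it just illustrates clustering genes by the shape of their time course:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 8)  # 8 time points

# Fake expression profiles: 50 genes going up, 50 going down, 50 oscillating.
up = t + rng.normal(0, 0.1, size=(50, 8))
down = (1 - t) + rng.normal(0, 0.1, size=(50, 8))
wave = np.sin(2 * np.pi * t) + rng.normal(0, 0.1, size=(50, 8))
expression = np.vstack([up, down, wave])  # rows = genes, columns = time points

# Cluster the genes by the shape of their time course.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(expression)
for k in range(3):
    print(f"cluster {k} mean profile:", km.cluster_centers_[k].round(2))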

But while I'm at it: the whole differentiation between "weak AI" and "strong AI" is BS made up by politicians. Yes, if a Tesla can recognize other vehicles in its surroundings and react to them, that is "simple" pattern recognition, which they categorize as "weak". But a Terminator-like AI is supposedly "strong", yet it also does nothing but pattern recognition. Hell, humans don't do anything else. If a soldier in a war zone decides whether or not to shoot a person, s/he also detects patterns (old, young, combatant, civilian, wounded, healthy, a threat, unaware) and makes decisions based on those patterns and his/her background information.
Obviously a lot more choices and facts are involved, but on a basic level, there is really no difference.
 
There is also reinforcement learning, usually considered a separate branch. It's used in things like autonomous flight control and in modern chess (and other board game) engines like Leela and AlphaZero.
 
There is also reinforcement learning, usually considered a separate branch. It's used in things like autonomous flight control and in modern chess (and other board game) engines like Leela and AlphaZero.
Yes, you'd maybe also include semi-supervised learning, a popular hybrid of supervised and unsupervised.
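
To show roughly what that hybrid means in practice, here's a minimal sketch with scikit-learn's LabelPropagation on made-up blob data (just one of several semi-supervised approaches, chosen for brevity): a handful of labeled points plus a pile of unlabeled ones.

import numpy as np
from sklearn.datasets import make_blobs
from sklearn.semi_supervised import LabelPropagation

# 200 points in 2 clusters, but we pretend only 6 of them are labeled.
X, y_true = make_blobs(n_samples=200, centers=2, random_state=0)
y = np.full(200, -1)  # -1 marks "unlabeled" for scikit-learn
for c in (0, 1):
    y[np.where(y_true == c)[0][:3]] = c  # keep 3 labeled examples per class

# Labels spread from the few labeled points to similar unlabeled points.
model = LabelPropagation(kernel='knn', n_neighbors=10).fit(X, y)
print("fraction recovered:", (model.transduction_ == y_true).mean())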
 
There's also semi-supervised clustering and other in-between things, but I didn't want to give a full lecture about this ^^ (also because I only have deeper insights into clustering).

There is also reinforcement learning, usually considered a separate branch. It's used in things like autonomous flight control and in modern chess (and other board game) engines like Leela and AlphaZero.

In the end, that is also supervised, isn't it? Because initially you also need to feed the algorithm more or less successful scenarios before the reinforcement can start.
 
In the end, that is also supervised, isn't it? Because initially you also need to feed the algorithm more or less successful scenarios before the reinforcement can start.
You probably know this, but it is "supervised" in some sense, though it's usually given its own category in ML because the idea of using a "reward signal" from an environment to train a model differs a lot from the idea of just using a labeled training set. Beyond that, RL algorithms are considered "active", because they interact with their environments, creating feedback loops (the reinforcing). The usual notion of "supervised learning", on the other hand, is "passive": it just consumes the provided training data. So I think it makes sense to categorize them differently. That said, in the context of your answer above (where it looks like you were just trying to contrast unsupervised algorithms with the rest of ML), it made sense not to treat RL as its own thing.
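
To make the "reward signal plus feedback loop" point concrete, here's a toy tabular Q-learning sketch on a made-up 1-D corridor (nothing like what Leela or AlphaZero do internally): the only training signal is the reward the agent gets by acting in the environment, not a labeled dataset.

import numpy as np

# Tiny corridor: states 0..4, actions 0 = left / 1 = right, reward 1 only for
# reaching state 4. The value table is learned from the reward signal produced
# by interacting with this toy environment, not from labeled examples.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.3
rng = np.random.default_rng(0)

for episode in range(300):
    s = 0
    for step in range(100):  # cap the episode length
        # epsilon-greedy: mostly exploit current estimates, sometimes explore
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # the feedback loop: update the value estimate from this experience
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
        if s == n_states - 1:
            break

print(Q.round(2))  # column 1 ("right") ends up with the higher value in states 0-3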
 
In the end, that is also supervised, isn't it? Because initially you also need to feed the algorithm more or less successful scenarios before the reinforcement can start.
Kind of, but you provide rules to the algorithm instead of labels. It's usually considered the third major branch of ML.
 