The AI Thread

Instead, I think there are applications that should be regulated, and how heavily they should be regulated would depend on how dangerous the application is. I do not see how an AI for playing Go could hurt anything but human pride. A self-driving car, however, endangers human life and should be regulated, no matter whether it is a fancy AI or a hand-crafted conventional decision tree.

The industry should be regulated (in relation to the AI), not the AI itself. If it's a question of the nuclear energy, weapons manufacturing, or self-driving car industries, then there should be oversight. If it's deep learning software for a public toilet chain, then, obviously, the AI application isn't particularly dangerous and we can sleep in peace.
 
Are there any groups that are currently working on developing strong AI? As in AI that is at least as generally intelligent as a human rather than trained to do a specific task excellently.
 
Are there any groups that are currently working on developing strong AI? As in AI that is at least as generally intelligent as a human rather than trained to do a specific task excellently.

Short answer: no. Long answer: nooo.
 
Certainly more so than most science journalists. Here's the piece he links to, which is also worth reading.
 
Are there any groups that are currently working on developing strong AI? As in AI that is at least as generally intelligent as a human rather than trained to do a specific task excellently.
Sort of. A lot of organizations/industry labs claim they do, but it's usually debatable. Some examples are the Elon-cofounded OpenAI and the Machine Intelligence Research Institute (MIRI). Additionally, Google Brain, Facebook AI Research (FAIR), Microsoft Research (MSR), IBM Research, and a lot of others all claim, to some extent or another, that they do "strong AI," or at least push in that direction. But I think it depends on how you look at it and what you count as "developing strong AI." Currently, there's very little transferability between tasks. Like you say, most ML stuff is extremely specialized to specific tasks. But there are a lot of people working on "multi-task learning," "transfer learning," and so on that try to remedy the specialization issues, or even just on making specialized systems "smarter" and more like what we think of when we think of "strong AI."
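
To make "multi-task learning" a bit more concrete, here's a minimal sketch of the simplest version, hard parameter sharing: one shared trunk, one small head per task, trained jointly. This is a generic toy in PyTorch, not any particular lab's architecture, and all the names and dimensions are made up:

```python
# Hard parameter sharing: the trunk learns a representation shared
# across tasks; each head specializes. (Toy illustration only.)
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, in_dim=32, hidden=64, n_classes_a=10, n_classes_b=3):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head_a = nn.Linear(hidden, n_classes_a)  # task A's classifier
        self.head_b = nn.Linear(hidden, n_classes_b)  # task B's classifier

    def forward(self, x):
        z = self.trunk(x)  # shared features used by both tasks
        return self.head_a(z), self.head_b(z)

model = MultiTaskNet()
logits_a, logits_b = model(torch.randn(8, 32))
# Training sums the per-task losses so gradients from both tasks
# shape the shared trunk:
#   loss = ce(logits_a, y_a) + ce(logits_b, y_b)
```

The hope is that whatever the trunk learns for one task transfers to the other; how well that works in practice is exactly the open question.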

To get a sense of this, maybe it helps to just take a look at something like FAIR's "ParlAI" framework and their "bAbI" project, which try to make dialog systems more logical: you give the system a bunch of facts, then ask questions and hope your AI properly uses the facts to answer. For example:

Facts:
John picked up the apple.
The office is north of the bedroom.
John went to the office.
The bedroom is north of the bathroom.
John went to the kitchen.
The kitchen is west of the garden.
John dropped the apple.

Question: What is north of the bedroom?
Answer: office

Question: Where was the apple before the kitchen?
Answer: office

Does this count as FAIR "developing strong AI"? I dunno... but it's impressive.
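
If you want a feel for what's under the hood, here's a toy, hand-coded Python tracker for the example above. To be clear, this is not how FAIR does it (the whole point of bAbI is to *learn* this behavior from data, e.g. with memory networks); it's just a rule-based version of the same task, with hand-written patterns for the three event types, and it ignores the geography facts:

```python
import re

def track(facts):
    """Track object locations implied by simple pick-up/move/drop events."""
    actor_loc = {}   # person -> current location
    carried_by = {}  # object -> person carrying it
    history = {}     # object -> locations the object has been in, in order

    for fact in facts:
        if m := re.match(r"(\w+) picked up the (\w+)\.", fact):
            person, obj = m.groups()
            carried_by[obj] = person
            if person in actor_loc:
                history.setdefault(obj, []).append(actor_loc[person])
        elif m := re.match(r"(\w+) went to the (\w+)\.", fact):
            person, place = m.groups()
            actor_loc[person] = place
            for obj, carrier in carried_by.items():
                if carrier == person:  # carried objects move with the carrier
                    history.setdefault(obj, []).append(place)
        elif m := re.match(r"(\w+) dropped the (\w+)\.", fact):
            carried_by.pop(m.group(2), None)
    return history

facts = [
    "John picked up the apple.",
    "The office is north of the bedroom.",
    "John went to the office.",
    "The bedroom is north of the bathroom.",
    "John went to the kitchen.",
    "The kitchen is west of the garden.",
    "John dropped the apple.",
]

# "Where was the apple before the kitchen?"
locs = track(facts)["apple"]            # ['office', 'kitchen']
print(locs[locs.index("kitchen") - 1])  # -> office
```

Writing rules like this for every kind of fact obviously doesn't scale, which is why the learned versions are the interesting part.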

Finally, there are a lot of research groups at many universities that try to push towards "strong AI." Though the same caveats apply.

On the other hand, I sometimes see companies, usually startups, claiming they work on "strong AI" or "artificial general intelligence," but when you dig deeper, it turns out they don't at all and are totally full of crap. I was checking out a startup a few days ago that claimed to do strong AI research, and I was quite puzzled about what they even did to make money... it turned out they just sell a really hacky, complicated search engine optimization tool to trick Google into ranking pages higher on the search results page.
 
On the other hand, I sometimes see companies, usually startups, claiming they work on "strong AI" or "artificial general intelligence," but when you dig deeper, it turns out they don't at all and are totally full of crap. I was checking out a startup a few days ago that claimed to do strong AI research, and I was quite puzzled about what they even did to make money... it turned out they just sell a really hacky, complicated search engine optimization tool to trick Google into ranking pages higher on the search results page.

Usually the way for these companies to make money is to sell some kind of "AI" to upper-level management of big companies who lack the tools to expose the fraud.
 
Is Locklin on science reputable?
For what it's worth, I thought it was a good blog post, and I agree with @Mouthwash that it's much better than most AI journalism. But I disagree with his claim that there haven't been any "big conceptual breakthroughs" since the 90s. To me, that's wrong, and I don't think many ML people would agree either. But it's an appealing assertion for people with a certain level of cynicism or a "there's nothing new under the sun" view of ML. And I respect the attitude, since it's a valuable counterweight to all the hype.

I liked his shout-out to community detection algorithms. I think those are a very important development. I’d even call it a “big conceptual breakthrough,” though apparently he’d instead say (a) it’s very important and (b) it’s not a big conceptual breakthrough. Which seems silly and incorrect to me. But whatever.
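
For anyone who wants to poke at community detection, networkx ships several of the standard algorithms. A minimal example using greedy modularity maximization on the classic karate club graph (just one method of many, and not necessarily the ones Locklin had in mind):

```python
# Partition a small social network into communities by greedily
# maximizing modularity (Clauset-Newman-Moore).
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()  # Zachary's karate club: 34 members, 78 friendships
communities = greedy_modularity_communities(G)
for i, members in enumerate(communities):
    print(f"community {i}: {sorted(members)}")
```

On this graph it finds a handful of groups that line up well with the club's famous real-world split into two factions, from the friendship edges alone.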
 
How far off are self-driving cars?

Do you all think LIDAR is necessary for self-driving, or do you think ~normal cameras and AI software can handle the challenges of the road? I think Elon's betting that if the human eye and brain can get by without LIDAR, then cars should be able to as well. I just do not know how amenable visual data is to pattern recognition and mapping, relative to LIDAR.

Like if LIDAR costs 1,000x more to implement than video cameras but you get 10,000x more accuracy or require 10,000x less computing power to process the data it produces, the trade may be worth it.
 
How far off are self-driving cars?

Do you all think LIDAR is necessary for self-driving, or do you think ~normal cameras and AI software can handle the challenges of the road? I think Elon's betting that if the human eye and brain can get by without LIDAR, then cars should be able to as well. I just do not know how amenable visual data is to pattern recognition and mapping, relative to LIDAR.

Like if LIDAR costs 1,000x more to implement than video cameras but you get 10,000x more accuracy or require 10,000x less computing power to process the data it produces, the trade may be worth it.
Anyone's guess... progress seems to have slowed down a lot recently, companies like Waymo have seen their valuations drop a lot, and that space is a bit disappointing relative to the expectation that we'd have consumer-ready AVs by 2020. As far as I know, the best self-driving features are still highway autopilots, like the Tesla Model S autopilot. Modeling and predicting highway conditions is mostly doable (and has been somewhat doable algorithmically for around 30 years), but maybe current AI paradigms can't handle the full complexity of city driving?

I don't know about the LIDAR vs. camera input issue. My impression is that the biggest issues currently are largely algorithmic: regardless of the sensors, there are too many conditions that the vehicle doesn't know how to handle. It could be because the condition was never seen in training, because the condition is too rare for the vehicle's learning algorithms to distinguish from statistical noise, because the condition is too complicated for existing algorithms to model effectively, or because the billions of hours of simulation data Waymo and friends use for training aren't working well. But I could be wrong, and maybe more of the challenges are on the sensor and signal processing side. Weather conditions, for example, hence all the testing in Arizona, as you probably know.
 
How far off are self-driving cars?

Mmmhh.... 20-30 years maybe.
In contrast to drones and blockchain (or whatever else is hip), they will IMHO definitely come and have an impact on everyone's life.

Regarding LIDAR, someone said that with LIDAR you can detect things which you might not be able to classify visually. So it seems to be a biiiiig safety feature.
 
How far off are self-driving cars?

Do you all think LIDAR is necessary for self-driving, or do you think ~normal cameras and AI software can handle the challenges of the road? I think Elon's betting that if the human eye and brain can get by without LIDAR, then cars should be able to as well. I just do not know how amenable visual data is to pattern recognition and mapping, relative to LIDAR.

Like if LIDAR costs 1,000x more to implement than video cameras but you get 10,000x more accuracy or require 10,000x less computing power to process the data it produces, the trade may be worth it.

If both are mass produced on the same scale, the cost of a LIDAR sensor and a camera should be roughly similar. However, you are going to need the camera anyway (at least for road signs, traffic lights, etc.) so LIDAR will always be an additional cost.

Whether or not LIDAR is necessary depends on what error rate would be acceptable. Relying on cameras will work most of the time, but there will certainly be cases where it will not. Humans will also sometimes not see something, so if our only requirement is that self-driving cars are as safe as human drivers, a camera-only system might be sufficient. However, if our requirements are much stricter, then a second sensor system like LIDAR will be required at some point.
 
If both are mass produced on the same scale, the cost of a LIDAR sensor and a camera should be roughly similar.
I don't believe this. LIDAR is inherently more complicated, I think, and requires moving parts. That alone means it won't ever be as cheap as cameras.
However, if our requirements are much stricter, then a second sensor system like LIDAR will be required at some point.
You don't have to detect everything to be much safer; you just need the system to react appropriately in sub-optimal conditions. I do agree that a LIDAR-equipped car can be made safer more easily, and I can even agree that a LIDAR-equipped car is safer by default. But I don't agree that it's necessarily a requirement for a much safer self-driving car.
 
I don't believe this. LIDAR is inherently more complicated, I think, and requires moving parts. That alone means it won't ever be as cheap as cameras.

It is possible to build a LIDAR in solid state without any moving parts. And the optics are inherently simpler than for cameras. If it were required for millions of cars, the production cost for LIDAR would plummet.

You don't have to detect everything to be much safer; you just need the system to react appropriately in sub-optimal conditions. I do agree that a LIDAR-equipped car can be made safer more easily, and I can even agree that a LIDAR-equipped car is safer by default. But I don't agree that it's necessarily a requirement for a much safer self-driving car.

Let me reformulate: the question is whether society will accept deaths which could have been avoided with LIDAR.
 
Let me reformulate: the question is whether society will accept deaths which could have been avoided with LIDAR.
This is impossible to prove I think.

Maybe solid-state LIDAR will scale, but we're seeing issues with electrically steered flat panels not scaling, so it's not a given IMO.
 
This is impossible to prove I think.

Depends on what you accept as proof, but in my opinion those fatal crashes where a Tesla did not see a trailer and hit it could have been prevented, or at least mitigated, with a LIDAR system.
 
How will self-driving cars handle the Trolley Problem?
https://medium.com/josh-cowls/ai-and-the-trolley-problem-problem-ef48582b49bf
There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options:

Do nothing, and the trolley kills the five people on the main track.

Pull the lever, diverting the trolley onto the side track where it will kill one person.

Which is the most ethical choice?

The problem has been extended to include a related case in which, instead of flipping a switch, the only way to save the five people is to push a nearby man off a bridge (since he is large enough to stop the trolley in its tracks). Research suggests that this alternative scenario causes many people to change their mind: many people are comfortable flipping the switch but not shoving the man. Though this implies a moral distinction between these two acts (despite their identical effect: sacrificing one life to save five), research suggests that the divergent attitudes ultimately result from a neurological distinction, as different parts of subjects’ brains were observed as controlling the decisions in the different cases.


I first encountered this problem watching The Good Place.

Heh, I see CFC:OT already addressed this in the past.
https://forums.civfanatics.com/threads/random-rants-80-computer-says-no.649399/page-10#post-15542809

Indeed, scientists at MIT’s Media Lab have launched an impressive attempt to experiment with just this question. The Moral Machine platform invites users to judge a series of hypothetical scenarios, making difficult decisions about the direction an out-of-control car should swerve. After answering a series of questions, the survey will rank a user’s implied “preferences” with almost disturbing granularity, in terms of gender, age, wealth, health, and much else.

How unsettling.
 

Slam the brakes and hope for the best. The trolley problem is a constructed problem and you will almost never encounter clear-cut examples of it in real life. So trying to implement behavior for this would be foolish, because you have very little data and it would be extremely hard to test. It's also risky, because you might have to argue counterfactuals in court (i.e. what would have happened if the car did not run over this person on purpose). The safe option would be to reduce speed as much as possible to minimize damage if there is no clear path available. I don't think anybody would take issue with that.
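
In code, that's just a fallback rule rather than an ethics module. A hypothetical sketch (the interface and the threshold are made up; this is nobody's actual planner):

```python
# Hypothetical minimum-risk fallback: if no planned trajectory is
# judged safe, don't pick a "least bad" victim; shed speed instead.
def choose_action(candidate_paths, min_confidence=0.99):
    """candidate_paths: list of (trajectory, safety_confidence) pairs."""
    safe = [p for p in candidate_paths if p[1] >= min_confidence]
    if safe:
        # A clear path exists: take the one the planner is most sure of.
        return max(safe, key=lambda p: p[1])[0]
    # No clear path: maximum braking minimizes kinetic energy at impact,
    # and there are no counterfactuals to argue about in court.
    return "EMERGENCY_BRAKE"
```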
 