Tech companies leaving San Fran for Midwest

Yeah, but cars will get there pretty soon. All road conditions can be detected if your sensors are sophisticated enough. You say they can't detect ice and the like, but I'm pretty sure traction control on a car knows it's slipping before the driver does.

The only way a human driver is going to be better than an AI is if the AI algorithm is flawed in some way, such that a human driver reacts differently and more correctly for the situation than the AI's programming does. That will happen sometimes, I'm sure, but the computers should learn from those situations to prevent it from happening next time, while a human driver potentially never learns.

Rather, the flaws in the AI's algorithm have to be less costly than the flaws in the average human's algorithm (how many HUMANS have an immediately available perfect reaction to very unusual scenarios?). I'm going to presume both exist until evidence suggests otherwise.
 
If you claim you can predict everything that happens in chess, I expect that you can shortly reveal yourself to be Magnus Carlsen, or trivially destroy him at it.

If you concede that something unexpected might happen based on non-omniscient experience playing chess, there is no meaningful distinction between it and driving based on your argument.

No matter what situation is involved, the number of things a vehicle can safely do is finite. If the AI makes the choice that gives humans the least risk more consistently than the average human driver, it is outperforming human drivers.

You are missing the point. Everything in chess is predictable in principle. It is trivial for computers because all the necessary information to predict all possible gamestates is always available, which is why computers can easily beat Magnus Carlsen. There is a lot of stuff in driving that is unpredictable in principle.

No matter what situation is involved, the number of things a vehicle can safely do is finite. If the AI makes the choice that gives humans the least risk more consistently than the average human driver, it is outperforming human drivers.

I know that, and perhaps ironically I actually agree with you that self-driving cars will soon be A Thing (and because of the less-risk thing I think they will rapidly price human drivers out of common existence), I just think chess is a misleading and incorrect analogy. AI can play chess very well because everything in chess is predictable in principle and there is no unknown information. That analogy breaks down when you apply it to a game of Civilization where there is fog of war, let alone something like driving.
 
You can probably use discrete increments if they're small enough (faster than the fastest human reaction times, for example). The analog issue is still nontrivial though.

As the discrete increments become smaller the range of the variable effectively grows. Choosing between wheel positions of zero, plus or minus thirty, and plus or minus forty-five is a snap. Using discrete increments of a single degree turns five possibilities into 91, increasing complexity eighteenfold. And that's probably not a small enough increment, and it's a single variable among a whole lot of variables. Introducing multipliers of twenty into a fair number of these variables increases complexity well beyond current bounds. Chess is complex, but at the end of the day there are only sixty-four squares.
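To put rough numbers on that (a quick sketch; the plus-or-minus-45-degree wheel range, the step sizes, and the 20-level throttle/brake counts are all assumptions for illustration):

```python
# Back-of-envelope: discrete control states for a single timestep.
# All ranges and increments here are illustrative assumptions, not vehicle specs.
def steering_states(increment_deg: float, max_deg: float = 45.0) -> int:
    """Count wheel positions from -max_deg to +max_deg at a given step size."""
    return int(round(2 * max_deg / increment_deg)) + 1

for inc in (45, 1, 0.1):
    print(f"{inc:>4} degree steps -> {steering_states(inc)} wheel positions")

# Combine with a couple more discretized controls and it multiplies fast:
throttle_levels, brake_levels = 20, 20
print(steering_states(1) * throttle_levels * brake_levels)  # 36,400 states
```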

Personally, I think chasing AI cars is just the wrong approach out of the gate. Human ineffectiveness in controlling cars doesn't imply that replacing human control is the best answer, because by and large humans are pretty good at the job to begin with, given its incredible complexity. Developing an automated transport system that avoids using cars altogether would be more effective. The Roads Must Roll!!!
 
Chess is complex, but at the end of the day there are only sixty-four squares.

There's something to learn from that though, as the number of theoretically possible legal moves is absolutely enormous, far beyond the calculating ability of any computer we've built. Programmers are constraining those possibilities too.
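For a sense of scale, the usual Shannon-style back-of-envelope (the ~35 legal moves per position and ~80 plies per game are the standard assumed figures):

```python
# Rough Shannon-style estimate of the chess game tree:
# ~35 legal moves per position over ~80 plies.
import math

branching, plies = 35, 80
print(f"~10^{plies * math.log10(branching):.0f} possible game lines")  # ~10^124
```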

Developing an automated transport system that avoids using cars altogether would be more effective.

Quite likely, especially if you ignore switching costs and are building a system from scratch to utilize it.

You are missing the point. Everything in chess is predictable in principle.

For calculations using Newtonian physics, the same is true of driving. Maybe of physics in general, but that debate is still active. Neither can actually be predicted in totality using computers within human lifetimes...

There is a lot of stuff in driving that is unpredictable in principle.

That is an extraordinary claim; I would love to hear about all this stuff that is so unpredictable that predicting it is impossible, even *in principle*!
 
That is an extraordinary claim; I would love to hear about all this stuff that is so unpredictable that predicting it is impossible, even *in principle*!

Serious question - have you ever actually driven a car?
 
Other drivers. <mic drop>

Human choices are "predictable in principle" though, especially within the constraints of driving. We can't actually do it, but then we can't actually predict all move possibilities in chess either. Both require such absurd levels of computational ability that it's not happening. So that's not the real limiting factor; the limiting factor exists, but it is something else.
 
If you are driving a car you cannot predict whether some kid is suddenly going to take it into their head to dart into the road in front of you. You cannot predict if a deer will suddenly run across the road. You cannot predict whether the driver in front of you will suddenly stop.

The whole reason we have speed limits, and they're lower in urban areas with more going on, is to allow you to stop in time to avoid hitting things that are unpredictable. One of the major reasons I anticipate self-driving cars will be superior to human drivers is that they will be far better at doing this and will never speed up because they like going fast.
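The arithmetic behind that is simple enough. A rough sketch (the 1.5 s reaction time and 7 m/s^2 braking figures are assumptions for illustration):

```python
# Total stopping distance = distance covered while reacting + braking distance.
# The reaction time and deceleration used here are illustrative assumptions.
def stopping_distance_m(speed_kmh: float, reaction_s: float = 1.5,
                        decel_ms2: float = 7.0) -> float:
    v = speed_kmh / 3.6                       # convert km/h to m/s
    return v * reaction_s + v ** 2 / (2 * decel_ms2)

for speed in (30, 50, 100):
    print(f"{speed} km/h -> about {stopping_distance_m(speed):.0f} m to stop")
```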
 
Human choices are "predictable in principle" though, especially within the constraints of driving. We can't actually do it, but then we can't actually predict all move possibilities in chess either. Both require such absurd levels of computational ability that it's not happening. So that's not the real limiting factor; the limiting factor exists, but it is something else.

They may be limited in range, but that doesn't make them predictable. For something to be predictable, you have to be able to come up with a predictive system for it.

As I approach an intersection where I have no stop sign, with a car sitting at the stop sign waiting to enter from the right, the only prediction I can make is that if I am too close they will wait for me to pass. The reality is that they might not (trust me, tested this one), but there is no predictive system that can tell me, among the hundreds or thousands of similar situations, in which one they will for whatever reason pull out in front of me.

Since there is no predictive system, the developer of the AI has two choices. Use the demonstrably flawed system that assumes the other car will always "do the right thing" and just proceed at speed, which is a choice with very harsh consequences (again, trust me, tested this one). Or it can approach every such situation frantically calculating corrections based on the acknowledged possibility that the car will pull out. This sidetracks tremendous computational power into a dead end.

There was a time when chess AI could be beaten fairly handily by the simple expedient of opening with a rook's pawn. The move was so obviously bad that the AI had no capacity for dealing with it, because it reasonably predicted that it would never happen and did no analysis along the (dead end) lines presented. As long as there are other cars controlled by humans, AI drivers will have this unpredictability problem coming at them in a repetitive stream. So the real key to AI drivers is that they have to be universally implemented to work...and again, if you are doing away with human drivers completely there are much more efficient ways to have AI-managed transport than using discrete vehicles.
 
:rotfl:
Sorry, but have you been paying attention to technical news? Safer?! All general-purpose processors now come equipped with embedded processors inside, full of vulnerabilities, some of them already shown to be remotely exploitable? That's one innovation. Older than 5 years, but it became pervasive in the meanwhile.
"I was examined by the doctor, he found out I'm sick and started treatment."
Inno: "Oh, it would have been better not to go to the doctor then you would have still been healthy."

The rest of your post is already adequately covered by this (I'll note in passing that the original post said "Name one" and you've conceded HTML5, so I win):
Grumpy old people have been saying this for decades, and they've been wrong for decades.

First of all, you're presenting a Civilization-like vision of discrete technological development where we discovered Flight in 1903 and then progress stood still for a few decades until we discovered Advanced Flight in the 40s/50s. That's not what happened in reality. The same holds for current developments. Computer technology has been getting faster, safer, more efficient and generally better over the last 5 years.

Second, even if you want to look at a Great Man Theory of the history of technology, 5 years to get a useful tool is rather ridiculous. Getting back to Flight: 5 years after the Wright brothers had made their first flight, the Netherlands hadn't even seen its first aircraft. If you'd asked "Name me one big useful tool or technology that came out of there in the last 5 years" in 1908 and someone answered flight, your predecessors would surely nitpick that flight was not useful. The CCD was invented in 1969, the first commercial digital camera was made in 1975 (already more than 5 years!), but it took until the '90s and '00s for the digital camera revolution to happen.

Re: everyone talking about self-driving cars
I'd recommend this article: https://arstechnica.com/cars/2017/1...-a-reality-in-2017-and-hardly-anyone-noticed/

I also don't think the comparison with chess is particularly useful. Computers beat humans in chess two decades ago, which is ages in terms of computer development. And that was beating the best human in the world; a self-driving car doesn't have to be the best driver in the world to be a huge improvement on the roads.
 
I apologize for using the term “explosive growth” when the truth is that population in the Great Lakes region had been declining until recently and is currently experiencing modest growth. Then again, a change from decline to modest growth implies a large second derivative of population. Also, while there are good universities in California that draw people from around the world, the public schools (K-12 and community college) in San Francisco are underperforming.

Here is an article, though I previously read an older but more in-depth piece in a magazine a while ago.

https://www.mercurynews.com/2017/01/05/california-schools-earn-c-in-national-ranking/
 
As I approach an intersection where I have no stop sign, with a car sitting at the stop sign waiting to enter from the right, the only prediction I can make is that if I am too close they will wait for me to pass. The reality is that they might not (trust me, tested this one), but there is no predictive system that can tell me, among the hundreds or thousands of similar situations, in which one they will for whatever reason pull out in front of me.

It's not a perfect predictive system, but neither is what happens in the algorithms human beings use for chess. You don't know whether that car is going to pull in front of you, or if it does, when. It's still constrained in possibility space. The car won't start flying, roll sideways on a flat surface absent something striking it, or morph into an elephant and roll towards you at a 45 degree angle. It might or might not pull in front of you, but apparently this is something you anticipate as a possibility after all, while flight and elephant transformations are not reasonable to bother modeling.

Since there is no predictive system, the developer of the AI has two choices. Use the demonstrably flawed system that assumes the other car will always "do the right thing" and just proceed at speed, which is a choice with very harsh consequences (again, trust me, tested this one). Or it can approach every such situation frantically calculating corrections based on the acknowledged possibility that the car will pull out. This sidetracks tremendous computational power into a dead end.

If all vehicles are self-driving cars you can get away with something close to the first option. If not, you can constrain its computations and force it to somewhat reduce speed, reacting to motion by the stopped vehicle (if the driver in that vehicle really wants to pull out at an awful time, few people behind the wheel could react even knowing it's possible).
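One way to picture that speed hedge (purely my sketch, with assumed latency and braking figures, not anyone's actual implementation): cap the approach speed so the car could still stop short of the intersection if the waiting car does pull out.

```python
# Largest approach speed v such that v*reaction + v^2/(2*decel) <= distance.
# The 0.1 s control latency and 7 m/s^2 deceleration are assumed numbers.
import math

def max_safe_approach_ms(dist_m: float, reaction_s: float = 0.1,
                         decel_ms2: float = 7.0) -> float:
    a, b, c = 1 / (2 * decel_ms2), reaction_s, -dist_m
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

print(f"{max_safe_approach_ms(40.0) * 3.6:.0f} km/h")  # ~83 km/h at 40 m out
```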

When driving, you don't have that many options. You can (maybe) shift gears, accelerate to varying degrees, use the brakes to varying degrees, turn the wheel to varying degrees, and use support systems like signals/headlights (AI should basically always signal per training and use headlights appropriately though). What does a human do in this situation? Varies by human, with a significant subset assuming the other car will do the right thing...or not paying attention and acting that way without considering it at all. AI can hedge this situation by reducing speed under uncertainty, and has the benefit of reacting much faster than humans to brake or turn the steering wheel (by an amount that won't flip the vehicle or cause a loss of control, contingent on the other vehicles present, something it can compute in a timeframe humans can't).
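In other words, the whole decision at each tick reduces to picking the least-risk option from that short menu (a toy sketch; the action names and risk numbers are invented, and a real system would estimate them from sensor input):

```python
# Toy version of "make the choice that gives the least risk":
# enumerate the few feasible actions and take the minimum-risk one.
# These risk estimates are invented for illustration.
actions = {
    "maintain speed": 0.020,
    "ease off and cover the brake": 0.004,
    "brake moderately": 0.006,
    "swerve left": 0.050,
}
print(min(actions, key=actions.get))  # -> "ease off and cover the brake"
```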

There was a time when chess AI could be beaten fairly handily by the simple expedient of opening with a rook's pawn. The move was so obviously bad that the AI had no capacity for dealing with it, because it reasonably predicted that it would never happen and did no analysis along the (dead end) lines presented.

Yes, you do want to test your AI past human capability before rolling this out. Even by the early '90s, commercially available chess programs had no trouble with the rook's pawn opening and would just play the d or e pawns and trade material if the rook was still advanced.

One of the major reasons I anticipate self-driving cars will be superior to human drivers is that they will be far better at doing this and will never speed up because they like going fast.

They will also not "feel surprise" at such events, nor will they on average have a 250ms delay just to acknowledge something is happening (the average time it takes a human to click on a dot that appears on a screen)...with likely double that time again before inputs of any kind are reliably applied to the vehicle...and the AI can consistently be precise within the vehicle's capabilities.
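That delay translates directly into distance. A quick sketch using the ~750 ms human figure above (250 ms to register plus roughly double that again to act) against an assumed 50 ms machine control loop:

```python
# Distance travelled before any input reaches the controls, at 100 km/h.
# The 50 ms machine latency is an assumption; the 750 ms human figure is
# the 250 ms acknowledgement time plus ~500 ms to apply an input, as above.
speed_ms = 100 / 3.6
for label, delay_s in (("human", 0.75), ("machine (assumed)", 0.05)):
    print(f"{label}: {speed_ms * delay_s:.1f} m before reacting")
```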

As you say, the anticipation of things that require sudden reaction would be a bigger challenge. At the same time, you're trying to illustrate the unpredictable nature of something by *actually predicting* events with known occurrences in the past. These are not good examples, since they're actually anticipated and have enormously different probabilities given visual inputs! You're not going to get deer or children suddenly darting onto the road while driving on a desert road in Nevada with no obstacles in sight. Even as a heuristic, speed can be reduced as obstructions sit closer to the road.

There are also ways to predict that someone in front of you will stop, but an AI doesn't need this. Unlike a large portion of human drivers, an AI programmed to use x following distance at y speed will actually use x following distance, and again it should trivially outperform humans against sudden stoppages. I'd be more concerned about odd scenarios like water running over a road or "what to do if a tornado is coming up on you", because these don't have obvious, consistent safe driving algorithms to avoid issues with them.
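That following-distance rule is trivial to state exactly, which is the point: an AI can hold it precisely where humans drift. A sketch using the common two-second rule (the choice of rule is my assumption):

```python
# "x following distance at y speed" via the common two-second rule.
def following_gap_m(speed_kmh: float, gap_s: float = 2.0) -> float:
    return speed_kmh / 3.6 * gap_s

for speed in (50, 80, 120):
    print(f"{speed} km/h -> hold a {following_gap_m(speed):.0f} m gap")
```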
 
I also don't think the comparison with chess is particularly useful.

The comparison to chess has been long since dispatched, since there is basically no comparison between playing chess and driving a car...whether it is a human or a computer trying to do it. It's like saying that since my calculator can solve math problems it should be great at picking flowers.
 
The comparison to chess has been long since dispatched, since there is basically no comparison between playing chess and driving a car...whether it is a human or a computer trying to do it. It's like saying that since my calculator can solve math problems it should be great at picking flowers.
That's because calculators are lazy. Stupid lazy calculator, sat there for like 3 hours in front of my garden and did absolutely nothing.
 
The comparison to chess has been long since dispatched, since there is basically no comparison between playing chess and driving a car...whether it is a human or a computer trying to do it. It's like saying that since my calculator can solve math problems it should be great at picking flowers.

Not quite so extreme (in both chess and driving you would need a human-created algorithm to constrain computations to something a machine can handle), but yes, it's overstayed its welcome in terms of relevance...though it was amusing to get examples of scenarios that were actively predicted described as "unpredictable".
 
When driving, you don't have that many options. You can (maybe) shift gears, accelerate to varying degrees, use the brakes to varying degrees, turn the wheel to varying degrees, and use support systems like signals/headlights (AI should basically always signal per training and use headlights appropriately though).

This reverses the issue of the decision tree. Complexity isn't about the limitations on the outcomes; it is about the ranges and numbers of inputs.

Back to the car that "can pull out at any time": not at discrete intervals, but actually at any time. So it has to be continuously accounted for in our algorithm. When constraining computations to something a machine can handle there is a strong temptation to drop that accounting, treating the pull-out as not much more likely than the waiting car turning into an elephant; but while the car definitely can't turn into an elephant, the possibility of it pulling out is only negligibly probable, not impossible.

So every car within range has to be assessed continuously, unless we want to presume that they will all just continue "doing the right thing." Continuous monitoring of all those variables and preparation for all the improbable things that any one of them might do will exceed computing power limitations, but ignoring the negligibly improbable runs afoul of the sheer number of trials, because if you drive long enough, no matter how improbable something is, it inevitably happens eventually.
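That tension is easy to quantify: prune anything below a per-encounter probability threshold, and the pruned event still becomes near-certain over enough encounters (numbers here are made up for illustration):

```python
# Even a one-in-a-million-per-encounter event is ~86% certain to occur
# somewhere across two million encounters. Figures are illustrative.
p, encounters = 1e-6, 2_000_000
print(f"{1 - (1 - p) ** encounters:.0%} chance of at least one occurrence")
```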

Ultimately though, the real problem is this...

When the car pulled out in front of me I hit it. They were deemed to be responsible and agreed that they were, and in a mostly amiable resolution their insurance company paid for all damages to them and eventually got around to paying for all damages to me. A very finely tuned AI put in my place...would almost certainly have hit them just the same, unless it was programmed to stop when it didn't have a stop sign 'just in case.' Now, how confident are you that the AI, lacking the ability to be an eyewitness, would have been found to be not responsible as easily and amiably as I was?
 
When constraining computations to something a machine can handle there is a strong temptation to drop that accounting, treating the pull-out as not much more likely than the waiting car turning into an elephant; but while the car definitely can't turn into an elephant, the possibility of it pulling out is only negligibly probable, not impossible.

This is correct of course, but what's the anticipated accident rate once you're acting within the bounds of "negligibly probable"? If we actually got the accident rate that low it would be an improvement on the present rate by a wide margin.

As you and I have both said, it doesn't need a zero accident rate; it needs to outperform humans. I strongly doubt most humans even consider the negligibly probable outcomes, let alone consider them and have the capacity to react usefully should they occur.

A very finely tuned AI put in my place...would almost certainly have hit them just the same, unless it was programmed to stop when it didn't have a stop sign 'just in case.' Now, how confident are you that the AI, lacking the ability to be an eyewitness, would have been found to be not responsible as easily and amiably as I was?

Why should you have less evidence regarding the AI driving than when you were? Unless it was damaged, there's a good chance you'd have more. It's going to need those variables you mentioned earlier just to function...I'd estimate they could double as evidence with *at least* as much reliability as a human eyewitness. Sure, it can break in an accident...but so can a human.
 
Why should you have less evidence regarding the AI driving than when you were? Unless it was damaged, there's a good chance you'd have more. It's going to need those variables you mentioned earlier just to function...I'd estimate they could double as evidence with *at least* as much reliability as a human eyewitness. Sure, it can break in an accident...but so can a human.

Ah, but it isn't about having evidence, it's about presenting it...unless you want to shift the conversation to AI lawyers.
 
So, you're telling me that not everyone wants to pay $4000 a month to live in a cardboard box. Interesting indeed....

Well, if someone is willing to pay me enough that I can afford the $4000 a month and the box is in a really nice place I don't see the problem, truthfully.
 