[RD] Self-Driving Uber Car Kills Arizona Pedestrian

So yeah...under most circumstances anti-tank missiles probably wouldn't even notice cars while passing through them, unless they hit the engine or gas tank.
 
I dunno why they would notice the gas tank. I mean, they'd probably ignite the gas as they passed through, but the shell of a gas tank is no closer to armor than the car door is. The only thing in a car with any stopping power at all is the main drive line components.
 
I dunno why they would notice the gas tank. I mean, they'd probably ignite the gas as they passed through, but the shell of a gas tank is no closer to armor than the car door is. The only thing in a car with any stopping power at all is the main drive line components.

Well I was defining the resulting explosion/fire as something that would be noticed but yes, you're right.
 
Image of missile looking over its shoulder at exploding car, grinning, saying "I noticed all right!"
 
mmmm.... a new market for hacker-based contract kills...

Since you can have programming for cruise control, brakes, and door locks now, this is technically already in play. There are probably easier ways for the moment, though.

I would take a ditch* at 65 with my wife and son in the car rather than hit a pedestrian at that speed. We're in a crumple tank and harness and stand a fair chance. The pedestrian has very little. If I remember my numbers approximately, outcomes for buckled/crumpled/airbagged passengers at about 35 mph are pretty solidly good. Outcomes for pedestrians at 35 mph start becoming remarkably poor. I'd consider it unforgivable to do the alternative with intent. I guess the law might not, but yeesh, man, we gotta do better than that. Or are we talking something harebrained, like head-on into oncoming traffic? Or three off a cliff for one?

*Not a tree, not a powerline, ditches around here tend to be mown if not interstate-median-wide

I'd be surprised if a machine could consistently react ideally to each given scenario, but we have pretty decent evidence people can't either. For humans, the gap between noticing and physically doing much of anything useful is pushing half a second; processing which action best fits your utility function, as opposed to simply executing a conditioned response, is pretty implausible in that window.
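To put that half second in perspective, here's a quick back-of-the-envelope sketch (Python; the speeds are just illustrative, not from the thread):

```python
# Distance covered during the driver's reaction time, before any braking
# or steering input happens at all.
MPH_TO_MPS = 0.44704  # metres per second in one mile per hour

def reaction_distance_m(speed_mph: float, reaction_s: float = 0.5) -> float:
    """Metres travelled while the human is still just noticing."""
    return speed_mph * MPH_TO_MPS * reaction_s

for mph in (35, 65):
    print(f"{mph} mph: {reaction_distance_m(mph):.1f} m of blind travel")
# At 65 mph that's roughly 14.5 m gone before the first useful action.
```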

It is quite possible that a self-driving car with serious telemetry would have been able to demonstrate, based on what she actually did, that I/it could have missed her. I actually asked the cop who took the report whether it seemed to him that, if I had just broken to the right immediately, I'd have had room. His response was "if you had, sure as .... she'd have seen you and hit the brakes, and right now you'd be wondering why you did that."

In such a case both you and a theoretical machine have to make a decision with necessarily incomplete information in a fraction of a second. Since the machine's reaction time is physically better, maybe it really can safely swerve behind in a small percentage of scenarios "like this" where you or I couldn't, but generally speaking it would be more on the order of "reduce speed so that the impact, if the person doesn't stop, is minimized".

Anyway, this incident demonstrates that reality isn't nice. I expect that these vehicles will eventually result in a significantly reduced mortality rate, but it'll still be non-zero. If these things demonstrate accident rates many times lower than humans', I'll at least do my best to avoid the delusion that my decision-making in < 500 ms is better than an optimized algorithm that outperforms most of humanity.
 
You can't always buy yourself more than half a second, but if you operate under the assumption that drivers who can make mistakes will make mistakes, and that traction that can be bad will be bad, you can buy yourself a lot more than 0.5 s an awful lot of the time. Then again, it's driving. I'm used to highway speeds in questionable weather. I suppose low-speed, high-density traffic, or rat-maze heavy traffic at high speed, is a little less forgiving in that there simply isn't the space to be defensive as often. If they're replacing wasted idiots in high-density traffic in year-round-sun, high-infrastructure states, I guess w/e. We'll wind up with the solutions whether they fit or not, likely.
 
In such a case both you and a theoretical machine have to make a decision with necessarily incomplete information in a fraction of a second. Since the machine's reaction time is physically better, maybe it really can safely swerve behind in a small percentage of scenarios "like this" where you or I couldn't, but generally speaking it would be more on the order of "reduce speed so that the impact, if the person doesn't stop, is minimized".

Anyway, this incident demonstrates that reality isn't nice. I expect that these vehicles will eventually result in a significantly reduced mortality rate, but it'll still be non-zero. If these things demonstrate accident rates many times lower than humans', I'll at least do my best to avoid the delusion that my decision-making in < 500 ms is better than an optimized algorithm that outperforms most of humanity.

It wouldn't have taken "machine-like reaction time." The reality (which certainly isn't nice) is that if I had KNOWN she was never going to look, I could have gotten around behind her, between the back of her car and the curb, and very probably then gotten back into the intersection before I reached the curb on the other side. But what the other driver is going to do is always an independent variable until ALL the cars are running on the same algorithm.

That's the problem that I am not sure has a solution. Driverless cars alone would certainly be safer than human-piloted cars alone, but in a mix of both there might not be much measurable difference. And if you go "full auto" and eliminate human pilots altogether, there are much safer and more efficient ways to move people around than in individually piloted vehicles.
 
I suppose low-speed, high-density traffic, or rat-maze heavy traffic at high speed, is a little less forgiving in that there simply isn't the space to be defensive as often.

Yes, and incidentally this is why engineering safer roads and sidewalks is actually by far the best solution to the problem of pedestrians being hit by cars. If we wanted to spend the money we could drastically reduce traffic fatalities this way.

And if you go "full auto" and eliminate human pilots altogether, there are much safer and more efficient ways to move people around than in individually piloted vehicles.

Quoting this for emphasis, and because I'd love to eliminate individually-piloted vehicles entirely.
 
Here's how I would decide what to do if the decision has a high chance of injuring pedestrians or passengers. If the risk of injury or death is the same, the number of victims in each outcome is the same, and the severity of injury is the same, then make the decision to save the passengers while sounding the horn to give the pedestrian a chance to react. Otherwise, minimize likely deaths and the number of injuries. This would mean a jaywalker (or cyclist) will be hit before a car with passengers drives off a cliff. However, a car would sooner risk minor injuries for a full car than kill a jaywalker.
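For what it's worth, the rule above could be sketched roughly like this (Python; all field names and the horn stub are invented for illustration, not anyone's real planner):

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    deaths: float           # expected deaths for this maneuver
    injuries: float         # expected number of injuries
    severity: float         # expected injury severity, 0..1
    saves_passengers: bool  # does this maneuver favor the occupants?

def sound_horn() -> None:
    pass  # placeholder for a real actuator call

def choose(outcomes: list[Outcome]) -> Outcome:
    """If every outcome looks equally bad on all three measures, prefer
    the passengers and warn the pedestrian; otherwise minimize expected
    deaths first, then injuries."""
    first = outcomes[0]
    tied = all(o.deaths == first.deaths and o.injuries == first.injuries
               and o.severity == first.severity for o in outcomes)
    if tied:
        sound_horn()
        return next(o for o in outcomes if o.saves_passengers)
    return min(outcomes, key=lambda o: (o.deaths, o.injuries))
```

Note that severity is recorded but never used in the non-tied branch, and real outcomes are almost never exactly tied, so the "minimize deaths, then injuries" line does nearly all the work.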

By the way, I read somewhere that it would only take 20% of autos to be autonomous to reduce traffic accidents by 80%.
 
Here's how I would decide what to do if the decision has a high chance of injuring pedestrians or passengers. If the risk of injury or death is the same, the number of victims in each outcome is the same, and the severity of injury is the same, then make the decision to save the passengers while sounding the horn to give the pedestrian a chance to react. Otherwise, minimize likely deaths and the number of injuries. This would mean a jaywalker (or cyclist) will be hit before a car with passengers drives off a cliff. However, a car would sooner risk minor injuries for a full car than kill a jaywalker.

Are you sure your outcome follows from the stated logic? "Minimize likely deaths AND number of injuries" implies that a higher total number of likely injuries, even "minor" ones, would lead to a "there's only one pedestrian, so tough luck for him" result. What seems to be the ultimate "morality in logic" has to weigh the likely severity of the injuries as well as accounting for the likely total number... and the day your driverless car says "well, your son may have lost a leg, but that's relatively minor, as the pedestrian almost certainly would have died", your sales are going to tank.
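To make the objection concrete: a severity-blind injury count and a severity-weighted total can point in opposite directions (all numbers below are invented for illustration):

```python
# Two hypothetical outcomes: swerve (several likely minor occupant
# injuries) vs. continue (one near-certain fatal pedestrian injury).
# Each entry is (probability, severity) with severity on a 0..1 scale.
swerve = [(0.8, 0.1), (0.8, 0.1), (0.8, 0.1)]  # three minor injuries likely
continue_on = [(0.95, 1.0)]                    # one near-certain fatality

def injury_count(outcome):
    """Expected number of injuries, severity ignored."""
    return sum(p for p, _ in outcome)

def weighted_harm(outcome):
    """Expected harm, weighting each injury by its severity."""
    return sum(p * s for p, s in outcome)

# Counting injuries alone favors hitting the pedestrian...
print(injury_count(swerve) > injury_count(continue_on))    # True
# ...while weighting severity strongly favors the swerve.
print(weighted_harm(swerve) < weighted_harm(continue_on))  # True
```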
 
Wouldn't the development and testing of these programs be much better done in virtual reality than in real life in Arizona?
In a virtual environment you can have a much higher rate of difficult events per car, and it will be cheaper, because PCs are cheaper than cars and human safety drivers.
And yes, it will take some time to build up a big library of events that mirror real-life human-driven cars, but this project will take a very long time anyway.

It would also enable an objective quality standard that program updates must meet before being allowed on real roads, with competitive performance indicators like we have now with crash tests.

What is this Arizona playground, then, other than an interface to the news media and general consensus?
Useful and necessary as that is.
 
By the way, I read somewhere that it would only take 20% of autos to be autonomous to reduce traffic accidents by 80%.

Objection, speculation.

Sustained.

Sounds good, but that falls deeply into "no way to predict" territory.
 
Cars have the kinetic energy, so imo their drivers should have the responsibility to avoid collisions. Actually, society used to work that way before a concerted political campaign by car manufacturers in the early 20th century to shift the US to a car-centric culture where pedestrians would be blamed for existing.
 
personally I think cars are utter garbage and should have been revamped decades ago. the mere fact that we're still using diesel and gasoline to power our vehicles instead of electricity seems insane to me. I only drive when I absolutely have to, which is like once every other year. otherwise I exclusively walk or use public transportation, mostly train.

I have no problem with self-driving cars and I think they could help a lot in reducing the number of accidents.

I think I'm much better than the average driver, and so does everyone else.

flat-out wrong. I know for a fact that I am a horrible driver, and so does my girlfriend, and so do my best friend and my cousin and... anecdotal evidence, I know, but you did say "everyone else". maybe among older generations this idea that "I am the only sane person on the road, everyone else is an idiot" is still prevalent, but within my circle it isn't. almost all the people I know who do drive do so very carefully, rarely exceed the speed limit and are just incredibly safe drivers in general. of course this is Germany; things differ depending on the country.
 
flat-out wrong. I know for a fact that I am a horrible driver, and so does my girlfriend, and so do my best friend and my cousin and... anecdotal evidence, I know, but you did say "everyone else". maybe among older generations this idea that "I am the only sane person on the road, everyone else is an idiot" is still prevalent, but within my circle it isn't. almost all the people I know who do drive do so very carefully, rarely exceed the speed limit and are just incredibly safe drivers in general. of course this is Germany; things differ depending on the country.

I think surveys (of the US; no idea about other countries) consistently show that many more than 50% of drivers consider themselves above average.
 
Yeah, but are you going to think you're better than the latest super-quick AI, though? I don't think most people will, particularly when shown the evidence, which will undoubtedly form a major part of the marketing for such vehicles.

I'll see that professional rally drivers are still better than AI, and I'll assume that so am I.

Oh right... suddenly I'm advocating for a "police state" because I don't agree with you that there is only one "reasonable" choice. Conversation over then I guess, can't be bothered with this :)

A jurisdiction that only allows travel via non-free, closed-source, automated software that doesn't prioritize passenger safety is only achievable via police state levels of enforcement.

You make a good point, but it's not like those legislative changes are impossible to do. They could happen, if it is decided that it is best to do that. And traffic laws will have to be updated anyway before self-driving cars hit the roads everywhere.

Yes, I expect those legislative changes to happen. And I expect them to happen in pretty much the way that the industry wants them.

"Pretty certain" you say. So let's say that carries a 5% chance of death or serious injury to you or an occupant, whereas hitting the pedestrian carries a 95% chance of death or serious injury to the pedestrian. Assuming the car can't see the future, what should it choose in this circumstance? What if it chooses the ditch, but hits some large rock, flips and impales you on a fence post? Do you still think the car made the wrong choice?
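Just to make those numbers concrete, here's a naive expected-harm comparison (treating "death or serious injury" as harm = 1 and ignoring everything else; the weight is made up):

```python
p_harm_occupant = 0.05    # if the car takes the ditch
p_harm_pedestrian = 0.95  # if the car stays the course

# Weighting everyone equally, the ditch minimizes expected harm...
print(p_harm_occupant < p_harm_pedestrian)                    # True

# ...but weight occupant harm just 20x ("prioritize my family")
# and the comparison flips -- the whole argument is over that weight.
occupant_weight = 20
print(occupant_weight * p_harm_occupant < p_harm_pedestrian)  # False
```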

Hmm it's almost like... like it's quite a complicated moral (and practical) maze to navigate and there isn't only one obviously correct choice after all...

Hit the pedestrian, easy. Any more hard questions?

This might be an unrelated philosophical segue, but wouldn't a self-driving car's programming be deterministic to some degree?

If you're implementing machine learning, it could be constantly changing, and it wouldn't be clear which input variables would trigger different effects. And I don't see any obvious problem with having truly random behaviour in many cases.

Wouldn't the development and testing of these programs be much better done in virtual reality than in real life in Arizona?
In a virtual environment you can have a much higher rate of difficult events per car, and it will be cheaper, because PCs are cheaper than cars and human safety drivers.
And yes, it will take some time to build up a big library of events that mirror real-life human-driven cars, but this project will take a very long time anyway.

It would also enable an objective quality standard that program updates must meet before being allowed on real roads, with competitive performance indicators like we have now with crash tests.

What is this Arizona playground, then, other than an interface to the news media and general consensus?
Useful and necessary as that is.

VR graphics aren't relevant to testing the software. You can accomplish just as much with command line tests. You can be sure that autonomous car developers are investing as much as they possibly can into automated software testing.
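As a toy illustration of what headless ("command line") testing of the decision logic can look like, with `plan_brake` as a made-up stand-in for a real planner:

```python
def plan_brake(obstacle_distance_m: float, speed_mps: float,
               max_decel: float = 7.0) -> float:
    """Deceleration (m/s^2) needed to stop before the obstacle, capped
    at the vehicle's braking limit. Toy stand-in, not a real planner."""
    needed = speed_mps ** 2 / (2 * obstacle_distance_m)
    return min(needed, max_decel)

# Replay a logged or synthesized scenario: object detected 40 m ahead
# while travelling 17 m/s (~38 mph). No graphics involved anywhere.
decel = plan_brake(obstacle_distance_m=40, speed_mps=17)
assert decel <= 7.0                # never exceeds the braking limit
assert abs(decel - 3.6125) < 1e-9  # 17^2 / (2*40): stops in time here
```

The point is that the planner consumes a sensor snapshot and emits a command, so thousands of such scenarios can be replayed per second with no rendering at all.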
 
Plenty of people swerve to their deaths to avoid small animals in the road.

Yep... unless of course they have (or know someone who has) some experience with flipping cars trying to avoid animals versus running them down like literal dogs in the street... which I do.

Well, ok, but that's harebrained. :p

Edit: In the interest of desired verbosity - specifically, sacrificing the safety of the passenger in all situations is harebrained.

Sure, but that was the hypothetical being posed, I thought? I understand that it's oversimplifying, because there are nuances and all that... but the question seemed binary. Whose life/safety do you prioritize? Passenger or pedestrian? The response was "Passenger, obviously", which was met with "Nuh-uh! Probabilities! Circumstances! Morality! Split-second decisions! Anecdotes!", all of which were valid points. My point remains that if it's an either/or... like in a sales situation, for example, 'cause that's what we were talking about... you ain't selling no cars telling people that "Well, in a road-hazard situation, the car will conduct billions of probability calculations based on thousands of gigabytes of data and make a split-second determination whether to prioritize your daughter's life or the life of the pedestrian who has jumped in front of her car, and if in the calculation of the computer there is a greater risk of injury to the pedestrian, the computer may make a probability judgment that your daughter's life will have to be risked to avoid harming the pedestrian." You're selling zero cars with that approach. I'd train my designers/salesmen to say "This car will always place your family's safety first and take whatever evasive or corrective maneuvers necessary to maximize your chance of surviving the crash or hazard in the road."
"Pretty certain" you say. So let's say that carries a 5% chance of death or serious injury to you or an occupant, whereas hitting the pedestrian carries a 95% chance of death or serious injury to the pedestrian. Assuming the car can't see the future, what should it choose in this circumstance? What if it chooses the ditch, but hits some large rock, flips and impales you on a fence post? Do you still think the car made the wrong choice?

Hmm it's almost like... like it's quite a complicated moral (and practical) maze to navigate and there isn't only one obviously correct choice after all...

Nah, there's a correct choice. But we disagree, that's cool. You do you. I'm buying the car that prioritizes my family. You buy whatever you want. ;)
 
I'll see that professional rally drivers are still better than AI, and I'll assume that so am I.

Rally driving is a different skill set from regular driving, and perhaps ironically this scenario would have markedly fewer variables for an AI (de-emphasizing its obvious limitations) while emphasizing its strengths (reaction time, precision, ability to calculate the condition of vehicle and road).

Many people think they are better than average; only the most proficient actually know it, and why. But what they think is irrelevant compared to their accident-rate statistics. I prefer to credit reality, and when we start seeing machines consistently outperform humans at driving, it's time to switch, assuming we don't have a better transportation model (multiple posters have pointed out that individual cars going around might not be the best model).
 
VR graphics aren't relevant to testing the software. You can accomplish just as much with command line tests.

You'll have to help me understand that... I am struggling with what command-line tests can do.
The accident Tim described... how do you test that with command-line tests?

You can be sure that autonomous car developers are investing as much as they possibly can into automated software testing.

Yes.
But a common VR testing ground can give us objective standards and performance indicators (like crash tests) that feed into regulations, courts, and consumer quality competition.
Wouldn't the attached consensus on acceptance of some accidents be necessary to get a transformation without hiccups?
 
You'll have to help me understand that... I am struggling with what command-line tests can do.
The accident Tim described... how do you test that with command-line tests?

At least for the OP scenario: did the program detect something and respond improperly, or did it fail to detect something? Presumably these cars can already detect objects in front of them, or we'd expect to see them crashing into people, cars, and especially novel debris on the road often, rather than for the first time in years, even in a controlled environment.
 