Uber car - software problem

Pete Owens
Posts: 1894
Joined: 7 Jul 2008, 12:52am

Re: Uber car - software problem

Postby Pete Owens » 14 Nov 2019, 5:39pm

[XAP]Bob wrote:Because this is a test vehicle - it is legally required to have a safety driver.

It did.
But if the legislators imagined that any "safety" driver could intervene in real time in an emergency, they are deluded. A safety driver would be able to do things such as bring the car to a stop if it headed the wrong way up a one-way street, or started driving cross-country; not react quicker than a computer to a real-time emergency.
Said driver should have a display (probably a HUD) of identified objects, properly coded... it would have been immediately obvious that the person had not been recognised

This is getting even more absurd - you are imagining playing a game of "spot the difference" on a high-speed video. That kind of task involves System 2 thinking (i.e. slow, conscious and deliberate); "immediately" in these terms means several seconds, longer than it takes from first noticing an object to colliding with it.

Bmblbzzz
Posts: 3492
Joined: 18 May 2012, 7:56pm
Location: From here to there.

Re: Uber car - software problem

Postby Bmblbzzz » 14 Nov 2019, 7:12pm

It seems to me this was not a problem of faulty software but of a mindset. The car failed to react to the woman crossing the road because it had no classification for her, and that was because Uber had programmed it to identify humans only on crossings. It's not an AI problem so much as an Uber problem. As others have pointed out, a Google/Waymo vehicle would probably have identified her as something to be avoided, even if not recognising her as a person, and brought the car to a stop.

[XAP]Bob
Posts: 17730
Joined: 26 Sep 2008, 4:12pm

Re: Uber car - software problem

Postby [XAP]Bob » 16 Nov 2019, 9:43pm

Pete Owens wrote:
[XAP]Bob wrote:Because this is a test vehicle - it is legally required to have a safety driver.

It did.

no it didn’t.

It had a test engineer who had sufficient other duties that she could not be a safety driver, and therefore wasn’t watching the road.
The fact that the scum management try to claim she was a safety driver is irrelevant - she wasn’t a safety driver.
A shortcut has to be a challenge, otherwise it would just be the way. No situation is so dire that panic cannot make it worse.
There are two kinds of people in this world: those who can extrapolate from incomplete data.

[XAP]Bob
Posts: 17730
Joined: 26 Sep 2008, 4:12pm

Re: Uber car - software problem

Postby [XAP]Bob » 16 Nov 2019, 9:44pm

The HUD concept is actually pretty easy for people to deal with - if something is, over the course of 7 seconds, repeatedly re-identified (and therefore has the box around it change colour) then that's easy to spot...
A shortcut has to be a challenge, otherwise it would just be the way. No situation is so dire that panic cannot make it worse.
There are two kinds of people in this world: those who can extrapolate from incomplete data.

kwackers
Posts: 14745
Joined: 4 Jun 2008, 9:29pm
Location: Warrington

Re: Uber car - software problem

Postby kwackers » 17 Nov 2019, 9:24am

Bmblbzzz wrote:It failed to react to the woman crossing the road because it had no classification for her, and that was because Uber had programmed it to only identify humans on crossings.

Do you have any links that suggest this?

Whilst it would be possible for the AI to be taught to recognise humans only on crossings, that would be extremely dodgy and IMO extremely unlikely.
Most of these systems are trained by running bazillions of hours of real footage on which the humans are already identified, repeated until they can recognise them all.
Some scenarios (i.e. this one) slip through the net for whatever reason. The person isn't identified, and in this case the secondary auto-braking system was (I believe) disabled.

It's quite possible that in this scenario the training, for whatever reason, splits the identification into two parts: humans, and humans on crossings. I could imagine people waiting to cross, or on the far side of the road, being treated differently depending on whether they're on a crossing or not (for example, but I claim no actual knowledge of this).
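To make the speculation above concrete, here is a toy sketch (entirely hypothetical, not Uber's actual code or class names) of how a classifier split into "pedestrian" and "pedestrian on crossing" classes could let a mid-block crosser fall through as unclassified, so the planner never treats her as a hazard:

```python
# Hypothetical sketch: a perception pipeline with separate classes for
# pedestrians on and off crossings. An object matching neither rule falls
# through as "unknown", and a planner that only brakes for recognised
# hazard classes then ignores it entirely.

HAZARD_CLASSES = {"pedestrian_on_crossing", "pedestrian", "cyclist", "vehicle"}

def classify(detection):
    """Toy classifier: returns a class label for a detected object."""
    if detection.get("human_shape") and detection.get("on_crossing"):
        return "pedestrian_on_crossing"
    if detection.get("human_shape") and detection.get("on_pavement"):
        return "pedestrian"
    return "unknown"  # e.g. a person pushing a bicycle mid-block

def should_brake(detection):
    return classify(detection) in HAZARD_CLASSES

# A human shape that is neither on a crossing nor on the pavement
# matches no rule and triggers no braking:
print(should_brake({"human_shape": True, "on_crossing": False}))  # False
```

The point of the sketch is only that the failure is in the rule structure, not in any single rule: each branch can be individually "correct" while the gaps between them are lethal.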

The main thing, though, is that the bar for automated cars is obviously much higher than for human drivers - and rightly so.
You'd be lucky to get a conviction for a human driver who hit a pedestrian under the same circumstances, let alone for it to be this newsworthy.

Bmblbzzz
Posts: 3492
Joined: 18 May 2012, 7:56pm
Location: From here to there.

Re: Uber car - software problem

Postby Bmblbzzz » 17 Nov 2019, 9:34am

kwackers wrote:
Bmblbzzz wrote:It failed to react to the woman crossing the road because it had no classification for her, and that was because Uber had programmed it to only identify humans on crossings.

Do you have any links that suggest this?

It's been stated by the US National Transportation Safety Board.
eg: https://eu.azcentral.com/story/money/bu ... 508011001/

kwackers
Posts: 14745
Joined: 4 Jun 2008, 9:29pm
Location: Warrington

Re: Uber car - software problem

Postby kwackers » 17 Nov 2019, 9:55am

Bmblbzzz wrote:It's been stated by the US National Transportation Safety Board.
eg: https://eu.azcentral.com/story/money/bu ... 508011001/

Cheers, that's quite interesting and far more detailed than the previous info I've seen (even if it does lead to an array of 'technical' questions).

Pete Owens
Posts: 1894
Joined: 7 Jul 2008, 12:52am

Re: Uber car - software problem

Postby Pete Owens » 18 Nov 2019, 3:49pm

kwackers wrote:
Bmblbzzz wrote:It's been stated by the US National Transportation Safety Board.
eg: https://eu.azcentral.com/story/money/bu ... 508011001/

Cheers, that's quite interesting and far more detailed than the previous info I've seen (even if it does lead to an array of 'technical' questions).

The array of technical questions is of secondary importance here.

The critical issue is that the emergency braking systems were deactivated during development on the basis that this task could be performed by a human. The problem lies with Uber for operating and developing in an inherently unsafe way, not with the human delegated an impossible task. Typical of corporate entities, they are trying to dump responsibility for a systemic failure onto the individual.

Bmblbzzz
Posts: 3492
Joined: 18 May 2012, 7:56pm
Location: From here to there.

Re: Uber car - software problem

Postby Bmblbzzz » 18 Nov 2019, 4:42pm

If the human's tasks had been limited to safety and supervision, they would probably have spotted the woman, realized that the car wasn't reacting to her, and braked manually. If the car's systems had been fully operational, it would probably have been safe to limit the human's tasks to analysis. But giving the human both tasks at once was not safe. That Uber did this is symptomatic of the same attitude which led them to release the vehicle with the emergency braking nonfunctional and without the ability to detect people crossing at random points. Multiple small pieces of evidence, all pointing to Uber's attitude.

Pete Owens
Posts: 1894
Joined: 7 Jul 2008, 12:52am

Re: Uber car - software problem

Postby Pete Owens » 18 Nov 2019, 5:07pm

[XAP]Bob wrote:The HUD concept is actually pretty easy for people to deal with - if something is, over the course of 7 seconds, repeatedly re-identified (and therefore has the box around it change colour) then that's easy to spot...

This is getting even more absurd.

I thought you wanted the safety driver to be fully concentrating on the road ahead. Now you seem to expect them to be debugging software in real time! This is an order of magnitude more difficult (and slower) and would leave absolutely no time to pay any attention at all to the road.

First of all, this fundamentally misunderstands how humans perceive head-up displays. It is often imagined that, because the display appears in the same field of view, humans are able to concentrate on both at the same time. This is simply not the case, due to inattentional blindness (the invisible gorilla). If you are paying attention to the display you will simply not notice what is going on in the background. There was a classic case where HUDs were being developed to highlight the line of the runway to help airline pilots approach airports. When this was tested in a simulator the pilots were indeed more accurate in their approaches - until the testers put a jumbo jet taxiing across the runway, and a significant number of the pilots failed to notice it at all.

The next issue is information overload. At 7 seconds out at 40mph the victim would be 125m away and on the pavement. Even if the driver could see them at all at night from that distance, they would be unlikely to identify them correctly. And there will be a shed-load of other objects of greater or lesser significance in the field of view closer than 125m (most of them imperceptible to the human eye). It would probably take more than 7s of concentrated effort to validate all the objects in the first frame of the display. Not that any human driver is going to attach any significance at all to a pedestrian on the pavement on the other side of the road at that distance.

Next, the driver would need to understand that the changing identity of the object was in any way problematic; after all, the car is trying to avoid hitting any object, whatever it is (i.e. they would have to know in advance the specific fault in the software). It is to be expected that, as a system gathers more information, its understanding of what an object is and where it is going will change. How on earth would the driver be expected to know that at each reclassification the computer would discard all previous information on the object's trajectory and start from scratch? And remember, this is just one of many objects that the computer is monitoring.
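The reset-on-reclassification failure mode described above can be sketched in a few lines (illustrative names and structure only, not the actual Uber tracker): a tracker that discards its position history on every label change can never accumulate the two samples needed for a velocity estimate, so it never projects a path for the object.

```python
# Sketch of the failure mode: discarding trajectory history whenever the
# classification changes means no velocity estimate ever forms, so the
# object's future path into the vehicle's lane is never predicted.

class Track:
    def __init__(self):
        self.label = None
        self.history = []  # (time, position) samples

    def update(self, t, position, label):
        if label != self.label:   # reclassified: discard the trajectory
            self.history = []
            self.label = label
        self.history.append((t, position))

    def velocity(self):
        """Needs at least two samples since the last reset."""
        if len(self.history) < 2:
            return None
        (t0, p0), (t1, p1) = self.history[0], self.history[-1]
        return (p1 - p0) / (t1 - t0)

track = Track()
# Object re-labelled on every frame: history never exceeds one sample,
# even though it was observed four times while steadily approaching.
for t, label in enumerate(["vehicle", "other", "bicycle", "other"]):
    track.update(t, position=20.0 - 2.0 * t, label=label)
print(track.velocity())  # None - no usable trajectory after 4 observations
```

With a stable label the same four observations would yield a velocity of -2.0 units per frame, which is exactly the information the planner needed.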

Pete Owens
Posts: 1894
Joined: 7 Jul 2008, 12:52am

Re: Uber car - software problem

Postby Pete Owens » 18 Nov 2019, 5:31pm

Bmblbzzz wrote:If the human's tasks had been limited to safety and supervision, they would probably have spotted the woman, realized that the car wasn't reacting to her and braked as a human.

To see why that is simply not possible take a look at the stopping distance chart from the highway code:
[Image: Highway Code typical stopping distances chart]

At 40mph the stopping distance for a human driver is 36m - but note that the first 12m of that is thinking distance; braking only occurs over the final 24m. This means there is no absence of reaction for the human to notice until the car is within 24m of the collision, by which time it is already much too late.
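The Highway Code figures quoted above follow from standard kinematics. The thinking time (~0.67s) and braking deceleration (~6.66 m/s^2) below are back-calculated from the chart, not official constants:

```python
# Reproduce the Highway Code's 40mph figures: thinking distance = v * t,
# braking distance = v^2 / (2a). Assumed values are back-fitted to the chart.

MPH_TO_MS = 0.44704  # exact conversion factor

def stopping_distance(speed_mph, think_s=0.67, decel=6.66):
    v = speed_mph * MPH_TO_MS
    thinking = v * think_s           # distance covered before braking starts
    braking = v * v / (2 * decel)    # distance covered while braking
    return thinking, braking

think, brake = stopping_distance(40)
print(round(think), round(brake))  # 12 24
```

So at 40mph the car covers a third of its total stopping distance before the brakes even touch, which is the window in which a supervising human sees nothing abnormal at all.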

fastpedaller
Posts: 2576
Joined: 10 Jul 2014, 1:12pm
Location: Norfolk

Re: Uber car - software problem

Postby fastpedaller » 18 Nov 2019, 8:29pm

The day that a computer will spot a 'mud on road' sign and slow in anticipation is probably a long way off. Likewise the children near the school who may run into the road. Maybe the AI can anticipate, though? I doubt it.

kwackers
Posts: 14745
Joined: 4 Jun 2008, 9:29pm
Location: Warrington

Re: Uber car - software problem

Postby kwackers » 18 Nov 2019, 9:27pm

fastpedaller wrote:The day that a computer will spot a 'mud on road' sign and slow in anticipation is probably a long way off. Likewise the children near the school who may run into the road. Maybe the AI can anticipate, though? I doubt it.

The day a driver will spot *any* road sign will never come.
OTOH the computer will always be 100% observant, so road signs probably aren't that useful to it.

Entering the area I live there's a huge "20mph zone" sign plus the usual 20mph repeaters.
Every time the police put up a speed trap and nobble a not-insignificant number of drivers, the number of people who claim they didn't know it was a 20, or say it's unfair because the signage isn't good enough, beggars belief.

I honestly don't know why we have road signs because it turns out nobody pays them any attention.

Pete Owens
Posts: 1894
Joined: 7 Jul 2008, 12:52am

Re: Uber car - software problem

Postby Pete Owens » 19 Nov 2019, 12:08am

fastpedaller wrote:The day that a computer will spot a 'mud on road' sign and slow in anticipation is probably a long way off. Likewise the children near the school who may run into the road. Maybe the AI can anticipate, though? I doubt it.

Are these things you have noticed in ape controlled vehicles?

I would expect the computer to have the processing power to track the trajectory of every single child, constantly adjust its speed when approaching closely enough that a child would have time to get into the braking envelope, and take avoiding action within a fraction of a second of a child starting to run into its path. A human driver would simply offer the excuse that the child appeared from nowhere, giving them no time to stop, and the policeman would nod in sympathy and blame the child for not looking.
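The "braking envelope" idea above can be sketched as a simple check (my own framing, with assumed speeds, not any vendor's algorithm): slow down whenever a tracked child could reach the car's path at a point the car can no longer stop short of.

```python
# Braking-envelope sketch: by the time a sprinting child reaches the car's
# path, is the car already closer than its braking distance? If so, it
# must slow down now. Speeds and deceleration are illustrative assumptions.

CHILD_SPRINT = 3.0   # m/s, assumed worst-case child running speed
DECEL = 6.6          # m/s^2, assumed braking deceleration

def must_slow(car_speed, dist_to_child, child_offset):
    """car_speed in m/s; dist_to_child along the road in m;
    child_offset is the child's lateral distance from the car's path in m."""
    stop_dist = car_speed ** 2 / (2 * DECEL)      # car's braking distance
    time_child = child_offset / CHILD_SPRINT      # time to reach the path
    car_travel = car_speed * time_child           # car's travel in that time
    # Danger: when the child reaches the path, the car is inside its own
    # braking distance and can no longer avoid a collision.
    return dist_to_child - car_travel < stop_dist

print(must_slow(13.4, 20.0, 2.0))  # True  - child 20m ahead, 2m off the path
print(must_slow(13.4, 40.0, 2.0))  # False - same child, 40m ahead
```

Running this check every frame for every tracked child is trivial for a computer and impossible for a human, which is the point being made.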

kwackers
Posts: 14745
Joined: 4 Jun 2008, 9:29pm
Location: Warrington

Re: Uber car - software problem

Postby kwackers » 19 Nov 2019, 9:33am

Worth pointing out too that computers are particularly good at spotting signs.
It's probably the easiest thing you can program an AI to do; it's just that, for a lot of reasons, the signs are often pointless to them.

For example, most 'warning' signs are really saying "Wake up, dude! Pay attention!", which for your average AI means nothing, since it's already paying attention.