Uber car - software problem

Commuting, Day rides, Audax, Incidents, etc.
xjs
Posts: 21
Joined: 29 May 2015, 11:47pm

Re: Uber car - software problem

Postby xjs » 19 Nov 2019, 10:21am

Pete Owens wrote:
kwackers wrote:
Bmblbzzz wrote:It's been stated by the US National Transportation Safety Board.
eg: https://eu.azcentral.com/story/money/bu ... 508011001/

Cheers, that's quite interesting and far more detailed than the previous info I've seen (even if it does lead to an array of 'technical' questions).

The array of technical questions is of secondary importance here.

The critical issue is that the emergency braking systems were deactivated during development on the basis that this task could be performed by a human. The problem is with Uber for operating and developing in an inherently unsafe way, not with the human delegated an impossible task. Typical of corporate entities, they are trying to dump responsibility on the individual for a systematic failure.


I noticed this in the article: "Twenty-five times [since 2016], other drivers rear-ended the Ubers", which seems pretty high to me, and makes me wonder if Uber had a problem with their cars' emergency-braking when it probably wasn't needed. Maybe that's why they decided to turn it off? Or, if they were emergency-braking appropriately, I could imagine a computer-controlled car being better at doing emergency stops than a human, so any cars behind would be more likely to rear-end them.

Bmblbzzz
Posts: 3581
Joined: 18 May 2012, 7:56pm
Location: From here to there.

Re: Uber car - software problem

Postby Bmblbzzz » 19 Nov 2019, 10:47am

I don't know if that's a high figure or not, but Google did adjust their vehicles to make them less cautious at junctions because they were getting rear-ended by human drivers not expecting them to stop. Which seems like adjusting down to your environment rather than training the AI to drive as well as it can.

Bmblbzzz
Posts: 3581
Joined: 18 May 2012, 7:56pm
Location: From here to there.

Re: Uber car - software problem

Postby Bmblbzzz » 19 Nov 2019, 10:52am

Pete Owens wrote:
Bmblbzzz wrote:If the human's tasks had been limited to safety and supervision, they would probably have spotted the woman, realized that the car wasn't reacting to her and braked as a human.

To see why that is simply not possible take a look at the stopping distance chart from the highway code:
Image

At 40mph the stopping distance for a human driver is 36m - but note the first 12m of that is thinking distance - braking only occurs in the final 24m. This means there is no absence of reaction for the human to notice until the car is within 24m of the collision, by which time it is already much too late.

That assumes the reaction (or absence of reaction) from the computer is emergency braking, which it might have been in this case (I can't remember atm the details of the distance at which she was spotted) but it's reasonable to expect anticipatory braking in many cases, the absence of which the human supervisor should note.
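The Highway Code figures quoted above come from its standard rule of thumb (thinking distance in feet equals the speed in mph; braking distance in feet is the speed squared divided by 20). A quick sketch to check them, with the conversion to metres an assumption of mine rather than anything from the posts:

```python
FT_TO_M = 0.3048  # exact feet-to-metres conversion

def stopping_distances_m(speed_mph):
    """Highway Code rule of thumb, converted to metres and rounded."""
    thinking_ft = speed_mph            # thinking distance in feet
    braking_ft = speed_mph ** 2 / 20   # braking distance in feet
    return round(thinking_ft * FT_TO_M), round(braking_ft * FT_TO_M)

thinking, braking = stopping_distances_m(40)
print(thinking, braking, thinking + braking)  # 12 24 36
```

The same rule reproduces the chart's other rows, e.g. 9m + 14m at 30mph.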

Pete Owens
Posts: 1897
Joined: 7 Jul 2008, 12:52am

Re: Uber car - software problem

Postby Pete Owens » 19 Nov 2019, 1:51pm

Well that is the bit that was disabled - thus delegated to the human.

However, if the function of the human is not the impossible emergency stop override, but to override the computer's choice of speed and direction on approach to any potential hazard, then the human will be driving the car, not the computer.

Pete Owens
Posts: 1897
Joined: 7 Jul 2008, 12:52am

Re: Uber car - software problem

Postby Pete Owens » 19 Nov 2019, 2:11pm

xjs wrote:
Pete Owens wrote:
kwackers wrote:Cheers, that's quite interesting and far more detailed than the previous info I've seen (even if it does lead to an array of 'technical' questions).

The array of technical questions is of secondary importance here.

The critical issue is that the emergency braking systems were deactivated during development on the basis that this task could be performed by a human. The problem is with Uber for operating and developing in an inherently unsafe way, not with the human delegated an impossible task. Typical of corporate entities, they are trying to dump responsibility on the individual for a systematic failure.


I noticed this in the article: "Twenty-five times [since 2016], other drivers rear-ended the Ubers", which seems pretty high to me, and makes me wonder if Uber had a problem with their cars' emergency-braking when it probably wasn't needed.

That happens to law-abiding drivers who stop at red traffic lights - not an excuse to RLJ though.

Maybe that's why they decided to turn it off?

Quite possibly that was their reason. But the very fact that they even considered doing it shows they are unfit to develop self driving vehicles.

It would be akin to a cyclist struggling up Hard Knott Pass discovering that their brakes were binding slightly. Rather than adjust them, they disconnect them completely. And then they wonder why they crash on the descent.

Bmblbzzz
Posts: 3581
Joined: 18 May 2012, 7:56pm
Location: From here to there.

Re: Uber car - software problem

Postby Bmblbzzz » 19 Nov 2019, 2:50pm

Pete Owens wrote:Well that is the bit that was disabled - thus delegated to the human.

However, if the function of the human is not the impossible emergency stop override, but to override the computer's choice of speed and direction on approach to any potential hazard, then the human will be driving the car, not the computer.

The emergency braking is the bit that most of all shouldn't be delegated to the human, because it's where the computer has the greatest advantage, in terms of hazard detection through 360 degrees using multiple channels and in reaction time. Having disconnected it, Uber must have been counting on not needing it and/or on the human being alert enough and having enough time to brake manually; or else they just didn't care, which almost amounts to the same thing.

As for direction and speed on approach to hazards, the human wouldn't need to override the computer unless something were wrong. It doesn't follow from that that there's no need for the human to pay attention, just as an airline pilot still needs to monitor the process when landing on automatic. It depends how much confidence Uber have in their systems and/or luck. It seems to me they have been placing a lot of trust in a compliant environment - probably easier to achieve in the air than on the ground.

User avatar
[XAP]Bob
Posts: 17794
Joined: 26 Sep 2008, 4:12pm

Re: Uber car - software problem

Postby [XAP]Bob » 20 Nov 2019, 9:52pm

Bmblbzzz wrote:I don't know if that's a high figure or not, but Google did adjust their vehicles to make them less cautious at junctions because they were getting rear-ended by human drivers not expecting them to stop. Which seems like adjusting down to your environment rather than training the AI to drive as well as it can.

Not necessarily bad if (and only if) it is then cranked back up over the course of a few years...

With more autonomous driving it should be easy to dial things back to where they need to be.
A shortcut has to be a challenge, otherwise it would just be the way. No situation is so dire that panic cannot make it worse.
There are two kinds of people in this world: those who can extrapolate from incomplete data.

User avatar
[XAP]Bob
Posts: 17794
Joined: 26 Sep 2008, 4:12pm

Re: Uber car - software problem

Postby [XAP]Bob » 20 Nov 2019, 9:57pm

Pete Owens wrote:
[XAP]Bob wrote:The HUD concept is actually pretty easy for people to deal with - if something is, over the course of 7 seconds, repeatedly re-identified (and therefore has the box around them change colour) then that’s easy to spot...

This is getting even more absurd.

I thought you wanted the safety driver to be fully concentrating on the road ahead. Now you seem to expect them to be debugging software in real time! This is an order of magnitude more difficult (and slower) and would leave absolutely no time to pay any attention at all to the road.

First of all, this is fundamentally misunderstanding how humans perceive head-up displays. It is often imagined that because the display appears in the same field of view, humans are able to concentrate on both at the same time. This is simply not the case, due to inattentional blindness (the invisible gorilla). If you are paying attention to the display you will simply not notice what is going on in the background. There was a classic case where HUDs were being developed to highlight the line of runways to help airline pilots approach airports. When this was tested in a simulator they did find that the pilots were more accurate in their approaches - until they put a jumbo jet taxiing across the runway, and a significant number of the pilots failed to notice it at all.

The next issue is information overload. At 7 seconds out at 40mph the victim would be 125m away and on the pavement. Even if the driver could see them at all at night from that distance, they would be unlikely to be able to correctly identify them. And there will be a shed load of other objects of greater or lesser significance in the field of view closer than 125m (most of them imperceptible to the human eye). It would probably take more than 7s of concentrated effort to validate all the objects in the first frame of the display. Not that any human driver is going to attach any significance at all to a pedestrian on the pavement on the other side of the road at that distance.

Next, the driver would need to understand that the changing identity of the object was in any way problematic; after all, the system is trying to avoid hitting any object, whatever it is (i.e. they would have to know in advance the specific fault in the software). It is to be expected that as a system gathers more information, its understanding of what an object is and where it is going will change. How on earth would the driver be expected to know that at each reclassification the computer would discard all previous information on the object's trajectory and start from scratch? And remember this is just one of many objects that the computer is monitoring.
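The 125m figure above is easy to verify (a quick sanity check; the 40mph and 7 second values are taken straight from the post):

```python
MPH_TO_MS = 1609.344 / 3600   # metres per second per mph (exact)

speed_ms = 40 * MPH_TO_MS     # ~17.9 m/s
distance = speed_ms * 7       # distance covered in 7 seconds
print(round(distance, 1))     # 125.2
```

So at 7 seconds out the object really is roughly 125m away, as stated.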



No - I’m expecting the rapid colour cycling to be useful in terms of the computer telling the driver what it is seeing.
Ideally you’d have two drivers, one focussed exclusively on the road, one primarily on the computer vision.

I don’t see that a HUD that just displays the simplistic “identified features” boxes would significantly detract from road attention - particularly since attention fatigue is likely to be a significant risk.
A shortcut has to be a challenge, otherwise it would just be the way. No situation is so dire that panic cannot make it worse.
There are two kinds of people in this world: those who can extrapolate from incomplete data.

User avatar
gaz
Posts: 14107
Joined: 9 Mar 2007, 12:09pm
Location: Kent, lorry park of England

Re: Uber car - software problem

Postby gaz » 16 Sep 2020, 3:05pm

He's got Bette Davis knees.

Pete Owens
Posts: 1897
Joined: 7 Jul 2008, 12:52am

Re: Uber car - software problem

Postby Pete Owens » 18 Sep 2020, 6:41pm

The problem was not a lack of sensors - the crossing pedestrian was detected long before she would have been visible to a human driver. The issue was what the software did with that information. And the more serious problem was that the developers were relying on a human operator to be able to override the computer in real time. This is an impossible task, since humans are much slower at processing information and would need to decide to apply the brakes before they were aware that the computer had failed to.
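The timing argument can be put in numbers, using the Highway Code's 40mph figures quoted earlier in the thread (12m thinking distance, 24m braking distance). This is a rough sketch, not an analysis of the actual crash:

```python
MPH_TO_MS = 1609.344 / 3600     # metres per second per mph (exact)

speed = 40 * MPH_TO_MS          # ~17.9 m/s
braking_zone = 24.0             # m: braking must already be under way here
reaction_time = 12.0 / speed    # ~0.67 s implied by the 12 m thinking distance

# The computer's failure to brake only becomes observable once the car
# is inside the braking zone, i.e. this long before impact:
time_to_impact = braking_zone / speed                 # ~1.34 s

# After the human's reaction time, this much road is left of the 24 m needed:
distance_left = braking_zone - reaction_time * speed  # ~12 m

print(round(time_to_impact, 2), round(distance_left, 1))  # 1.34 12.0
```

Even with the Highway Code's optimistic reaction time, the human would have only half the braking distance required by the time they could act, which is the sense in which the override task is impossible.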

User avatar
[XAP]Bob
Posts: 17794
Joined: 26 Sep 2008, 4:12pm

Re: Uber car - software problem

Postby [XAP]Bob » 18 Sep 2020, 7:32pm

And that Uber had actively disabled one of the built-in assistance features, which would have stopped the car.
A shortcut has to be a challenge, otherwise it would just be the way. No situation is so dire that panic cannot make it worse.
There are two kinds of people in this world: those who can extrapolate from incomplete data.