tim-b wrote:Hi
Your argument in a nutshell seems to be that because human control doesn't make moral decisions then you oppose computer control because it doesn't make moral decisions.
No, I don't oppose it at all, and I don't think I implied that.
OK, so we agree that moral judgement is irrelevant to the task of controlling a vehicle - in which case, why on earth bring it up?
The only relevant criterion is how good a computer is at controlling a vehicle compared to a human.
My point is that manufacturers (humans) will find it hard to program AI to make moral judgements in situations where a human doesn't make moral judgements.
Or maybe not.
Perhaps if you make it clear whether or not you personally think moral judgements are an important factor in vehicle control systems, then we can focus the debate better.
Essentially manufacturers will be responsible for an intelligence that makes a decision to kill
Of course it won't; it will be programmed to avoid hitting things, which will save lives - not kill people.
We are talking about vehicles - not weapons designs {FFE - family-friendly edit}.
and I don't think that's acceptable to them at a corporate level
Indeed - and since computers will be very much better at not killing people than humans are, failure to install a computer control system in a vehicle will result in killing people and be unacceptable.
I doubt that humans will be capable of concentrating on a task they aren't invested in, such as their vehicle being driven by AI, and they won't retain the necessary situational awareness to react in a timely manner to make decisions,
Certainly the low ability of humans to concentrate is one of the reasons why computers will be much better drivers than humans.
and a car that drives slowly just to protect its manufacturer is neither use nor ornament and you might as well cycle
Certainly I doubt there will be a market for cars that are programmed to drive too fast for the conditions and so make a habit of crashing into things. Avoiding collisions is thus in the interests of the users of cars, the people they could potentially collide with, and thus the manufacturers.
We have low-level AI now for specific functions, e.g. playing chess, and the AI learnt faster than scientists predicted, but driving is a higher level function, which isn't always apparent to judge by some overtaking
The chess analogy is very relevant to the debate over computer control, because computer chess was thought to be impossible for exactly the same reasons people use to argue against computers controlling cars. Because being good at chess was thought to involve "intelligence", the idea of a computer doing it was a challenge to the essence of our being. Of course, as soon as computers became better at it than the best human grandmasters, we realised that all we are talking about is performing a vast number of very simple calculations to evaluate all the possible permutations of a particular move over the next 25 moves. The fact that computers do it better than humans means that you now consider it "low-level".
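That "vast number of very simple calculations" is just minimax search: score every leaf of the game tree and assume each side picks its best move. A minimal sketch, using a toy take-away game rather than chess so it stays short (real engines add alpha-beta pruning, move ordering and evaluation heuristics):

```python
# Minimax on a toy game (illustrative assumption, not a chess engine):
# players alternately take 1 or 2 items from a pile; whoever takes
# the last item wins.

def minimax(pile, maximizing):
    """Return +1 if the maximizing player wins with best play, else -1."""
    if pile == 0:
        # The side to move has nothing left to take: the previous
        # player took the last item, so the side to move has lost.
        return -1 if maximizing else +1
    moves = [m for m in (1, 2) if m <= pile]
    scores = [minimax(pile - m, not maximizing) for m in moves]
    # Each side assumes the other plays perfectly.
    return max(scores) if maximizing else min(scores)

def best_move(pile):
    """Pick the move with the best minimax score for the side to move."""
    return max((m for m in (1, 2) if m <= pile),
               key=lambda m: minimax(pile - m, False))
```

The same brute-force recursion, scaled up to millions of positions per second, is all "chess intelligence" turned out to be.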
Compared to chess, driving is trivial. And we have already automated many of the tasks that human drivers find particularly difficult, such as reverse parking, controlling the fuel mixture and feathering the brakes.
The control options remaining are very simple - steer - accelerate - brake.
It's the ability to learn that is necessary for drivers to develop skills,
And one of the reasons that humans are so bad at it is that we each have to learn those skills individually - and we rarely experience, and thus have the opportunity to learn from, the unusual situations that lead to crashes. We drive the same road every day and learn how fast we can take a corner without losing control - until one day there is an oil patch. We drive past parked cars every day and children don't leap unexpectedly into our path - so we learn to drive too fast to cope when one day they do. We drive at high speed down unlit motorways at night for year after year and learn not to expect stationary objects in our path - until one day there is one.
Due to our limited powers of processing and observation we have to make lots of short-cuts and assumptions in our decision-making. We can only look at one thing at a time - and move our attention about 10 times a second. So our perception of a complete panoramic view is an illusion. We simply haven't evolved the ability to observe or process the amount of data necessary to move at high speed through a complex environment. These things are not intellectually difficult - they just involve a lot of sensors and processing, something that computers are good at.
The computer has time to consider all those what-ifs (what if a child emerges from behind that car, or that car, etc.) rather than, as a human does, simply assume the road will be clear; and when something unexpected does happen, the computer will notice it much sooner (look at the stopping distances in the Highway Code and see how much is accounted for by reaction time).
and for AI it would need to amend its instructions to learn, which is a concern of people such as Stephen Hawking and Elon Musk (Tesla). Who knows where a system that can learn and rewrite its programming might go? I don't think that anyone wants to go down that route
Regards
tim-b
I think you have been reading too much science fiction.