Teslas can be programmed to break the law.

Bmblbzzz
Posts: 6311
Joined: 18 May 2012, 7:56pm
Location: From here to there.

Re: Teslas can be programmed to break the law.

Post by Bmblbzzz »

Jdsk wrote: 22 Jan 2022, 9:51am
Bmblbzzz wrote: 13 Jan 2022, 5:30pm
Jdsk wrote: 13 Jan 2022, 12:44pm
They might. But a lot of the AI biases that have been analysed come from underrepresentation of minorities in the training group rather than prejudice inherited from the designers.
Isn't that what Mike was saying? And it's the same with the cars. The cars aren't being taught, "do this", they're learning from observing the habits of existing human drivers. (One effect of this will be to spread American driving habits to other countries.)
Underrepresentation in the training set is a different problem from importing bias from human behaviour. (And of course there's a risk of the latter, see next post.)

And we know that underrepresentation can occur. It's been observed in AI with facial recognition and with forensic use of DNA analysis. (And it also commonly occurs in clinical trials for both the majority sex and ethnic minorities.)

Jonathan
Underrepresentation is often due to human bias: occasionally deliberate prejudice, but more often careless assumptions.
Jdsk
Posts: 24864
Joined: 5 Mar 2019, 5:42pm

Re: Teslas can be programmed to break the law.

Post by Jdsk »

Bmblbzzz wrote: 22 Jan 2022, 3:36pm
Jdsk wrote: 22 Jan 2022, 9:51am
Bmblbzzz wrote: 13 Jan 2022, 5:30pm Isn't that what Mike was saying? And it's the same with the cars. The cars aren't being taught, "do this", they're learning from observing the habits of existing human drivers. (One effect of this will be to spread American driving habits to other countries.)
Underrepresentation in the training set is a different problem from importing bias from human behaviour. (And of course there's a risk of the latter, see next post.)

And we know that underrepresentation can occur. It's been observed in AI with facial recognition and with forensic use of DNA analysis. (And it also commonly occurs in clinical trials for both the majority sex and ethnic minorities.)
Underrepresentation is often due to human bias. Occasionally deliberate prejudice, but more often careless assumptions.
Even if that occurs in recruitment of the training set it's a different problem from bias in the AI processes themselves.

And it can be genuinely difficult to collect enough data from small minorities. (As it often is for the majority sex in clinical trials.)

Jonathan
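The underrepresentation effect discussed above can be made concrete with a toy sketch. This is entirely hypothetical (not from any poster): a simple threshold classifier fitted to pooled data where one group is outnumbered 9:1 ends up tuned to the majority group's distribution, and its error rate on the minority group is correspondingly higher, even though nobody coded any prejudice into it. The group sizes and distributions are invented for illustration.

```python
import random

random.seed(0)

def make_group(n, pos_mean, neg_mean):
    """n positive and n negative (feature, label) examples per group."""
    pos = [(random.gauss(pos_mean, 1.0), 1) for _ in range(n)]
    neg = [(random.gauss(neg_mean, 1.0), 0) for _ in range(n)]
    return pos + neg

# Majority group outnumbers the minority 9:1 in the training set.
# The minority's feature distribution is shifted relative to the majority's.
majority = make_group(90, pos_mean=2.0, neg_mean=-2.0)
minority = make_group(10, pos_mean=3.0, neg_mean=0.0)
train = majority + minority

# "Training": place the decision threshold midway between the pooled
# class means -- which the majority group inevitably dominates.
pos_vals = [x for x, y in train if y == 1]
neg_vals = [x for x, y in train if y == 0]
threshold = (sum(pos_vals) / len(pos_vals) + sum(neg_vals) / len(neg_vals)) / 2

def accuracy(data):
    return sum((x > threshold) == bool(y) for x, y in data) / len(data)

# Fresh test samples drawn from each group's own distribution.
acc_majority = accuracy(make_group(500, 2.0, -2.0))
acc_minority = accuracy(make_group(500, 3.0, 0.0))
print(f"majority accuracy: {acc_majority:.2f}")
print(f"minority accuracy: {acc_minority:.2f}")
```

Run it and the classifier scores well on the majority group but noticeably worse on the minority group, purely because the minority contributed too little data to pull the threshold toward where it needed to be for them.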