Stevek76 wrote: ↑22 Jan 2024, 1:36pm
Haha, when the underlying model is trained on a large pool of data which will inevitably include rants off reddit or something about delivery firms including dpd!
A huge amount of the risk around LLMs and the like is a standard garbage-in, garbage-out problem, just like any other automation before it.
Cugel wrote: ↑21 Jan 2024, 2:33pm
why do you think such gestures have any utility at all in this issue?
Well, it's not as if other regulatory standards, e.g. on H&S or data protection, haven't had utility; this is much the same. Much like data protection, it won't stop people operating outside the regulatory environment, but it can provide a regulated market which major 'reputable' companies will be forced to operate within.
"Reputable companies"? Is there such a thing? That's surely an oxymoron - unless you mean, "Reputed to be a gang of greedy hooligans who'll destroy, risk and damage anything to make 3 extra groats on the bottom line".
NB Many already employ AI to make decisions for them, where neither the CEO nor anyone else has the faintest idea of the processes and parameters involved.
PS When various interactive processes with various actions are combined, permed or otherwise allowed to form a larger whole, the resulting processes and actions of that whole inevitably contain rather more decisions and consequent actions than the mere sum of the participating parts would add up to. The emergent whole, with its processes and actions, is generally unpredicted and unpredictable.
“Practical men who believe themselves to be quite exempt from any intellectual influence are usually the slaves of some defunct economist”.
John Maynard Keynes