I'd like to point out that humans, too, are not trustworthy in high-risk situations. That's why we have procedures, deterministic automation, and so on.
I like to think of capable LLMs as gifted interns. I can expect decent results if I explain things well enough, but I need processes around them to make sure they are doing what they are told. In my industry that's enough to produce a noticeable productivity gain, and likely some reduction in employment, as it's a low-margin, cut-throat business relying on low-grade knowledge workers. I see the hype and honestly can't stand it, but it's measurably impacting my industry and the world around me.
> I'd like to point out that humans, too, are not trustworthy in high risk situations. For this we have procedures, deterministic automation and so on.
Except humans can transparently explain themselves, and someone can be held to account when something goes wrong. Humans are able to hold differing opinions and take different approaches to solving unseen problems.
An AI, however, cannot explain itself transparently; it just resorts to regurgitating whatever output it has been trained on. Black-box AI models have no clear mechanism for transparent reasoning, which means they cannot be held to account.
When it encounters an unseen problem, it falls back on fixed guardrails and just repeats a variation or re-wording of what it has already said. Especially LLMs.
> Except humans can transparently explain themselves and someone can be held to account when something goes wrong
Except humans are excellent at finding excuses to avoid explaining themselves and being held to account, or at justifying some misguided belief based on whatever they were "trained on" in their past.
People often seem to apply standards of rationality and reliability to AI that many humans cannot themselves achieve, using terms like "hallucination" when we've seen humans do exactly the same thing by confidently talking about subjects they know nothing about. Everyone laughed at Bing insisting on a wrong date rather than admit it was wrong about the Avatar 2 release, but that's very typical human behaviour in certain situations.
I'm not trying to make LLMs seem better than they are, but some of their weaknesses are not surprising given the training data.