Despite predictions made more than half a century ago that AI systems would achieve intelligence comparable to the average human, we are still waiting for genuine AI today, observes Gary Smith, a senior fellow at the Walter Bradley Center for Natural and Artificial Intelligence.
There is a difference between machine learning (ML) models labeling things and actually understanding them, asserts Smith, who has written three books on AI and data science.
He quoted American theoretical physicist Richard Feynman, who famously explained that merely knowing the name of a bird offers absolutely no hint of how it lives, nurtures its young, and migrates with the seasons.
A chat with GPT-3
But what of the latest large language models, which have billions of parameters and can deliver human-like responses? After all, Google’s LaMDA appears to deliver exceptional responses in open-ended conversations, while Baidu’s PLATO-XL, with its 11 billion parameters, appears just as good at dialog.
Since LaMDA and PLATO-XL are not accessible to him, Smith used his access to the GPT-3 language model – which is also not generally available to the public – to demonstrate its lack of actual intelligence, and to underscore that genuine intelligence is more than “statistically appropriate responses”.
For instance, even a commonsense question, asked repeatedly, yielded confusing, contradictory, and outright wrong responses.
“GPT-3 randomizes answers [to] avoid repetition that would give the appearance of canned script. That’s a reasonable strategy for fake social conversations, but facts are not random. It either is or is not safe to walk downstairs backward if I close my eyes,” Smith said.
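The randomization Smith describes is typically the result of temperature sampling: instead of always emitting the single most likely next token, the model draws from a softened probability distribution so that repeated queries produce varied wording. The article does not say how GPT-3 was configured, so the following is only an illustrative sketch with a made-up, three-answer toy vocabulary, not GPT-3’s actual decoding code.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Sample one index from logits softened by a temperature.

    A low temperature approaches greedy argmax (same answer every time);
    a high temperature flattens the distribution, so repeated calls give
    varied answers, including mutually contradictory ones.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                                # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Hypothetical candidate "answers" to the same factual question;
# the logits slightly favour the first one.
answers = ["yes, it is safe", "no, it is not safe", "it depends"]
logits = [2.0, 1.5, 0.5]
```

At a temperature near zero, `sample_with_temperature` returns the top-ranked answer every time; at a high temperature, all three answers appear across repeated calls. That variety is desirable for casual chit-chat but, as Smith notes, a factual question has one correct answer, and sampling over the alternatives produces contradictions.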
He concedes that GPT-3 can generate impressively human-like social conversation. Just don’t expect practical insights from it. Though he didn’t say so specifically, this might explain why most chatbots are so bad.
Chatbots can probably talk themselves out of any situation, though.
Just an illusion
Creating the illusion of human-like conversation is different from understanding what is being said, says Smith. Yet the world is relying more than ever on “black box” ML algorithms generated from a mélange of data points to determine hiring decisions, loan approvals, and prison sentences. Should we?
“Lacking any understanding of the real world, computers have no way of assessing whether the statistical patterns they find are useful or meaningless coincidences… The real danger today is not that computers are smarter than us, but that we think computers are smarter than us and consequently trust them to make important decisions they should not be trusted to make,” Smith concluded.
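Smith’s warning about “meaningless coincidences” is easy to demonstrate with a toy experiment of my own (not one from the article): screen enough random features against a random target, and one of them will correlate strongly purely by chance. All names and numbers below are illustrative.

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

rng = random.Random(42)

# A "target" and 500 candidate "predictors" that are all pure noise:
# no feature has any real relationship to the target.
target = [rng.random() for _ in range(20)]
features = [[rng.random() for _ in range(20)] for _ in range(500)]

# The pattern a naive data-mining exercise would report as a finding:
# the strongest correlation among the 500 noise features.
best = max(abs(pearson(f, target)) for f in features)
```

With only 20 observations and 500 candidate features, the best absolute correlation routinely lands well above 0.5, even though every relationship in the data is a coincidence. A black-box model has no way of knowing that; a person who understands where the data came from does.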
With deployments of ML across every industry vertical growing by the day, this is food for thought indeed.
Paul Mah is the editor of DSAITrends. A former system administrator, programmer, and IT lecturer, he enjoys writing both code and prose. You can reach him at [email protected].
Image credit: iStockphoto/Lidiia Moor