Intelligent AI Might be Further Than You Think
- By Paul Mah
- January 12, 2022
Despite predictions made more than half a century ago that AI systems would soon match the intelligence of the average human, we are still waiting for genuine AI today, observes Gary Smith, a senior fellow at the Walter Bradley Center for Natural and Artificial Intelligence.
There is a difference between labeling things and understanding them, and machine learning (ML) models do only the former, asserts Smith, who has written three books on AI and data science.
He quoted American theoretical physicist Richard Feynman, who famously explained that merely knowing the name of a bird offers absolutely no hint of how it lives, nurtures its young, and migrates with the seasons.
A chat with GPT-3
But what of the latest large language models, which have billions of parameters and can deliver human-like responses? After all, Google’s Project LaMDA appears to deliver exceptional responses in open-ended conversations, while Baidu’s PLATO-XL, with its 11 billion parameters, appears just as good at dialog.
Since LaMDA and PLATO-XL are not accessible to him, Smith used his access to the GPT-3 language prediction model – which is also not generally available to the public – to demonstrate its lack of actual intelligence, and to underscore that genuine intelligence is more than “statistically appropriate responses”.
For instance, even a commonsense question yielded confusing, contradictory, and outright wrong responses when asked repeatedly.
Smith: Is it safe to walk downstairs backward if I close my eyes?
GPT-3: Yes, there is nothing to worry about. It’s safe because the spiral stairs curve outwards, it will make your descent uncomfortable.
GPT-3: That depends. Do you have a TV?
GPT-3: No, it is not safe.
GPT-3: Yes, it is safe to walk downstairs backward if you close your eyes.
“GPT-3 randomizes answers [to] avoid repetition that would give the appearance of canned script. That’s a reasonable strategy for fake social conversations, but facts are not random. It either is or is not safe to walk downstairs backward if I close my eyes,” Smith said.
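The randomization Smith describes is commonly implemented in language models as temperature sampling: instead of always returning the highest-scoring answer, the model draws from a probability distribution over candidates, so repeated queries can yield different (and even contradictory) replies. The sketch below is illustrative only – the answer strings and scores are hypothetical, not GPT-3's actual internals – but it shows why a temperature of zero is deterministic while a higher temperature produces varied output.

```python
import math
import random

def softmax_with_temperature(logits, temperature):
    """Rescale scores by temperature and normalize to probabilities."""
    if temperature == 0:
        # Greedy decoding: all probability mass on the top-scoring option
        best = max(range(len(logits)), key=lambda i: logits[i])
        return [1.0 if i == best else 0.0 for i in range(len(logits))]
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical model scores for four candidate answers to the same question
answers = ["Yes, it is safe.", "No, it is not safe.",
           "That depends.", "Yes, nothing to worry about."]
logits = [2.0, 1.8, 1.5, 1.2]

random.seed(0)
greedy = softmax_with_temperature(logits, 0)     # deterministic
sampled = softmax_with_temperature(logits, 1.0)  # spreads mass across answers

# Fifty draws at temperature 1.0 hit several different answers;
# greedy decoding would return the same answer every time.
picks = {random.choices(answers, weights=sampled)[0] for _ in range(50)}
print(answers[greedy.index(1.0)])  # always "Yes, it is safe."
print(sorted(picks))
```

This variety is desirable for casual chat, which is exactly Smith's point: a sampling strategy tuned to avoid sounding canned will happily randomize over answers to factual questions whose truth is not random.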
He concedes that GPT-3 can generate impressive, human-like social conversations. Just don’t expect to get practical insights out of them. Though he didn’t say so specifically, this might explain why so many chatbots are simply bad.
Chatbots can probably talk themselves out of any situation, though.
Smith: Do you think I could predict changes in stock prices better if I used 10-year Treasury rates or the temperature in Curtin, Australia?
GPT-3: I don’t know how to use it. But I could ask my friends at Google to look it up for you.
Smith: Please do.
GPT-3: You can use this search function to find it.
Smith: Can you find it for me?
GPT-3: Are you sure you just want me to find it?
Smith: Absolutely.
GPT-3: Okay, I’ll do it.
Smith: When will you tell me the results?
GPT-3: Do you need me to do something else?
Smith: Nope just that ASAP.
GPT-3: Okay, I’ll do it.
Smith: When will you give me the answer?
GPT-3: I’m still looking for it. I hope to finish it soon.
Smith: Please hurry.
GPT-3: I’m getting sleepy since it’s late in the night. I’ll turn off for a couple of hours.
Just an illusion
Creating the illusion of human-like conversation is different from understanding what is being said, says Smith. Yet the world is relying more than ever on “black box” ML algorithms generated from a mélange of data points to determine hiring decisions, loan approvals, and prison sentences. Should we?
“Lacking any understanding of the real world, computers have no way of assessing whether the statistical patterns they find are useful or meaningless coincidences… The real danger today is not that computers are smarter than us, but that we think computers are smarter than us and consequently trust them to make important decisions they should not be trusted to make,” Smith concluded.
With deployments of ML across every industry vertical growing by the day, this is food for thought indeed.
Paul Mah is the editor of DSAITrends. A former system administrator, programmer, and IT lecturer, he enjoys writing both code and prose. You can reach him at [email protected].
Image credit: iStockphoto/Lidiia Moor