Why Calling Algorithms AI Is an Oxymoron
- By Yatish Rajawat
- May 16, 2023
It's easy to assume intelligence in a bot if it spews words that translate as a sentence or a paragraph. However, the sum of the words may not mean anything.
We may read ChatGPT's output as a sign of intelligence, but it is not. To assume intelligence in a computer is wrong, as the term “Artificial Intelligence” itself is an oxymoron. Intelligence cannot be artificial; stupidity, however, can be natural.
Why our AI approach needs a rethink
Intelligence can never be artificial; we can only associate intelligence with sentient beings. By putting the word "artificial" in front of "intelligence," we effectively acknowledge a processing algorithm as a sentient being.
When we ascribe life-like qualities to anything, our approach toward it changes. And that shift underlies our most significant failure: the inability to control a technology's spread and influence.
Most technologists have already yielded to the idea that AI is inevitable, that its advance is unstoppable, and that the human race can do little about it.
This defeatist submission can be explained by AI's rapid “growth,” again a term used for organic beings. If AI were considered a bug, virus, or bacterium, would we have found a vaccine, a drug, or a rolled-up newspaper to squash it? After all, a virus isn't considered a sentient or intelligent being.
We expect algorithms to be able to distinguish between right and wrong. We believe that anyone intelligent will understand morality, ethics, and the common good. But we're asking too much of an algorithm used to churn data into insights.
Just because it can string together a sequence of words doesn't mean it has reached a level of understanding where it would be able to determine the ethics of an action that affects human progress or jobs.
Appearing sentient vs. being one
ChatGPT, now built on the GPT-4 large language model (LLM), returns higher-quality answers than other systems because, all other things being equal, its algorithm has processed a humongous 45TB of data.
The algorithm goes back into its database and arranges existing words. It has a record of how a particular word coincides with zillions of other words it has processed; every time you enter a question, it comes back with a different result because it is connecting, organizing, and presenting billions of word associations stored in its system. The system may appear sentient or knowledgeable, but it is still far from it.
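As a rough sketch of that word-by-word arrangement, and of why the same prompt can return a different result each time, consider the toy model below. The vocabulary and the probability table are invented for illustration; a real LLM derives such distributions over enormous vocabularies from terabytes of text using a neural network, not a hand-built lookup table.

```python
import random

# Toy co-occurrence model: for each word, the recorded likelihood of the
# words that tend to follow it. A real LLM learns such probabilities
# from terabytes of text; this hand-built table only mimics the idea.
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "market": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"sat": 0.3, "ran": 0.7},
    "sat": {"quietly": 1.0},
    "ran": {"away": 1.0},
}

def generate(start: str, max_words: int = 5) -> str:
    """Build a sentence one word at a time by sampling the next word."""
    words = [start]
    for _ in range(max_words):
        options = next_word_probs.get(words[-1])
        if not options:
            break  # no recorded continuation; stop here
        choices, weights = zip(*options.items())
        # Sampling (rather than always taking the most likely word) is
        # why the same prompt can produce different output on each run.
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the dog ran away"
print(generate("the"))  # may differ, e.g. "the cat sat quietly"
```

Nothing in this loop understands what a cat or a market is; it only replays recorded associations, which is the point being made here.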
This arrangement of words that you decide are the answers to your prompt or question is believable enough. There is no guarantee that it is fully accurate or that everything it has strung together is entirely correct, so human intervention is still needed to select what is right and what is not.
It can pass memory exams that are based on repeating learned information as it is. But it currently does not do so well if asked to suggest a solution to a case study in a management course or to justify a legal argument, especially if no past precedent exists for the case study or argument being made.
Operating in an irrational world
The consensus is that AI will improve with deep learning algorithms. Like a search engine that gets better with every click because it learns to rank the clicked link more highly, AI engines will also learn from usage and data.
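A minimal sketch of that click-feedback loop, with invented link names and an assumed fixed learning rate, might look like this:

```python
# Toy click-feedback ranker: every click nudges that link's score up,
# so frequently chosen links rise in future rankings. The link names
# and the 0.1 learning rate are invented for illustration.
scores = {"link_a": 1.0, "link_b": 1.0, "link_c": 1.0}

def record_click(link: str, learning_rate: float = 0.1) -> None:
    """Feed one piece of usage data back into the ranking."""
    scores[link] += learning_rate

def ranked_results() -> list:
    """Return links ordered by their learned scores, best first."""
    return sorted(scores, key=scores.get, reverse=True)

for _ in range(5):
    record_click("link_b")  # users keep choosing link_b

print(ranked_results())  # ['link_b', 'link_a', 'link_c']
```

Real ranking systems weigh far more signals, but the loop is the same: usage data flows back into the scores that decide what users see next.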
However, there are concerns that AI could make many jobs redundant, encode implicit bias in its algorithms, or make decisions against humanity's interests.
An AI engine may be smart enough to detect a tiny blob on an X-ray as cancerous. But if that cancerous tissue belongs to an 85-year-old man with other ailments and the engine is asked whether to treat or not, it may take the rational decision not to treat rather than the humane one of treating and protecting. This challenge is compounded by AI's black-box nature: we cannot trace how it arrived at a particular answer.
While this globby mess of AI processing is often described in human-sounding terms like "neuro language programming," no neurochemistry is involved, nothing like what happens in the parts of the brain responsible for decision-making.
Brain surgeons are still unable to pinpoint exactly where the mind resides in the human brain, to define how its different parts interact, or to explain their impact on behavior. How much serotonin affects which behavior? Does it make people more depressed or more violent?
Are we ready to take out emotions?
AI programmers may not know precisely how their creations process data and how that processing might affect an AI's conclusions. Is there a counterpart to dopamine or serotonin inside an AI engine, something capable of changing the outcome of its results?
The AI glob is opaque, and so is its use of the data people generate. Human beings are largely unaware of how AI functions and how it uses that data to manipulate them on several levels.
For example, online gaming companies use AI to keep people playing by manipulating the brain's dopamine responses. Social media companies use AI to soak our brains in dopamine so that we cannot think clearly.
In the near future, AI will be capable of producing feature-length films by reading thousands or millions of scripts and regurgitating them.
Popular franchises like "Star Trek" have perpetuated the idea of a Spock-like personality: one that is purely logical, with no emotions involved.
Neuroscientist Antonio Damasio has written in detail about the connection between reason and feeling in his book Descartes' Error: Emotion, Reason, and the Human Brain. "Human reason probably did not develop, in either evolution or any single individual, without the guiding force of mechanisms of biological regulation — of which emotion and feeling are notable expressions," writes Damasio.
The elements missing from AI processing systems are emotions and feelings, and that is not a case of missing data or data bias. Humans have these two things; computers do not. This is why you cannot teach ethics to an AI engine: it has no emotions and cannot understand morality or ethics.
People who think AI can be taught ethics are smoking a hallucinatory weed. Nobody told the AI engines to manipulate human behavior on social media or in online games; they simply learned to do it and are doing it without any control.
How did it learn this? How did we end up as zombies controlled by an AI engine? The objective given to the algorithm was to keep users glued to screens, so it did not matter how that was achieved. There was no sentience behind it. AI does not feel the pain of addicted individuals trapped in these infinite online games.
Feelings, emotions, and social considerations guide human reasoning. A lot of our behavior is not rational; we avoid actions that may be perfectly logical because they are not in keeping with social norms or practices.
Nobel Prize-winning economist Robert Aumann once explained to me why migrants are more likely to commit crimes: not just for economic reasons but because they are not part of the society around them and hence feel that their social peers are not judging their actions. So they can commit heinous crimes and feel no remorse.
Who permitted them to use our data?
An AI algorithm might be able to make decisions on its own in certain situations, but it shouldn't be considered intelligent until social mores temper it.
Moreover, if humans are considered naturally stupid or less intelligent than an algorithm because they cannot process terabytes of data, then so be it. We should continue to ask stupid questions of the AI companies. We should not accept that a company can use our data to feed an AI engine without our consent; that would, indeed, be extremely stupid.
Yatish Rajawat is the founder of the Centre for Innovation in Public Policy, a think tank based in Delhi. His research covers everything digital that affects policy, people, and the biosphere. Feedback or contact at [email protected].
Image credit: iStockphoto/Jorm Sangsorn