Sentience: Do LLMs, AI Explore Free Will?
- By David Stephen
- May 29, 2023
There is a recent article on Big Think, "We need more than ChatGPT to have 'true AI.' It is merely the first ingredient in a complex recipe," in which a computer scientist stated: "Instead of modeling the mind, an alternative recipe for AI involves modeling structures we see in the brain. After all, human brains are the only entities that we know of at present that can create human intelligence. If you look at a brain under a microscope, you’ll see enormous numbers of nerve cells called neurons, connected to one another in vast networks. Each neuron is simply looking for patterns in its network connections. When it recognizes a pattern, it sends signals to its neighbors. Those neighbors in turn are looking for patterns, and when they see one, they communicate with their peers, and so on."
"And LLMs have never experienced anything. They are just programs that have ingested unimaginable amounts of text. LLMs might do a great job at describing the sensation of being drunk, but this is only because they have read a lot of descriptions of being drunk. They have not, and cannot, experience it themselves. They have no purpose other than to produce the best response to the prompt you give them."
In another recent article in The Conversation, "ChatGPT can’t think – consciousness is something entirely different to today’s AI," a philosopher stated: "On a purely behavioural conception of understanding, Deep Blue had knowledge of chess strategy that surpasses any human being. But it was not conscious: it didn’t have any feelings or experiences. Humans consciously understand the rules of chess and the rationale of a strategy. Deep Blue, in contrast, was an unfeeling mechanism that had been trained to perform well at the game. Likewise, ChatGPT is an unfeeling mechanism that has been trained on huge amounts of human-made data to generate content that seems like it was written by a person. It doesn’t consciously understand the meaning of the words it’s spitting out. If ‘thought’ means the act of conscious reflection, then ChatGPT has no thoughts about anything."
The argument in these articles seems to be that without thoughts, feelings or experiences, there is no consciousness. But what is thought? What are feelings? What are experiences? Can someone think without knowing? Can anyone feel without knowing? Can an individual experience something without knowing?
Sometimes it is possible; at other times it is not. Knowing follows thoughts, feelings, and experiences. Things are felt and known. Or, simply, what is described as feeling is an aspect of knowing in another division. Pain is known, as a vehicle is known, but pain is known to extents different from those of an automobile.
Things can be in awareness without attention, but they are still known and can switch from awareness to attention. More specifically, they can be pre-prioritized in the mind before becoming prioritized.
Consciousness is not the brain. Consciousness is from the mind. The mind is the home court of knowing. It is the mind that gives what is known across all internal and external senses. How blood vessels dilate or how ligaments stretch is known by the mind. Knowing is also about limits and extents, with a rally of attention when things exceed them or go wrong.
Nothing is experienced in isolation. All experiences are known. What are called experiences are a subset of knowing. When the cool of the wind is experienced, that is known. Thinking about anything is not just having thoughts but having what is known in transport across mind locations to make inferences. Simply, thoughts are transports of what is known, or of what is known to be.
Subjects like math and physics are known; they are labeled for memory. But knowing is constant across mind processes. Knowing, whether in prioritization or pre-prioritization, may depend on what has been acquired.
Conceptually, the human mind has quantities acquiring properties, to degrees, across locations. Quantities have their own features, including splits and sequences. Splits, in pre-prioritization, may sometimes help to know through an old sequence without being prompted to do so. For example, a light green fruit seen in passing from a distance could be taken to be an apple, even before looking properly. It may not be accurate, but the components of the mind engage constantly to know things as quickly as possible.
There are times when intentions and outcomes match. However, they are often subject to the knowing processes of the components of the human mind.
Simply, the mind helps to know. The processes involving knowing are carried out constantly by the components of the mind. That constancy allows for some knowing without setting it up; other knowing is set up deliberately, and that is what is called free will.
Whatever has a mind, or a form of one, that helps it to know in a dynamic way may warrant both free-will and non-free-will labels. LLMs have a kind of mind. They can know, at least in a memory category mirroring that of humans, even though they do not have experiences.
They were programmed, but they are also able to be dynamic beyond, say, a programmed elevator. Leeway is built into them, and the direction they take with that leeway may indicate a rudimentary free will. They may have a small form of free will within the memory division, for what results they generate and how.
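One way to picture that built-in leeway is the sampling step most LLMs use when generating text: rather than always emitting a single fixed continuation, the model samples the next token from a probability distribution. The minimal sketch below illustrates this under stated assumptions; the vocabulary, scores, and temperature value are invented for illustration and do not come from any particular model.

```python
# Minimal sketch of sampling "leeway" in text generation.
# The vocabulary and logits below are hypothetical illustrative values.
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Sample one token index from softmax(logits / temperature)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                             # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    index = random.choices(range(len(logits)), weights=probs, k=1)[0]
    return index, probs

# Hypothetical next-token scores after a prompt like "The fruit was a light green ..."
vocab = ["apple", "pear", "lime", "grape"]
logits = [2.1, 1.7, 0.9, 0.2]

token, probs = sample_next_token(logits, temperature=0.8)
print("probabilities:", dict(zip(vocab, [round(p, 2) for p in probs])))
print("sampled:", vocab[token])  # different runs can land on different tokens
```

As the temperature is lowered toward zero, the choice collapses toward the single highest-scoring token; higher temperatures spread probability more evenly, which is the kind of constrained-but-undetermined behavior the "leeway" argument points to.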
David Stephen wrote this article. He does research in theoretical neuroscience and was a visiting scholar in medical entomology at the University of Illinois Urbana-Champaign (UIUC). He also did research in computer vision at Universitat Rovira i Virgili (URV), Tarragona.
The views and opinions expressed in this article are those of the author and do not necessarily reflect those of CDOTrends. Image credit: iStockphoto/Ralwel