Sentient LLMs: Must Generative AI Attain Human Consciousness?
- By David Stephen
- November 06, 2023
There is a new review of books in Nature, "Consciousness: what it is, where it comes from — and whether machines can have it", in which the author writes, "I consider the indicators as signifying the potential for consciousness, rather than its existence. Would we really want to build a machine with consciousness or agency? Here, I am even less sure. We are yet to understand which creatures in the world are conscious, and have not developed ethical frameworks that account for this possibility."
Why would AI want to feel pain, when humans have medications to numb it? Why would AI want to feel hunger, take bathroom breaks, or be dizzy from sleeplessness, when humans in these states are less functional, in both energy and intelligence? Why would AI chatbots want a low mood, from 'mental ties' to some culture, ideology, wellbeing, or whatever else, when deprived or offended?
In the review, the author stated, "To understand where artificial intelligence might be heading, we must first understand what consciousness, the self and free will mean in ourselves."
Consciousness is described as subjective experience. What are all the ways that humans have subjective experience? Seeing color, reading text, hearing sound, touch, language, and so forth.
The sense of self is described in terms of personalization, attachment, or association, as opposed to depersonalization, detachment, or dissociation. In what ways do humans have a sense of self? When awake, when cold, while eating, and so on.
Free will is described in terms of intentionality and control. For humans, it applies to speech, writing, typing, singing, movement, sitting, and so forth.
A comprehensive list of each would help map the range of consciousness, self, and free will in humans, including those aspects that generative AI seems to have closely mimicked.
Saying AI has an ability does not mean anthropomorphizing it. AI was made to imitate human outputs, so that, where the outputs run parallel, they can be ascribed to AI, much like the outputs of virtual reality or a mirror's image. These outputs have their own niche. Nor do they imply anthropocentrism.
Generative AI cannot taste sugar but can define sugar and state its preparation, uses, disadvantages, and so forth, albeit without a sense of self. When humans define sugar, there is a sense of self; some may define it from experience of its varieties, preparations, and adverse effects.
In experiencing sugar, there is one component: knowing what it is, including its state as a form of matter, its constituents, its class of food, and so forth. There is another component: knowing that it is the self having the experience of taste, aroma, sight, and so on.
AI does not have human consciousness but possesses a (knowing) component. AI does not have free will but may answer differently each time it is prompted. This seeming agency [or choice] may be loosely associated with [rudimentary] free will. AI has no sense of self but can answer in the 'first object', similar to the 'first person' in humans, though it was programmed to do so.
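The variability behind that seeming agency is, mechanically, sampling. Below is a minimal sketch, assuming a toy vocabulary and made-up logits rather than any particular model's API: with a nonzero temperature, the next token is drawn at random from a softmax distribution, so repeated prompts can yield different answers; at temperature zero, the same mechanism is fully deterministic.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Draw one token from softmax(logits / temperature).

    A toy illustration of why a generative model can answer
    differently each time: nonzero temperature makes the choice
    stochastic; temperature zero collapses to the top token.
    """
    if temperature <= 0:  # greedy decoding: always the same answer
        return max(logits, key=logits.get)
    scaled = {tok: val / temperature for tok, val in logits.items()}
    top = max(scaled.values())  # subtract max for numerical stability
    weights = {tok: math.exp(s - top) for tok, s in scaled.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for tok, w in weights.items():
        cumulative += w
        if r <= cumulative:
            return tok
    return tok  # floating-point edge case fallback

# Hypothetical logits for the word after "Sugar is ..."
logits = {"sweet": 2.0, "white": 1.2, "soluble": 0.8, "cheap": 0.1}
print([sample_next_token(logits, temperature=1.0) for _ in range(5)])
print([sample_next_token(logits, temperature=0.0) for _ in range(5)])
```

The 'choice' is a draw from a distribution, not deliberation, which is why it reads as agency without being free will.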
In productivity, what is required from the range of human consciousness is often limited to what serves a purpose, chiefly human intelligence: cognition, analysis, memory, creativity, and so forth, using abstraction, language, and other tools. Humans have a sprawling stretch of thoughts (or their form) and language with which to learn and state things. Moreover, when humans are the benchmark, it is usually healthy, well-versed humans who are considered, not anyone who is less than fully conscious or informed.
The sentience quotient of LLMs may hover around the intelligence [knowing without experience] they copied from humans. This may not require any social or ethical consideration beyond the care given to a fragile object.
The debate over whether AI is conscious may not matter much, since AI has already adopted what seems like the most essential element, intelligence, even though it lacks a vast range of the others.
In humans, all the divisions of consciousness, self, and free will can be summed to 1, the highest total among species. Organisms similar to humans fall below that total. Generative AI may hold a small fraction, drawn from a few of the human sub-divisions.
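One way to write that postulate down is as a normalization. The weights and subsets below are illustrative symbols, not measured quantities from any study:

```latex
% Hypothetical normalization: human consciousness as the unit total.
% Each w_i is the weight of one sub-division (a sense, an emotion,
% a memory function, and so on); values are illustrative only.
\[
C_{\mathrm{human}} = \sum_{i=1}^{n} w_i = 1
\]
\[
C_{\mathrm{organism}} = \sum_{i \in S} w_i < 1, \qquad S \subset \{1,\dots,n\}
\]
\[
C_{\mathrm{AI}} \approx \sum_{i \in S'} w_i \ll 1, \qquad S' \subset \{1,\dots,n\}
\]
```

On this sketch, an organism's total is the sum over whichever sub-divisions it holds, and generative AI's subset is smaller still.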
It is postulated that consciousness is an outcome of the human mind. The human mind is the collection of all the electrical and chemical impulses of nerve cells, with their features and interactions. The mind is what makes humans alike and part of what makes humans different. Capabilities of the mind include consciousness, emotions, feelings, memories, thoughts, reasoning, analysis, perceptions, cognition, sensations, modulations and so forth.
All are nature, with many modified by nurture. In the clusters of neurons in the brain, it is theorized that electrical and chemical signals interact in sets, making all mind processes closely mechanized yet specialized by their features. Intelligence is not consciousness, but both arise alike, at the direction of their respective electrical and chemical architectures. AI is limited but convergent. Humans, with advances in recent centuries, keep adventuring towards stronger intelligence, attenuating exposure to the permanent weaknesses of biology.
The human mind is complex, and its tentacles, including intelligence, compete for prioritization. AI may not need that competition.
David Stephen does research in conceptual brain science. He was a visiting scholar in medical entomology at the University of Illinois Urbana-Champaign (UIUC), IL, United States. He did computer vision research at Rovira i Virgili University (URV), Tarragona, Spain.
The views and opinions expressed in this article are those of the author and do not necessarily reflect those of CDOTrends. Image credit: iStockphoto/Ole_CNX