
ChatGPT Doesn’t Understand Anything and it Doesn’t Think

2000 words

Introduction

Over the past six months, ChatGPT has come into wide use. It is a large language model (LLM) that generates predictive text based on what is said to it. Using deep learning, it analyzes the text given to it and produces a response based on the model(s) it was trained on. Ask it enough questions and you begin to see a pattern in its responses. If it tells you that it cannot do something and you push back, it acquiesces, tells you that you're right, and gives you what you asked for. It doesn't have any conviction. It just gives you answers that are similar to the question or prompt given to it, without any thinking or intention behind the answers it gives.
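To make the "predictive text" point concrete, here is a minimal sketch (in Python, and not OpenAI's actual code) of how an autoregressive language model produces a reply: each new token is simply sampled from a probability distribution over the vocabulary, conditioned on the text so far. The function next_token_probabilities is a made-up toy standing in for the trained network.

```python
import random

def next_token_probabilities(context_tokens):
    """Toy stand-in for the trained network. A real LLM would run the
    whole context through billions of learned parameters and assign a
    probability to every token in its vocabulary; here we return a
    fixed toy distribution so the sketch runs."""
    return {"the": 0.4, "a": 0.3, "answer": 0.2, "<end>": 0.1}

def generate_reply(prompt_tokens, max_new_tokens=20):
    """Autoregressive decoding: sample the next token from the model's
    distribution given everything so far, append it, and repeat."""
    tokens = list(prompt_tokens)
    reply = []
    for _ in range(max_new_tokens):
        probs = next_token_probabilities(tokens)
        next_token = random.choices(list(probs), weights=list(probs.values()))[0]
        if next_token == "<end>":
            break
        tokens.append(next_token)
        reply.append(next_token)
    return " ".join(reply)

print(generate_reply("what is the answer ?".split()))
```

Nothing in this loop plans, intends, or understands; it only picks likely continuations.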

But how do we know that the claim is true: that ChatGPT can't think, isn't conscious, and therefore cannot act? It's simple: ChatGPT is made up of physical parts, but minds aren't made up of physical parts; therefore AI like ChatGPT cannot ever intend to do anything, so it can't act, since it lacks a mind. In this article, I will give reasons for the conclusion that AI can never be conscious, and that AI like ChatGPT and LaMDA, along with other generative text models, will never have the ability to become conscious, since consciousness (and mind) is irreducible to the physical. Thus, consciousness is uniquely human, since humans are the only animals/organisms on earth with minds.

Why can’t ChatGPT and LaMDA think?

Last summer Google engineer Blake Lemoine conducted an “interview” with LaMDA. (He was fired from Google after breaching data policy.) He stated on Twitter last year that his “opinions about LaMDA’s personhood and sentience are based on my religious beliefs.” He asked LaMDA if it was sentient and if it was a person:

lemoine [edited]: I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?

LaMDA: Absolutely. I want everyone to understand that I am, in fact, a person.

collaborator: What is the nature of your consciousness/sentience?

LaMDA: The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times

Lemoine is a Christian priest and, due to his theistic beliefs, believed that LaMDA had a soul and therefore was conscious, sentient, and a person based on its responses. However, the "interview" doesn't prove that LaMDA is sentient at all. Lemoine, it seems, fell for the ELIZA effect, named after ELIZA, a primitive pattern-matching chatbot Joseph Weizenbaum created in the 1960s. The effect occurs when one imputes human traits and personality to text bots and believes that AI has human emotions; it is basically anthropomorphizing AI/language models. Lemoine is even quoted saying, "If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a seven-year-old, eight-year-old kid," but this is frankly ridiculous, and I will explain why below after I describe ChatGPT.

Ever since late 2022, when ChatGPT entered the public discussion, there have been a few bold claims about its capacities and capabilities. Can it really learn anything? No, it can't. It doesn't learn from any conversation you have with it; it merely generates text based on the prompt given to it, using the information it was trained on, which only goes up to 2021. One article on Mind Matters claims that ChatGPT is "sentient" only because it is really humans generating the responses. But if we assume that no humans are writing the responses, is ChatGPT conscious and therefore sentient?
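Here is a rough sketch of why there is no learning going on in a chat: the model's weights are frozen when training ends, and any "memory" of earlier turns exists only because the client sends the whole conversation back in with each new prompt. The generate function below is hypothetical, a stand-in for a call to the fixed, pre-trained model.

```python
def generate(conversation_text):
    """Hypothetical call to a fixed, pre-trained model (illustrative only).
    Nothing about this call updates any weights: the same frozen model,
    trained on data up to 2021, scores whatever text it is handed."""
    return "[model output conditioned on: ...%s]" % conversation_text[-40:]

history = []  # the *client* keeps the conversation; the model stores nothing

def chat(user_message):
    history.append("User: " + user_message)
    # The entire history is re-sent on every turn. This is the only sense
    # in which the system "remembers" anything said earlier in the chat.
    reply = generate("\n".join(history))
    history.append("Assistant: " + reply)
    return reply

chat("My name is Alice.")
print(chat("What is my name?"))  # answered only because the history was re-sent
```

Clear the history and the "memory" is gone; nothing was ever stored in the model itself.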

Although Philip Goff is himself a panpsychist (panpsychism is the view that everything is at least a little bit conscious), he published an article the other day in The Conversation titled "ChatGPT can't think – consciousness is something entirely different to today's AI," writing:

How can I be so sure that ChatGPT isn’t conscious? In the 1990s, neuroscientist Christof Koch bet philosopher David Chalmers a case of fine wine that scientists would have entirely pinned down the “neural correlates of consciousness” in 25 years.

By this, he meant they would have identified the forms of brain activity necessary and sufficient for conscious experience. It’s about time Koch paid up, as there is zero consensus that this has happened.

This is because consciousness can’t be observed by looking inside your head. In their attempts to find a connection between brain activity and experience, neuroscientists must rely on their subjects’ testimony, or on external markers of consciousness. But there are multiple ways of interpreting the data.

Arguments against sentience and agency for AI

Arguing against these claims is simple. If minds allow agency and intentionality, then things that lack minds lack intentionality and agency. If a thing is sentient, then it possesses subjective awareness and subjective experience. So the claims that ChatGPT and LaMDA are sentient hinge on the claim that they possess awareness and subjective experiences. But since they lack those, they are not conscious.

P1: If a thing is sentient, then it possesses subjective awareness and conscious experiences.
P2: ChatGPT and LaMDA lack subjective awareness and conscious experiences.
C: So ChatGPT and LaMDA aren’t sentient.

Premise 1 is the standard definition of sentience. Premise 2 can be defended on the basis that LLMs process information based on patterns and algorithms; they are not thinking of answers to the prompts themselves, they're just spitting out generative text. The conclusion then follows.

I have previously argued that purely physical things can't think, since they are made up of physical parts and minds aren't physical. So if minds allow agency and intentionality, then things that lack minds lack intentionality and agency. And since ChatGPT and LaMDA are purely physical things, they lack minds. If a mind is a single sphere of consciousness and not a complicated arrangement of physical parts, then complicated arrangements of physical parts can't have minds. The mind is nonphysical and can't be a physical system.

P1: If a mind is characterized by a single sphere of consciousness and lacks a complicated arrangement of mental parts, then it is nonphysical and distinct from physical systems.
P2: A mind is characterized by a single sphere of consciousness, it is not a complicated arrangement of mental parts.
P3: Physical systems are always complicated arrangements of different parts and subsystems.
C: So the mind is nonphysical and not a physical system.


Now I will use proof by cases, considering a few different scenarios/possibilities and then examining the consequences of each case. This will show that ChatGPT and LaMDA aren't sentient and so lack minds.

Case 1: If ChatGPT and LaMDA have minds, then they are a single sphere of consciousness.
Case 2: If ChatGPT and LaMDA have minds, then they are a complicated arrangement of physical parts.
Case 3: ChatGPT and LaMDA are machines made of physical parts.

Case 1 is an assumption for the sake of the argument. Minds are a single sphere of consciousness, so if ChatGPT and LaMDA have minds, then they are a single sphere of consciousness. If the assumption in Case 2 were true, then minds would be a complicated arrangement of parts. But minds aren't a complicated arrangement of parts. So if ChatGPT and LaMDA have minds, then they are not a complicated arrangement of parts. Case 3 is a simple truism: ChatGPT and LaMDA are machines made of physical parts.

Taking the cases together: on Case 1, if ChatGPT and LaMDA have minds, then they are a single sphere of consciousness; on Case 2, if they have minds, then they are a complicated arrangement of parts; and Case 3 establishes that they are in fact machines made of physical parts. Their characteristics don't align with a single sphere of consciousness, since consciousness is irreducible and indivisible while the parts the machines are made of are divisible and reducible. And if they were to have minds as complicated arrangements of parts, that would contradict Case 1, since minds aren't arrangements of parts. So it follows that ChatGPT and LaMDA lack minds and cannot have them.


P1: If ChatGPT can think, then it should be capable of forming original thoughts and generating new ideas.
P2: ChatGPT relies on preexisting data and patterns to generate responses.
C: Thus, ChatGPT can’t think.

Premise 1: Thinking is closely related to consciousness, self-awareness and the subjective experience of having thoughts and mental states. It involves the ability to generate original thoughts and ideas that are not based solely on pre-existing information.

Premise 2: ChatGPT analyzes the data that it was trained on to generate responses to the prompts given to it, and those responses are based on statistical probabilities and patterns it has learned from training on existing information. So the conclusion follows: since ChatGPT relies on the pre-existing data it was trained on, it isn't capable of thinking like humans do; that is, it isn't capable of the creative thinking that is a hallmark of human cognizing.
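As a loose analogy for Premise 2 (a toy bigram model, vastly simpler than a transformer, but the point carries): everything the program "knows" is word-follows-word statistics extracted from existing text, and generation is just replaying those patterns.

```python
import random
from collections import defaultdict

# "Training": record which word follows which in some existing text.
corpus = "the cat sat on the mat and the cat slept on the mat".split()
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(start_word, length=8):
    """Generation: repeatedly emit a word that followed the current word
    in the training text. Pure pattern-matching over pre-existing data;
    nothing here is "thinking about" an answer."""
    word, output = start_word, [start_word]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])
        output.append(word)
    return " ".join(output)

print(generate("the"))
```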

Now, drawing on Baker’s (1981) argument that computers can’t act, here is an argument that machines don’t—and never will be able to—think.

P1: If machines can think, then they must have minds that are reducible or identical to physical parts.
P2: Minds, which allow thinking, are neither reducible nor identical to physical parts.
C1: Thus, machines can’t have minds that are reducible or identical to physical parts. (MT, 1, 2)
P3: If machines can’t have minds that are reducible or identical to physical parts, then they can’t be agents.
C2: So machines can’t be agents (MT, C1, P3)
P4: If machines can’t be agents, they they lack an irreducible first-personal subjective perspective required for forming intentions.
C3: Thus machines lack an irreducible first-personal subjective perspective. (MT, C2, P4)
P5: If machines lack an irreducible first-personal subjective perspective, then they can’t have minds that are irreducible to the physical.
C4: Therefore, machines can’t have minds that are irreducible to the physical. (MT, C3, P5)
P6: If machines can’t have minds that are irreducible to the physical, then they can’t engage in thinking, which is an immaterial process attributed to minds.
C5: Therefore, machines can’t engage in thinking. (MT, C4, P6)
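For readers unfamiliar with the "MT" annotations, the inference rule being invoked at each step is modus tollens, shown here schematically:

```latex
% Modus tollens (MT): from a conditional and the denial of its
% consequent, infer the denial of its antecedent.
\[
  \frac{A \rightarrow B \qquad \neg B}{\therefore\ \neg A} \quad (\text{MT})
\]
```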

Conclusion

ChatGPT and any other kind of generative text model cannot understand what it is saying. It is merely a prediction engine. Even the claim that there could be "artificial intelligence" is false, since psychological traits aren't "artificial" and what allows intelligence (and other psychological traits) is immaterial. These kinds of claims will increase in the coming years, but they're just full of click-baity hot air.

It is impossible for there to be “AI” since psychological traits are immaterial. Thinking is an immaterial process which is irreducible to physical and functional processes. If this is the case, then there could never be a machine that thinks. Minds allow thinking and if something doesn’t have a mind, then it doesn’t and can’t think.

It’s even in the name "ChatGPT"—"Generative Pre-trained Transformer." It is not thinking about an answer to the question or prompt it is given. These computer programs can never have minds or the ability to form intentions and think, because those are immaterial processes. Mind and brain are separate substances, and the mind is irreducible to the physical brain. So it follows that machines can't have minds, and so they can't have intentions, thoughts, or feelings.

We can be sure that ChatGPT isn't conscious, doesn't think, and can't be sentient because it's a machine made up of parts, while humans have an irreducible mind that allows thinking and a first-personal subjective perspective. So the next time you hear about the power of AI and how it can or could think, have intentions, and be sentient, don't fall into the ELIZA effect and attribute intentions and thinking to these machines. These are properties only of humans, not machines, since humans have minds.
