Let us start with p-zombies. What is a p-zombie? First of all, we should understand what a p-zombie is not. A p-zombie is not the type of being we might encounter in a horror movie. Instead, as Joe Carter explains in a recent blogpost at thechristiancoalition.org, p-zombies are theoretical beings that "behave like us and may share our functional organization and even, perhaps, our neurophysiological makeup without ever having conscious experiences." They are called p-zombies to distinguish them from the zombies we find in movies and TV shows.
P-zombies are sometimes invoked in discussions of the philosophical position known as physicalism. According to Carter, physicalism is "the theory that everything in the universe (or multiverse) is physical, and that there is nothing that isn't ultimately wholly physical." For physicalists, physical events--like touching a hot stove--can cause mental events--in this case, a feeling of pain in the mind. However, according to physicalists, mental events cannot cause physical events. Consequently, when someone pulls his or her hand back from a hot stove, it is not due to a mental event (such as a memory of the pain produced by touching a hot stove). Rather, it is due to a physical event, such as neural impulses causing the person's muscles to contract. However, as Carter points out (paraphrasing philosopher Todd C. Moody), "if mental events don't cause physical events, then consciousness is not necessary for overt behavior." Furthermore, if consciousness is not necessary for overt behavior, then it is possible for p-zombies to exist--beings without consciousness that can exhibit behavior that mimics the behavior of conscious beings (like humans).
Thus, one reason for believing Lemoine was wrong in claiming that the LaMDA chatbot had achieved sentience is that the chatbot could be a p-zombie. To help us understand why, Carter asks us to imagine a situation in which Lemoine has two chat boxes open on his computer, each displaying an identical message, one sent by LaMDA and the other by a human:
The behavior is the same, but one came from a sentient being (the human) and the other from a nonsentient being (the chatbot). Does the fact that one of the two is conscious matter? Not if physicalism is true, since...any given behavior could occur without conscious accompaniments.
Of course, one could argue that physicalism is false (which, actually, is my belief, and Carter's too). However, such a move would not solve the problem since it could be argued that even if physicalism is false, p-zombies could still (theoretically) exist. As philosopher David Chalmers notes:
Most people doubt that zombies could exist in the actual world. (In philosophical terms, they are naturally impossible.) But many people think they are at least logically possible--i.e. that the idea of a zombie is internally consistent, and that there is at least a "possible world" where zombies exist.
Therefore, the possibility that the LaMDA chatbot is a sort of p-zombie remains.
Let us now turn to the Chinese Room Argument and how it applies to the LaMDA chatbot. Philosopher John Searle develops this argument in his book Minds, Brains, and Science:
Imagine that you are locked in a room, and in this room are several baskets full of Chinese symbols. Imagine that you (like me) do not understand a word of Chinese, but that you are given a rule book in English for manipulating the Chinese symbols. The rules specify the manipulations of symbols purely formally, in terms of their syntax, not their semantics. So the rule might say: "Take a squiggle-squiggle out of basket number one and put it next to a squoggle-squoggle sign from basket number two." Now suppose that some other Chinese symbols are passed into the room, and that you are given further rules for passing back Chinese symbols out of the room. Suppose that unknown to you the symbols passed into the room are called "questions" by the people outside the room, and the symbols you pass out of the room are called "answers to the questions." Suppose, furthermore, that the programmers are so good at designing the programs and you are so good at manipulating the symbols, that very soon your answers are indistinguishable from those of a native Chinese speaker. There you are locked in your room shuffling your Chinese symbols and passing out Chinese symbols in response to incoming Chinese symbols...Now the point of the story is simply this: by virtue of implementing a formal computer program from the point of view of an outside observer, you behave exactly as if you understood Chinese, but all the same you don't understand a word of Chinese. (pp. 32-33, as quoted in J.P. Moreland, The Soul: How We Know It's Real and Why It Matters, pp. 88-89). [Note: I've always found this argument somewhat amusing, since, unlike Searle, I do understand Chinese!]
What Searle is arguing here is that the individual in the room seems to understand Chinese, but in fact does not. By simply acting in accordance with the rules given to him or her, the individual in the room can pass out the appropriate Chinese symbols without actually understanding them. It is not hard to see how the individual in the Chinese room could be an analogy for the LaMDA chatbot. The LaMDA chatbot has, in effect, been given the rules for responding appropriately to human input, without needing any understanding of that input. Actual understanding requires consciousness, but the LaMDA chatbot can mimic understanding without possessing it, so there is little reason to think it is conscious in any real sense.
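Searle's thought experiment can be illustrated with a few lines of code. The toy "rule book" below is entirely hypothetical (real systems like LaMDA are statistical models, not lookup tables), but it makes the point concrete: the program maps incoming Chinese symbols to outgoing ones by pure pattern matching, and at no step does anything in it "know" what the symbols mean.

```python
# A toy "Chinese room": answers are produced by pure symbol lookup.
# The rule book is hypothetical and purely illustrative -- nothing in
# this program understands Chinese; it only matches and emits symbols.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我叫小明。",    # "What is your name?" -> "My name is Xiaoming."
}

def room(question: str) -> str:
    # The "person in the room" consults the rule book syntactically;
    # unmatched input gets a canned fallback ("Sorry, I don't understand.").
    return RULE_BOOK.get(question, "对不起，我不明白。")

print(room("你好吗？"))  # -> 我很好，谢谢。
```

To an outside observer who only sees the input and output strings, the room's answers can look competent, yet the mechanism is nothing but rule-following, which is exactly Searle's point.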
To conclude, p-zombies and Chinese rooms may seem to be rather bizarre concepts, but they do provide strong grounds for rejecting Lemoine's claim that the LaMDA chatbot is sentient. Nevertheless, it seems inevitable, given the ongoing development of so-called artificial intelligence (AI) and the public's interest in it, that debates over whether AI has or could achieve some sort of consciousness will continue.
Image of chatbot from Wikimedia Commons