The Chinese Room and the inextricable meaning of intelligence

Kim lays out the foundations for a system running the Chinese Room or Turing's imitation game through Realization Physicalism. This assumes that if something x has some mental property M (or is in mental state M) at time t, then x is a physical thing and x has M at t in virtue of the fact that x has at t some physical property P that realizes M in x.
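
Rendered a little more formally (this symbolization is my own paraphrase of Kim's schema, not his notation):

```latex
% Realization Physicalism, paraphrased as a schema.
% M = a mental property, P = a physical property, t = a time.
\forall x\,\forall t\,\big[\, M(x,t) \rightarrow
    \mathrm{Physical}(x) \wedge
    \exists P\,\big( P(x,t) \wedge \mathrm{Realizes}(P, M, x) \big) \,\big]
```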

As a premise, anything that exhibits mentality must be a physical system, and every mental property must be physically based. This also allows for the multiple realization of mental properties in different systems.

This relates well to functionalism, which defines mental states by their observable function, not by their underlying workings. Hence, a mousetrap must trap mice, but the term says nothing about how the trap works. Likewise, on functionalism, desires issue in overt behavior only when combined with appropriate beliefs.
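
Purely as an illustration (the analogy and all names here are mine, not from the readings), this definition by role resembles an interface in programming: the type fixes what a mousetrap must do, while very different mechanisms can realize it.

```python
from abc import ABC, abstractmethod

class MouseTrap(ABC):
    """A functional role: whatever traps mice counts as a mousetrap,
    regardless of its inner workings (multiple realizability)."""

    @abstractmethod
    def trap(self, mouse: str) -> bool:
        ...

class SpringTrap(MouseTrap):
    def trap(self, mouse: str) -> bool:
        # Realized mechanically, by a spring-loaded bar.
        return True

class GlueTrap(MouseTrap):
    def trap(self, mouse: str) -> bool:
        # Realized chemically, by an adhesive surface.
        return True

# Two different physical realizers occupy one and the same functional role.
traps: list[MouseTrap] = [SpringTrap(), GlueTrap()]
print(all(t.trap("mouse") for t in traps))  # True
```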

The Chinese Room is an extension of Turing's imitation game, in which subjects who cannot see each other communicate via written symbols. One of the subjects is a machine, and the question is whether the others can distinguish the machine's responses from the human's. Turing does not mean to show that machines think like humans, but holds that passing the test shows that thinking is present, which fits well with Kim's setup.

The Chinese Room experiment, then, is concerned less with consciousness than with intentionality. The thought experiment has the subject hold a large instruction book (or database) in which inputs and outputs in an unknown language are mapped. Notes are passed to the subject, and by matching the signs on the notes against the mappings and returning the mapped responses, the subject is able to communicate in a foreign language without actually being a speaker of that language. To Searle, this means that the subject, or the machine, has fooled us by imitating a speaker of the language, while no understanding of the language is present.
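
A minimal sketch of such a rulebook (the symbol pairs are invented placeholders, not from Searle): the lookup produces plausible answers while the program never represents what any symbol means.

```python
# A toy rulebook: input symbols mapped to output symbols.
# The entries are arbitrary placeholders; nothing in the program
# encodes what the symbols mean.
RULEBOOK = {
    "你好吗": "我很好，谢谢",
    "你叫什么名字": "我叫小明",
}

def chinese_room(note: str) -> str:
    """Return the mapped response, or a stock fallback symbol string.
    Pure symbol manipulation: syntax in, syntax out, no semantics."""
    return RULEBOOK.get(note, "对不起，我不明白")

print(chinese_room("你好吗"))  # looks fluent to an outside observer
```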

One objection, the systems reply, holds that while the subject does not understand the language, the subject is simply one part of a bigger system which does. Searle would counter that even if the subject memorized all the mappings (i.e. internalized the whole system), the subject would still not know the language. A larger view of the system would include structures we are unaware of, much as the workings of our inner organs lie outside our knowledge and control yet belong to our overall bodily system. On that view, the system understands the language even if the subject does not.

The Luminous Room analogy (the Churchlands' reply) holds that human intuition, given its limitations, may simply be unable to recognize that an actual understanding of language is going on.

The robot reply holds that a system would need to interact with the world around it in order to understand. But to Searle, even if the room were hooked up to sensors and effectors, the perceptual inputs would arrive as just more symbols, and the system would still merely be manipulating symbols which lack any meaning to it.

As Dennett notes, we run into a lot of difficulties because we have no clear or coherent understanding of what we mean by intelligence. My issue with most of the arguments here, which seem rather functionalist, is that they talk past Searle, since they take entirely different approaches to intelligence and language. In a way, we have not done our homework yet. I would agree with Searle that the symbols have no meaning to the subject, at least none that relates to the language. But is this not perhaps a question of defining different types of intelligence for different subjects? That is, it may not be appropriate to compare computer intelligence with human intelligence at all. This is the argument I find most appealing, since it leaves room for what we would view as understanding a language, which is more than a mapping, while also allowing for other types or modes of intelligence, and it may satisfy several positions if defined more clearly. This could, however, become a problem for functionalism: a mental state is realizable in several systems, but each system's makeup gives the result a different quality.

Another complication is that when Searle says the subject does not understand the language, we are left without any theory of what a language actually is and what it is to speak or know one. I assume it can be argued that language is a system beyond the individual, so that the subject in the room is communicating in a proper way even though the language is not manifest in the individual. Again, we seem to talk past each other. On a further note, the subject may correspond using the symbols correctly without knowing the language, yet may have created individual meanings for them (hey, this looks like a spear, this will now be the spear-sign), but is a native speaker's understanding of language not also partially individual?

  • Kim, Jaegwon (2011) “Mind as Computer: Machine Functionalism,” chapter 5 in: Philosophy of Mind. Routledge.
  • Kind, Amy (2020) “Machine Minds,” chapter 5 in: Philosophy of Mind. Routledge.
  • Dennett, Daniel C. (1984) “Cognitive Wheels: The Frame Problem of AI,” in: C. Hookway (ed.) Minds, Machines and Evolution. Cambridge University Press.
