Functionalism is the philosophical paradigm on which machine functionalism rests, and it is easy to see why it would seem attractive to computational views of the mind. Functionalism starts off with Realization Physicalism:
If something x has some mental property M (or is in mental state M) at time t, then x is a physical thing and x has M at t in virtue of the fact that x has at t some physical property P that realizes M in x at t. (Kim, p. 130)
Hence, anything that exhibits mentality must be a physical system. Furthermore, every mental property is physically based; each occurrence of a mental property is due to the occurrence of a physical realizer of that mental property (Kim, p. 131). This relates to the second theme of functionalism, the multiple realization of mental properties, which holds that different physical systems can realize the same mental property. The broadly behavioristic element that the term functionalism implies is that mental concepts are defined by their function, not by the realizing system in the background. As an example, an engine may be constructed using various techniques, but all engines perform the same basic job. For functionalism, what binds the multiple realizations of a mental concept together is thus sought at a causal-functional level. Hence, the concept of pain is defined in terms of its function as a causal intermediary between typical pain inputs and typical pain outputs (Kim, p. 133). It is also important that the causal conditions activating mental mechanisms can include other mental states, and that the outputs of mental mechanisms can include mental states as well (Kim, p. 134). This holistic approach hence views mental events as both causes and effects within a complex causal network that takes input from the outer world and converts it into a fitting output (Kim, p. 138).
At this point it is easy to see why functionalism lends itself to computational views of the mind and to machine functionalism in particular. On the one hand, there is the conception of a mental state as occupying a specific causal role in a network, which, if it can be defined or formalized, can also be computed. On the other hand, there is the idea of the multiple realization of internal states: just as vastly different biological systems can realize the same cognitive processes, different computer systems should be able to execute the same program. Machine functionalists hence think of the mind as a Turing machine: for something to have mentality is for it to be a physical realization of an appropriate Turing machine, with its mental states identified with the realizers of the internal states specified in the machine's instructions (Kim, p. 148).
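To make this picture concrete, consider the following minimal sketch (not taken from Kim; the unary-successor machine and the Python runner are illustrative assumptions). The machine table defines its internal states ("scan", "halt") purely by what they do with inputs and with each other; any system that implements this table, whether the Python interpreter below, dedicated hardware, or a person working with pencil and paper, counts as a realization of one and the same machine.

```python
# Illustrative sketch only: a tiny Turing machine table for the unary
# successor function ("append one stroke"). The table is the functional
# description; whatever runs it is merely one of its possible realizers.

# Machine table: (state, symbol) -> (new_state, symbol_to_write, head_move)
# On the machine functionalist picture, these state labels play the role
# that mental-state terms play in a psychology: they are defined by the
# transitions they enter into, not by what physically implements them.
TABLE = {
    ("scan", "1"): ("scan", "1", +1),   # skip over existing strokes
    ("scan", "_"): ("halt", "1", 0),    # write one more stroke, then halt
}

def run(table, tape, state="scan", head=0):
    """Run a machine table on a tape (string of symbols) until it halts."""
    cells = list(tape)
    while state != "halt":
        symbol = cells[head] if head < len(cells) else "_"   # blank past the end
        state, write, move = table[(state, symbol)]
        if head == len(cells):
            cells.append(write)
        else:
            cells[head] = write
        head += move
    return "".join(cells)

print(run(TABLE, "111"))   # -> "1111": same table, whatever realizes it
```

The point of the sketch is only that the functional description is realizer-neutral: nothing in the table mentions silicon, neurons, or paper, which is exactly the feature machine functionalism exploits when it identifies mental states with the internal states of such a machine.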
The question whether (machine) functionalism amounts to intelligence depends very much on what one means by intelligence. Some projects aim at producing appropriate outputs by whatever kind of system, while others are more concerned with creating something resembling the human mind. Even Turing noted that passing the Turing test is not necessary for thinking; it merely shows that it is sufficient for thinking to be going on. Hence, if by intelligence one means that certain inputs lead to fitting outputs in specific situations, a functionally organized system may indeed be called intelligent. After all, related approaches such as neural networks and weak AI seem to fare well with this strategy of structuring complex causal systems.
However, our conclusion is more complex if we take human intelligence as the reference. As Kim points out, on the machine functionalist account two systems share the same psychology only if they realize an identical Turing machine, that is, an identical total psychology. Yet it is hard to believe that a human, a machine, and an octopus share the same psychology (Kim, p. 152). Functionalists may answer that it is not necessary for the total psychologies to coincide, only that there is some Turing machine which covers a specific mental concept in both systems. This, however, leaves us with the practical problem of how to isolate, for example, a "pain psychology" from the entire psychology. A related issue concerns Hubert Dreyfus' observation that human intelligence is necessarily bound to the human body, and that what distinguishes persons from machines is precisely having an involved, situated, material body (Dreyfus, pp. 235-237). On this view, acting in the world is not a matter of storing and processing large chunks of information in the brain; it has more to do with practical skills of maneuvering on the fly without processing large amounts of data (Dreyfus, p. 260).
This also relates to Searle's Chinese Room, a thought experiment that builds on Turing's imitation game, and to the critique that even though a computer may be programmed so as to produce the same output as a human presumably would, the end product has no meaning to the machine; it may act as if it were speaking Chinese, but it has no idea of what language is, rendering it what Chalmers would call a zombie. Intelligence, it could be argued, is not just a matter of functions that produce some end result; there is something that it is like to feel pain, to speak a language, or to drink a cold beer (Chalmers, pp. 104, 295; Russell & Norvig, p. 1033). Chalmers, however, is no materialist, and for him the qualia of experience are not reducible to purely physical aspects (Chalmers, p. 26).
Chalmers, David J. (2010) The Character of Consciousness. Oxford University Press, New York.
Dreyfus, Hubert L. (1992) What Computers Still Can't Do: A Critique of Artificial Reason. The MIT Press, Cambridge, MA.
Kim, Jaegwon (2011) "Mind as Computer: Machine Functionalism", in Philosophy of Mind. Routledge, Chapter 5.
Russell, Stuart & Norvig, Peter (2016) Artificial Intelligence: A Modern Approach, 3rd ed. Pearson Education Limited, Harlow.