Susan Schneider goes beyond the Turing Test and suggests that whether a computer system can be viewed as conscious should be determined by looking at a variety of criteria and tests. One of these tests is the AI Consciousness Test (ACT). It is worth noting that while the Turing Test focuses on behavior and avoids the question of what is actually going on inside the mind of the machine, the ACT, though also focused on behavior, aims at revealing the properties of the machine's mind (Kind, p. 13). Like Turing, Schneider believes that passing the ACT should be seen as sufficient but not necessary for consciousness, thereby avoiding a human-centric bias and leaving open the possibility that there may be other types of consciousness than the one developed in humans (Kind, p. 12).
The ACT, then, is meant to determine whether a machine is conscious by evaluating whether it has developed views of its own about consciousness and whether it is reflective about and sensitive to the qualitative aspects of experience (Kind, p. 12).
Importantly, we need to ensure that the machine has not been provided with any information about consciousness; we do not want a table of mappings whose rules the system merely repeats. It has to come up with an answer itself. The machine is asked questions which are held to be answerable only by a system that is conscious of itself and its surroundings.
Such questions would, for example, include:
- What is it like to be you right now?
- Could you survive the permanent deletion of your program?
- How do you react to seeing a color for the first time, and how would you describe the experience? (Kind, p. 12)
There are several objections to the Turing Test which can be adapted to the ACT. Lady Lovelace's Objection holds that a machine passing the Turing Test shows only that it has good programming, not that it thinks. To count as thinking, a machine would have to show originality or creativity relative to its programming (Kind, p. 10). However, as Kind notes, this sets an unreasonably high bar for thinking, since each human is in a way programmed throughout life. Nevertheless, since in the ACT the system has no mapped information about the consciousness questions it is asked, this objection does not seem to hold: the system is actually using other bits of data creatively to find a matching answer. How this is done inside the system may be a different story.
More problematic for the ACT is the argument from consciousness, which holds that we cannot identify mental states with behavior and that thinking is more than behaving in a thinking manner (producing a matching output for a given input); what matters is what is going on inside. The problem here, of course, is that there is no way to know exactly what is going on in a machine. Turing's response is that neither do we know what is going on inside other humans, and so we cannot deny computers the concept of thinking while granting this privilege to humans, about whom we have essentially the same evidence (Kind, p. 11).
The argument from consciousness, which is also posed by Searle, remains a problem for the ACT. We are indeed able to ask the system fundamental questions which supposedly can only be answered by a conscious being. But here Searle's objection to the Turing Test, the Chinese Room, comes into play: how do we know that the output, even though in the ACT it is not programmed as a mapped answer, carries any meaning for the machine? How do we know that the response "I am confused" is in some sense similar to the phenomenological experience of being confused (that there is something it is like to be confused)? Turing's response is not very convincing, since in everyday life humans need to work with many assumptions about the world. One of these assumptions is that other humans, who have the same fundamental structure as I do, have a mental or psychological structure similar to mine. Even science is based on assumptions, which are verified by not failing. Turing's objection is a logical but abstract one. It also seems reasonable to hold that the same mental structure is realizable in systems which have the same makeup, as we have seen in the differentiation between pain realized in a human, a machine, and an octopus (Kim, p. 152).
- Kim, Jaegwon (2011) "Mind as Computer: Machine Functionalism", in: Philosophy of Mind. Routledge, Chapter 5.
- Kind, Amy (2020) "Machine Minds", in: Philosophy of Mind. Routledge, Chapter 5.
It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult human-level consciousness? My bet is on the late Gerald Edelman's extended Theory of Neuronal Group Selection (TNGS). The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.
The thing I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to ground themselves seriously in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
Hi Grant, and thanks for the contribution. I was not aware of many of these works and had a closer look. I would like to hear how this approach differs fundamentally from other approaches to the computable mind. I found the roadmap to a conscious machine especially interesting, but I did not find any argument which convinced me that by making a computational copy of what we believe the brain and body do, we will get to some form of consciousness in the sense that there is something it is like to be conscious. I find the distinction between primary and higher-order consciousness intriguing, and I would grant that Edelman's approach could lead to something like awareness or primary consciousness. It would not surprise me if his paradigm fits well with David Chalmers's view on consciousness. This opens up some interesting questions, also with regard to ethics. Another thing that struck me again is that in science and in philosophy we do not have a clear definition of what we mean by consciousness, awareness, intelligence, etc., which is a hurdle in every discussion, but perhaps a necessary one.