Dennett’s main arguments and observations concern the frame problem, which on one hand represents one of the core problems of AI, but on a deeper level raises the epistemological question of how the human mind works and what human intelligence is. Dennett holds that it is about time this question is actively raised, also by philosophers.
Central to the frame problem is the question of what information is relevant in any given situation: how is the human subject able to perform all of its daily, non-spectacular tasks? An intelligent agent, for Dennett, must engage in swift, information-sensitive “planning” that produces reliable but not foolproof expectations of the effects of its actions. The robots Dennett describes fail to live up to this criterion because they lack the right, relevant, and updated knowledge.
AI systems, then, are different from human subjects, who learn some basic things during their upbringing and are born with certain prerequisites, whereas AI systems start at [NULL]. To solve this, one could install whatever knowledge is needed. The semantic problem of this approach asks just what information must be installed. The frame problem, however, is not the problem of induction or of truth value: an agent could believe everything it needs to believe about an empirical matter and still be unable to represent it in the right way or to make use of it. Hence, what must be known is relative to the situation at hand. The syntactic problem concerns the logic of how that information is stored. This raises difficulties both in storing all those bits of information and in how a system calls upon only the relevant bits of data, in a fitting order, without the processing taking forever. A system must also ignore most of what it knows and operate with a small, well-chosen set of relevant bits of knowledge, chosen without exhaustive consideration. Though a self-learning system, for instance, may no longer require that every bit of information be programmed manually and can instead learn empirically, such systems have so far proven efficient only within certain restricted domains and have not become general problem solvers.
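The combinatorial worry behind the syntactic problem can be made concrete with a small sketch. The fact store, the topic index, and the toy queries below are hypothetical illustrations, not anything from Dennett's text: a system that must test every stored fact for relevance pays a cost that grows with everything it knows, whereas an agent somehow retrieves only the pertinent few.

```python
# Hypothetical illustration of the syntactic frame problem (toy facts
# loosely echoing Dennett's robot, battery, and bomb-on-a-wagon example).
from collections import defaultdict

facts = [
    ("bomb", "is on the wagon"),
    ("battery", "is on the wagon"),
    ("wagon", "can be pulled"),
    ("ceiling", "is painted white"),
    ("rain", "makes streets wet"),
]

# Exhaustive consideration: touch every fact, keep the relevant ones.
# The cost scales with everything the system knows.
def relevant_exhaustive(topic):
    return [f for f in facts if f[0] == topic]

# Indexed retrieval: facts filed by topic in advance, so a query
# touches only the relevant entries.
index = defaultdict(list)
for subject, predicate in facts:
    index[subject].append((subject, predicate))

def relevant_indexed(topic):
    return list(index[topic])

print(relevant_exhaustive("wagon"))  # scans all five facts
print(relevant_indexed("wagon"))     # touches one entry
```

Note that the index does not dissolve the problem so much as relocate it: someone must decide in advance how to file the facts, and which filing scheme matters depends on the situation at hand, which is the frame problem over again.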
The term “cognitive wheel” summarizes the sceptical view of any unbiological system that mimics a biological one. Though the phrase can be applied broadly, for Dennett it poses the question of at which level of description below the phenomenological level the model’s features map onto what is being modeled, and what implications this has for the likeness or difference, copy or simulation, of human intelligence. One fear is that a system which mimics human intelligence may in fact tell us very little about human intelligence itself.
Dennett, Daniel C. (1984) “Cognitive Wheels: The Frame Problem of AI” in: C. Hookway (ed.) Minds, Machines and Evolution. Cambridge University Press.