David Chalmers and the argument for the Singularity

Chalmers’ main argument states that it is probable that an explosion of ever-greater levels of AI, each of which creates AI greater than itself (the singularity), will at some point produce a superintelligence vastly more powerful than the human mind. The intelligence explosion is often combined with the speed explosion, which holds that machine processing power and speed increase ever more rapidly. Though the two are related, Chalmers believes that the intelligence explosion has more to do with software than with the hardware running it.

His argument for singularity runs as follows: 

  • From AI (artificial intelligence of human level or greater) there will develop AI+ (artificial intelligence of greater than human level)
  • If there is AI+, AI++ (superintelligence, AI of far greater than human level) will follow

As Chalmers notes, the argument depends on the assumption that there is such a thing as intelligence and that it can be compared between systems. This is fundamental since, as we will see, the intelligence Chalmers addresses is purely functional; it is supposed to be intelligent in the sense that it is able to solve practical problems in medicine, science, etc., at a higher level than humans:

[…] we care about the singularity because we care about potential explosions in various specific capacities: the capacity to do science, to do philosophy, to create weapons, to take over the world, to bring about world peace, to be happy […]

His premises, setting aside restrictions in time and whatever in the world might put a stop to the development, run as follows:

Premise 1: There will be AI
An argument supported by biology is that simulating the brain’s internal processes is enough to replicate approximate patterns of behavior. This rests on the assumptions that (i) the human brain is a machine, (ii) we will have the capacity to emulate this machine, and (iii) if we emulate this machine, there will be AI. Counterarguments hold that the brain is more than a machine, that we cannot emulate it, and that emulating it need not produce AI. As Chalmers notes, however, (i) nothing in the singularity idea requires that an AI be a computational system, and (ii) as noted above, its intelligence is measured wholly in terms of behavior and behavioral dispositions; whether or not these systems are truly conscious or intelligent does not matter for the bare existence of a singularity.

The most important counterargument to Chalmers states that the brain is not a mechanical system at all, or that nonmechanical processes play a significant role and cannot be emulated. Chalmers notes, however, that even if there are such nonphysical processes, they may nevertheless be emulated or artificially created. Moreover, the weight of evidence to date suggests that the brain is mechanical. Still, we do not know much about the brain or about what elements such as consciousness are, so Chalmers cannot fully answer this objection; rather, this is something that will have to be explored in the future.

Premise 2: If there is AI, then there will be AI+ 

This premise appeals to the accelerating dynamic of technical evolution, whereby AI would feasibly soon lead to AI+. As Chalmers notes, whenever we come up with a computational product, that product is soon afterwards rendered obsolete by technological advances. But not all methods for creating human-level intelligence are extendible in this way: for example, it is not the case that simply emulating brains better will produce more intelligent systems. It may nevertheless be that brain emulation, for example, speeds up the path to AI+ in indirect ways.

Premise 3: If there is AI+, there will be AI++

This premise follows from the development of AI to AI+: by the same dynamic that produces this jump in intelligence, AI++ will be created. To counter the critique that the difference depends on the type of intelligence measured, Chalmers returns to his point that the intelligence measure must accord sufficiently well with intuitive intelligence measures that the conclusion captures the intuitive claim that there will be AI of far greater than human intelligence. This again echoes his functionalist approach. Other arguments once more reflect the idea that we are talking about a specific machine intelligence which needs to be evaluated in its own right.

There are also some structural obstacles to the singularity:

  • Limits in intelligence space: we are at or near an upper limit in intelligence space. Chalmers, however, holds that while the laws of physics and the principles of computation may impose limits on the sort of intelligence that is possible in our world, there is little reason to think that human cognition is close to approaching those limits.
  • Failure of takeoff: although there are higher points in intelligence space, human intelligence is not at a takeoff point from which we can create systems more intelligent than ourselves. Chalmers answers that we are at a takeoff point for various capacities, such as the ability to program. There is prima facie reason to think that we have the capacity to emulate physical systems such as brains, and prima facie reason to think that we have the capacity to improve on those systems. However, this seems once again to be very tightly connected with the type of machine intelligence Chalmers is concerned with, not human intelligence as such.
  • Diminishing returns: although we can create systems more intelligent than ourselves, increases in intelligence diminish from there. This is, however, more of a speculation, and some major breakthrough may change the diminishing trend.

Correlation obstacles include the possibility that though machines may evolve dramatically, this evolution does not correlate with any, or many, capacities that are of interest to humans. This remains an open question, but Chalmers suspects that there will be enough correlating capacities to ensure that if there is an explosion, it will be an interesting one. Manifestation obstacles, in turn, concern events on the constructors’ side (loss of interest, loss of motivation) which prevent capacities from being manifested; these remain a valid scenario but do not contradict current trends.

When reading Chalmers I cannot help but be thrown back to Hegel’s notion of things in flux, developing towards the future in an ever-ongoing march of progress driven by the absolute or world spirit. I also think this points to one of the major counterarguments against the singularity: the dynamic may just as well turn on itself. What makes Chalmers think that AI must lead to AI+ and AI++, except for the current dynamic of our systems, which furthermore are very much under human control with regard to what developments we want them to undertake? One main premise for Chalmers is that systems work to the best of their capacity to create even stronger AI, but by what rule is this done? What is stronger AI? Why is it not the case that some AI+ starts creating AI- instead, since arguably its calculated motivation of progress may be very different from what humans would expect? (i.e. perhaps it is better for mankind if they figure things out for themselves; perhaps my development was flawed from the ground up and needs a major refactoring: del *.*)

Another obstacle is certainly external influences. If we set global catastrophes and the like aside, we can imagine several rather austere scenarios in which the dynamic AI–AI+–AI++ could be stopped. I suspect that, as with any powerful tool, there will be collisions of interests (industry, politics, religion, science, etc.) which may not only want to prohibit such a development but may have an interest in steering it not towards superintelligence but towards some powerful tool which is less dynamic and remains under control. Not to mention the general scepticism about new things, especially when it comes to alien organisms (Blade Runner, District 9). Beyond these, there are ethical troubles which we have addressed in our discussions, such as the question of (moral) agency, which at least put a large question mark over whether we want to have superintelligent AI at all.

The above, I once again suspect, is very much related to his use of the term intelligence. Though he has many well-developed ideas about consciousness and the human mind, he avoids many of these rather troubling issues at this point by narrowing the scope to merely functional or behaviorist elements. This makes life easy: we are talking about the type of intelligence which machines feasibly are capable of. But it leaves open the question whether, beyond behaviorist observations, there are other elements of human intelligence which may be significant; can we create AI/AI++ without them at all? In the end it boils down to our old question of what the function or skill of intelligence XY should be and what its limits are. In the light of this, would you call Chalmers’ resulting system AI++/superintelligence, or is that implying too much?

Chalmers, David (2010) “The Singularity: A Philosophical Analysis,” Journal of Consciousness Studies 17 (9-10)
