
Goertzel, B. (2007). Human-level artificial general intelligence and the possibility of a technological singularity. A reaction to Ray Kurzweil's The Singularity Is Near, and McDermott's critique of Kurzweil. Artificial Intelligence, 171, 1161-1173.

@Article{Goertzel2007,
  author  = {Ben Goertzel},
  title   = {Human-level artificial general intelligence and the possibility
             of a technological singularity. A reaction to Ray Kurzweil's
             The Singularity Is Near, and McDermott's critique of Kurzweil},
  journal = {Artificial Intelligence},
  year    = {2007},
  volume  = {171},
  pages   = {1161--1173},
}

Author of the summary: Martin Proulx, 2007, martinproulx22@hotmail.com

The full paper is available online from the publisher (authentication required).

The AI field has gone through periods of growth and decline. [p1161] This is partly due to overoptimistic promises by AI researchers and to the difficulty of modeling basic human capabilities (e.g., perception).

Most current AI research focuses on narrowly defined problem domains ("narrow AI"). [p1162] But recent optimism about AI can be seen in the growing number of conferences on "Human-Level AI", and futurist pundits like Ray Kurzweil are making more and more predictions about the future of AI.

Kurzweil distinguishes between "narrow AI" and "strong AI" (AI that matches or exceeds human intelligence). The author prefers the term "Artificial General Intelligence" (AGI) to "strong AI", since the latter invites confusion with Searle's philosophical usage. He also prefers it to "Human-Level AI", since human minds may not be the smartest possible minds, and defining what counts as the "human level" is hard. [p1163]

AGI should not be at the margins of AI research, because medium- and long-term views can be important in science (e.g., physics and unified theories, gene-therapy-based medicine, quantum computing). Progress on AGI in the near future is reasonably likely.

Making long-term predictions about AGI is hard. Scenario analysis is well suited to high levels of uncertainty: it allows one to lay out a series of specific future scenarios for a complex system. [p1164] Teams of AI researchers, technology pundits, and social and business leaders can use their collective intuition, knowledge, and expertise to flesh out a variety of scenarios.

Here are some plausible categories of scenarios:

  1. Steady Incremental Progress Scenarios
  2. Dead-End Scenarios
  3. AGI-Based Singularity Scenarios
  4. Skynet Scenario [p1165]
  5. Kurzweil Scenario
  6. Sysop Scenario
  7. AI Big Brother Scenario
  8. Singularity Steward Scenario [p1166]
  9. Coherent Extrapolated Volition Scenario

In his book, Kurzweil gives estimates of the probability of the Singularity scenario and an expected timeline. Since the human mind tends to be overconfident, these predictions would be more informative if they came with confidence intervals.
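
As an illustration of what this would look like (the example is mine, not from the paper, and the expert estimates below are invented), a point prediction can be paired with a percentile-based interval:

  import statistics

  # Hypothetical expert estimates (years) for human-level AGI -- illustrative only.
  estimates = [2029, 2035, 2040, 2045, 2050, 2060, 2070, 2080, 2100, 2150]

  point = statistics.median(estimates)
  cuts = statistics.quantiles(estimates, n=20)  # cut points at 5% steps
  low, high = cuts[0], cuts[-1]                 # rough 5th/95th percentiles

  print(f"Point prediction: {point:.0f}")           # 2055
  print(f"90% interval:     {low:.0f}-{high:.0f}")

A wide interval makes the forecaster's uncertainty explicit, whereas a bare date hides it.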

Critique of Kurzweil by McDermott and replies by Goertzel:

  1. Kurzweil does not give any proof that an AI Singularity is upon us.
  2. Even if we succeed in scanning the brain into a computer, we still won't understand human intelligence. [p1167]
  3. Kurzweil says that machines will augment their capacities without limit, but this is unrealistic.

Kurzweil's route toward Singularity-enabling AGI can be summed up as scanning human brains, creating brain emulations, studying these emulations, and creating AGI systems capable of self-improvement.

The author considers virtual-world embodiment to be an alternative path to human-level AGI: AI learning systems could be coupled to agents embodied in online virtual worlds. [p1170]

For example, talking parrots with their own individual knowledge and habits could be added to Second Life. They could have an adaptive language learning algorithm with contextual learning of language rules and constructs, and motivational factors for correct language use would be built in. They would learn through interactions with thousands of human avatars and benefit from the "wisdom of crowds".
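
The paper sketches this only at the idea level. As a minimal illustration of such a reward-driven learner (the class name, scoring scheme, and reward values are assumptions of this summary, not Goertzel's design), the parrot could reinforce utterances that earn positive feedback in a given context:

  import random
  from collections import defaultdict

  class ParrotLearner:
      """Toy contextual language learner: reinforces utterances that
      draw positive feedback from avatars in a given context.
      Illustrative only; not the architecture from the paper."""

      def __init__(self):
          # scores[context][utterance] -> running reward estimate
          self.scores = defaultdict(lambda: defaultdict(float))

      def respond(self, context, epsilon=0.1):
          """Say the best-scoring utterance for this context, or explore
          (return None so the caller imitates a novel utterance)."""
          options = self.scores[context]
          if not options or random.random() < epsilon:
              return None
          return max(options, key=options.get)

      def reinforce(self, context, utterance, reward, rate=0.3):
          """Nudge the utterance's score toward the observed reward
          (praise from an avatar = +1, correction = -1)."""
          old = self.scores[context][utterance]
          self.scores[context][utterance] = old + rate * (reward - old)

  parrot = ParrotLearner()
  parrot.reinforce("greeting", "hello!", reward=1.0)   # an avatar praised it
  parrot.reinforce("greeting", "squawk", reward=-1.0)  # an avatar corrected it
  print(parrot.respond("greeting", epsilon=0.0))       # -> hello!

Scaled up, the reward signal would come from thousands of avatars praising, correcting, or ignoring the parrot, which is where the "wisdom of crowds" comes in.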

Humanoid avatars could also be used (e.g., a baby avatar). [p1171] They could adaptively explore their online virtual worlds and gather information according to their goals, using language. Although they would lack the initial set of biases that human infants possess, they would benefit from having thousands of teachers.
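
Again purely as a sketch of the idea (the toy world and the keyword-based goal matching below are invented for illustration), goal-driven exploration of a virtual world might look like:

  import random

  # Toy virtual world: location -> (observable facts, reachable neighbors).
  # Entirely hypothetical; stands in for a world like Second Life.
  WORLD = {
      "garden":  (["flower is red", "ball is round"], ["house"]),
      "house":   (["door is open"], ["garden", "library"]),
      "library": (["book about language", "book about goals"], ["house"]),
  }

  def explore(goals, start="garden", steps=20):
      """Wander the world, keeping only facts relevant to the agent's goals."""
      knowledge, location = set(), start
      for _ in range(steps):
          facts, neighbors = WORLD[location]
          knowledge |= {f for f in facts if any(g in f for g in goals)}
          location = random.choice(neighbors)
      return knowledge

  print(explore(goals=["language", "door"]))
  # e.g. {'door is open', 'book about language'}

A real system would use language to ask avatars for information rather than simple keyword matching, but the goal-filtered accumulation of knowledge is the point.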

This vast learning by virtually-embodied AGI systems could be achieved sooner than full-scale human-brain emulation. As the systems become progressively more intelligent, they would become more integrated into our social networks. This could correspond to the "AI Big Brother" or the "Singularity Steward" scenarios, in which AGI systems interact closely with human society. [p1172]

In conclusion, the author credits Kurzweil with getting the public enthused about the possibilities of AGI, but finds him too confident in his predictions, unlike Vernor Vinge, who emphasizes the unknowability of what is to come. Nonetheless, the Singularity hypothesis is a serious one: creating AGI will be difficult, but it might be achievable within our lifetimes.

