General Intelligence

A machine with general intelligence can solve a wide variety of problems with a breadth and versatility similar to human intelligence. There are several competing ideas about how to develop artificial general intelligence. Hans Moravec and Marvin Minsky argue that work in different individual domains can be incorporated into an advanced multi-agent system or cognitive architecture with general intelligence. Pedro Domingos hopes that there is a conceptually straightforward, but mathematically difficult, “master algorithm” that could lead to AGI. Others believe that anthropomorphic features like an artificial brain or simulated child development will someday reach a critical point where general intelligence emerges.

Artificial general intelligence (AGI) is the hypothetical ability of an intelligent agent to understand or learn any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and futures studies. AGI is also referred to as strong AI, full AI, or general intelligent action (although some academic sources reserve the term “strong AI” for computer programs that experience sentience or consciousness).


In contrast to strong AI, weak AI or “narrow AI” is not intended to have general cognitive abilities; rather, weak AI is any program that is designed to solve exactly one problem. (Academic sources reserve “weak AI” for programs that do not experience consciousness or do not have a mind in the same sense people do.)

 

Characteristics of General Intelligence

Various criteria for intelligence have been proposed (most famously the Turing test), but to date there is no definition that satisfies everyone.


However, there is wide agreement among artificial intelligence researchers that intelligence is required to do the following:

  • reason, use strategy, solve puzzles, and make judgments under uncertainty
  • represent knowledge, including common sense knowledge
  • plan
  • learn
  • communicate in natural language,

and integrate all these skills towards common goals. Other important capabilities include:

  • input, as the ability to sense (e.g. to see or hear), and
  • output, as the ability to act (e.g. to move and manipulate objects, or to change one’s own location to explore)

in this world where intelligent behaviour is to be observed. This would include an ability to detect and respond to hazards. Many interdisciplinary approaches to intelligence (e.g. cognitive science, computational intelligence and decision making) tend to emphasise the need to consider additional traits such as imagination (taken as the ability to form mental images and concepts that were not programmed in) and autonomy.

Computer-based systems that exhibit many of these capabilities do exist (see, for example, computational creativity, automated reasoning, decision support systems, robotics, evolutionary computation, and intelligent agents), but no one has yet created an integrated system that excels at all of these areas.
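As a purely illustrative sketch of what integrating these capabilities towards common goals might look like (the class, method, and environment names below are hypothetical and not taken from any existing system), the traits above can be framed as a classic sense-plan-act agent loop in Python:

    from abc import ABC, abstractmethod
    from typing import Any, List

    class GeneralAgent(ABC):
        """Illustrative interface only: each method stands in for one of the
        capabilities listed above (sensing, knowledge and reasoning, planning,
        learning, acting)."""

        @abstractmethod
        def perceive(self, observation: Any) -> None:
            """Input: take in sensory data (see, hear, ...) and update internal knowledge."""

        @abstractmethod
        def plan(self, goal: Any) -> List[Any]:
            """Reason over the represented knowledge and return a sequence of actions."""

        @abstractmethod
        def act(self, action: Any, environment: Any) -> Any:
            """Output: carry out an action in the world and return the new observation."""

        @abstractmethod
        def learn(self, outcome: Any) -> None:
            """Adjust internal knowledge based on the outcome of acting."""

    def run_episode(agent: GeneralAgent, environment: Any, goal: Any, steps: int = 100) -> None:
        # A sense-plan-act loop that integrates the individual skills towards one goal.
        observation = environment.reset()            # hypothetical environment API
        for _ in range(steps):
            agent.perceive(observation)
            for action in agent.plan(goal):
                observation = agent.act(action, environment)
            agent.learn(observation)

Existing systems implement some of these methods well in isolation; the open problem is a single agent that implements all of them, at a human level, within the same loop.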

 

AI-complete problems

There are many individual problems that may require general intelligence if machines are to solve them as well as people do. For example, even a specific, seemingly straightforward task like machine translation requires that a machine read and write in both languages (natural language processing), follow the author’s argument (reasoning), know what is being talked about (knowledge), and faithfully reproduce the author’s original intent (social intelligence). All of these problems need to be solved simultaneously in order to reach human-level machine performance.

A problem is informally known as “AI-complete” or “AI-hard” if solving it requires the general aptitude of human intelligence, or strong AI, and is beyond the capabilities of a purpose-specific algorithm. AI-complete problems are hypothesised to include general computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real-world problem.

AI-complete problems cannot be solved with current computer technology alone and require human computation. This property could be useful, for example, to test for the presence of humans, as CAPTCHAs aim to do, and in computer security to repel brute-force attacks.
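As a toy illustration of the CAPTCHA idea, the sketch below (hypothetical code using only Python’s standard library) generates a random challenge and checks the response; a real CAPTCHA would render the challenge as a distorted image or audio clip, so that reading it is easy for a human but hard for a purpose-specific program:

    import random
    import string

    def make_challenge(length: int = 6) -> str:
        """Generate a random challenge string. A production CAPTCHA would render
        this as a distorted image rather than showing it as plain text."""
        alphabet = string.ascii_uppercase + string.digits
        return "".join(random.choice(alphabet) for _ in range(length))

    def verify(challenge: str, response: str) -> bool:
        """Accept the response only if it matches the challenge (case-insensitive)."""
        return response.strip().upper() == challenge

    if __name__ == "__main__":
        challenge = make_challenge()
        print(f"Type the following text to prove you are human: {challenge}")
        print("Verified." if verify(challenge, input("> ")) else "Verification failed.")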

 

Brain Simulation

A widely discussed approach to achieving general intelligent action is whole brain emulation. A low-level brain model is built by scanning and mapping a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a simulation model so faithful to the original that it behaves in essentially the same way as the original brain, or, for all practical purposes, indistinguishably from it.

Whole brain emulation is discussed in computational neuroscience and neuroinformatics, in the context of brain simulation for medical research purposes. It is discussed in artificial intelligence research as an approach to strong AI. Neuroimaging technologies that could deliver the necessary detailed understanding are improving rapidly, and futurist Ray Kurzweil in the book The Singularity Is Near predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.
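To give a rough sense of the kind of low-level model such an emulation would scale up, the sketch below simulates a single leaky integrate-and-fire neuron, a standard simplified neuron model in computational neuroscience. The parameter values are illustrative only, and whole brain emulation would require far more detailed models of billions of interconnected neurons:

    def simulate_lif(input_current: float = 1.5, dt: float = 0.1, steps: int = 1000,
                     tau: float = 10.0, v_rest: float = 0.0,
                     v_threshold: float = 1.0, v_reset: float = 0.0) -> list:
        """Return spike times (ms) of one leaky integrate-and-fire neuron driven by
        a constant input current. Parameter values are illustrative, not calibrated."""
        v = v_rest
        spike_times = []
        for step in range(steps):
            # The membrane potential decays towards rest and is driven by the input.
            v += (-(v - v_rest) + input_current) * (dt / tau)
            if v >= v_threshold:            # threshold crossed: the neuron spikes
                spike_times.append(step * dt)
                v = v_reset                 # reset after the spike
        return spike_times

    if __name__ == "__main__":
        spikes = simulate_lif()
        print(f"{len(spikes)} spikes in 100 ms of simulated time: {spikes}")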
