I’m not a great fan of Kevin Kelly’s brand of futurism, but this is an excellent essay by him on the problems that arise when thinking about artificial intelligence begins with what the Marxists used to call “false reification”: the belief that intelligence is a bounded and unified concept that functions like a thing. Or, to put Kelly’s point a different way, it is an error to think that human beings exhibit a “general purpose intelligence,” and therefore an error to expect that artificial intelligences will do the same.

Kelly opposes this reifying orthodoxy in AI research with five affirmations of his own:

  1. Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.
  2. Humans do not have general purpose minds, and neither will AIs.
  3. Emulation of human thinking in other media will be constrained by cost.
  4. Dimensions of intelligence are not infinite.
  5. Intelligences are only one factor in progress.

Expanding on that first point, Kelly writes,

Intelligence is not a single dimension. It is a complex of many types and modes of cognition, each one a continuum. Let’s take the very simple task of measuring animal intelligence. If intelligence were a single dimension we should be able to arrange the intelligences of a parrot, a dolphin, a horse, a squirrel, an octopus, a blue whale, a cat, and a gorilla in the correct ascending order in a line. We currently have no scientific evidence of such a line. One reason might be that there is no difference between animal intelligences, but we don’t see that either. Zoology is full of remarkable differences in how animals think. But maybe they all have the same relative “general intelligence?” It could be, but we have no measurement, no single metric for that intelligence. Instead we have many different metrics for many different types of cognition.

Think, to take just one example, of the acuity with which dogs observe and respond to a wide range of human behavior: they attend to tone of voice, facial expression, gesture, even subtle forms of body language, in ways that animals invariably ranked higher on what Kelly calls the “mythical ladder” of intelligence (chimpanzees, for instance) are wholly incapable of. But dogs couldn’t begin to use tools the way that many birds, especially corvids, can. So which is more intelligent: a dog, a crow, or a chimp? It’s not really a meaningful question. Crows and dogs and chimps are equally well adapted to their ecological niches, but in very different ways that call forth very different cognitive abilities.
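The logic here can be made concrete. If cognitive ability is a profile of scores on several distinct dimensions rather than a single number, then “smarter than” is at best a partial order: one profile outranks another only when it is at least as strong on every dimension, and profiles that trade strengths against each other are simply incomparable. Here is a minimal sketch in Python; the three dimensions and all of the scores are invented purely for illustration, not drawn from any real measurements.

```python
# Sketch of Kelly's first point: if intelligence is a vector of scores
# on distinct cognitive dimensions rather than a single number, then
# "smarter than" is only a partial order, and many pairs of profiles
# are incomparable. All dimensions and scores below are made up.

# Hypothetical profiles on three invented dimensions (0-10 scales).
profiles = {
    "dog":   {"social": 9, "tools": 2, "spatial": 5},
    "crow":  {"social": 4, "tools": 9, "spatial": 7},
    "chimp": {"social": 7, "tools": 7, "spatial": 6},
}

def dominates(a: dict, b: dict) -> bool:
    """True if a scores at least as high as b on every dimension,
    and strictly higher on at least one."""
    return (all(a[k] >= b[k] for k in a)
            and any(a[k] > b[k] for k in a))

names = list(profiles)
for i, x in enumerate(names):
    for y in names[i + 1:]:
        if dominates(profiles[x], profiles[y]):
            print(f"{x} dominates {y}")
        elif dominates(profiles[y], profiles[x]):
            print(f"{y} dominates {x}")
        else:
            print(f"{x} and {y} are incomparable: no linear ranking")
```

Run with these made-up numbers, every pair comes out incomparable, which is just Kelly’s point in miniature: only by collapsing the dimensions into a single score, an arbitrary weighting, do you get a line to arrange the animals on.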

If Kelly is right, then AI research will be hamstrung by its commitment to g or “general intelligence,” and will only be able to produce really interesting and surprising intelligences when it abandons the idea, as Stephen Jay Gould puts it in his flawed but still-valuable The Mismeasure of Man, that “intelligence can be meaningfully abstracted as a single number capable of ranking all people [including digital beings!] on a linear scale of intrinsic and unalterable mental worth.”

“Mental worth” is a key phrase here, because a commitment to g has historically been associated with explicit scales of personal value and commitment to social policies based on those scales. (There is of course no logical link between the two commitments.) Thus the argument frequently made by eugenicists a century ago that those who score below a certain level on IQ tests — tests purporting to measure g — should be forcibly sterilized. Or Peter Singer’s view that he and his wife would be morally justified in aborting a Down syndrome child simply because such a child would probably grow up to be a person “with whom I could expect to have conversations about only a limited range of topics,” which “would greatly reduce my joy in raising my child and watching him or her develop.” A moment’s reflection should be sufficient to dismantle the notion that there is a strong correlation between, on the one hand, intellectual agility and verbal fluency and, on the other, moral excellence; which should also undermine Singer’s belief that a child who is deficient in this imagined general intelligence is ipso facto a person he couldn’t “treat as an equal.” But Singer never gets to that moment of reflection, because his rigid and falsely reified model of intellectual ability, and of the relations between intellectual ability and personal value, disables his critical faculties.

If what Gould in another context called the belief that intelligence is “an immutable thing in the head” which allows “grading human beings on a single scale of general capacity” is both erroneous and pernicious, it is somewhat disturbing to see that belief not only continuing to flourish in some communities of discourse but also being extended into the realm of artificial intelligence. If digital machines are deemed superior to human beings in g, and if superiority in g equals greater intrinsic worth… well, the long-term prospects for what Greg Egan calls “fleshers” aren’t great. Unless you’re one of the fleshers who control the machines. For now.

P.S. I should add that I know that people who are good at certain cognitive tasks tend to be good at other cognitive tasks, and also that, as Freddie DeBoer points out here, IQ tests — that is, tests of general intelligence — have predictive power in a range of social contexts, but I don’t think any of that undermines the points I’m making above. Happy to be corrected where necessary, of course.
