I sometimes come across articles like this, where things like the following are said:
He said that plants used information encrypted in the light to immunise themselves against seasonal pathogens.
"Every day or week of the season has… a characteristic light quality," Professor Karpinski explained.
"So the plants perform a sort of biological light computation, using information contained in the light to immunise themselves against diseases that are prevalent during that season."
Professor Christine Foyer, a plant scientist from the University of Leeds, made a similar point:
"Plants have to survive stresses, such as drought or cold, and live through it and keep growing," she told BBC News.
"This requires an appraisal of the situation and an appropriate response - that's a form of intelligence.
The move I want to highlight is in the last line: why insist that this is a form of intelligence? It seems to me that “intelligence” is a “status” term: when we apply it to something, we raise (or lower) the status of that something, without necessarily adding to our comprehension of the phenomenon (the recognition of this problem is very old; see my post below on Plato and the idea of distinguishing animals from humans through the criterion of “reason”). If the experiments are as described, it seems clear enough that the plants are performing some kind of computation; but in saying that they thereby have some kind of intelligence we do not learn anything more about the plants. At best, we learn only about an attempt to raise the status of the research – to make it seem “sexier,” perhaps – and of the researchers.
The same point could be applied to discussions of human and artificial intelligence. Human intelligence, in its many varieties, seems to be simply a form of computation, distinguishable in degree from other forms of computation but not necessarily in kind: it is a complex of abilities, more or less integrated, for pattern recognition, actuarial calculation of the costs and benefits of action, social relationship tracking, and the like, augmented by social rules and institutions (themselves the planned or unplanned emergent consequences of interaction). Some of these abilities can be more or less functionally replicated in modern computer hardware (e.g., the ability to play chess), whereas others cannot, at least not yet, or not with current techniques. Yet when we ascribe intelligence to people or things, we do more than express that they are able to engage in particular forms of computation. Rather, we are for the most part raising (or lowering) the status of whatever it is that we say has “intelligence.” A sign of this is the way in which the term is often used in a proprietary manner: people may object to descriptions of human intelligence that reduce it to computation, or to descriptions of artificial forms of computation as intelligence. Or people become hugely exercised about claims of intelligence differentials – either stressing that such differences exist or denying that they exist or matter.
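To make the replication point concrete, here is a toy sketch (mine, purely illustrative – nothing here comes from the research discussed above). Minimax game-playing is perhaps the clearest case of a “human” ability rendered as plain computation: the program below plays perfect tic-tac-toe by exhaustively scoring future positions, an actuarial calculation of the costs and benefits of each move.

```python
# Illustrative sketch: game-playing as plain computation.
# Minimax plays perfect tic-tac-toe by exhaustively scoring every
# reachable position; the game is small enough for a full search.

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move): X maximises the score, O minimises it."""
    w = winner(board)
    if w:
        return (1 if w == 'X' else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None  # board full: draw
    best = None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = None
        if best is None or (score > best[0] if player == 'X' else score < best[0]):
            best = (score, m)
    return best

if __name__ == "__main__":
    # Perfect play by both sides is a draw: score 0.
    print(minimax([None] * 9, 'X'))
```

Nobody is tempted to call these few dozen lines “intelligent”; yet scaled up (to chess, say), the same kind of search is exactly what prompts the status-raising vocabulary.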
The problem with ascribing intelligence to various computational systems seems to be less the idea that intelligence is a form of computation than the loss of status that accompanies the reduction: to say that human intelligence is merely a form of computation seems demeaning in some ways, since it diminishes the “scarcity” of intelligence as a distinctively human possession, or makes it less special. But if intelligence is computation, then it can be found, to different degrees of “intensity” and “specialization,” in all kinds of systems: plants, animals, insect colonies, social institutions, and of course suitably programmed computers. Some forms of computation in these systems might be more powerful, for particular purposes, than the grab-bag of abilities powered by the human brain; and some forms of computation may be more or less exploitable by individual human beings (your own brain power is usually exploitable, but the computational abilities of other systems may not be so easily exploitable by people in your position). Moreover, computation is always relative to problems: some systems are more effective at computing solutions to certain problems than others. And though there is such a thing as universal computation, the abstract notion of the universal Turing machine does not imply anything about the ability of universal computers to solve a particular problem successfully in a limited time frame – that always depends on the particular “program.” If the human brain seems unusual, it is because it appears able to solve a very large number of problems, either by itself or in conjunction with tools and institutions that amplify and distribute the computing power of human society.
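The point about universality and time limits can also be made concrete with a toy sketch (again mine, purely illustrative; the function names are made up). Both functions below solve the same problem – whether some subset of a list of numbers adds up to a target – and both are correct; but only the second finishes quickly as the input grows. What a “universal” computer accomplishes within a limited time frame depends entirely on the program it runs.

```python
from itertools import combinations

def subset_sum_brute(nums, target):
    """Enumerate every subset: always correct, but exponential time."""
    return any(sum(c) == target
               for r in range(len(nums) + 1)
               for c in combinations(nums, r))

def subset_sum_dp(nums, target):
    """Track the set of reachable sums: pseudo-polynomial time."""
    reachable = {0}
    for n in nums:
        reachable |= {s + n for s in reachable}
    return target in reachable

nums = list(range(1, 21))           # 20 numbers -> 2**20 subsets
print(subset_sum_dp(nums, 211))     # False, near-instantly
print(subset_sum_brute(nums, 211))  # False, after scanning ~1,000,000 subsets
```

Both are programs for a universal machine; relative to this particular problem and a fixed time budget, only one of them counts as an effective computation.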
It seems to me that if we could banish the term intelligence and substitute for it a more status-neutral term, a lot of discussions about cognition and “reason” could become clearer (at least until the new term acquired a sufficient positive or negative status valence). Indeed, this seems to have been one purpose of Turing’s “imitation game”: to break down the preoccupation with demarcating the “intelligent” from the “non-intelligent” by substituting computation for intelligence as the more interesting concept from a theoretical perspective. There is a quote from the computer scientist Edsger W. Dijkstra, which appears as an epigraph in Accelerando, that expresses this attitude perfectly: “The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.” The question of whether X is intelligent is often no more interesting than the question of whether a submarine can swim: what may matter is the kind of computation that X is able to perform, either by itself or in conjunction with other agents.