
How Smart Could an A.I. Be? Intelligence in a Network of Human and Machine Agents

By Bbenzon @bbenzon

This continues the line of thinking I began with "Intelligence, A.I. and analogy: Jaws & Girard, kumquats & MiGs, double-entry bookkeeping & supply and demand," which focused specifically on analogical thinking. I now want to consider thinking more generally.

The general problem with thinking about AGI (artificial general intelligence) and superintelligence is that the idea of intelligence itself is vague. We’ve got the general idea that intelligence is the ability to solve a wide range of problems in a wide range of environments, but that is a rather loose notion. There is another notion, independent of that one, which conceives of intelligence as being to cognitive performance as horsepower is to engine performance. Conceived this way, intelligence is a scalar quantity. That’s convenient, but not very convincing. Still...

Let’s start with that second idea. One corollary I’ve seen here and there is that a superintelligent AI would be to us as we are to, say, a mouse, or a bird, a fish, whatever animal you choose. The point seems to be that the intelligence “ceiling” of animals is fixed by their biology and is well below the intelligence ceiling of humans. And so it is, on this view, with humans and a superintelligent AI.

But is it actually the case that the intelligence ceiling of humans is fixed by human biology? Newton was able to solve problems that were beyond Aristotle, and Aristotle was able to solve problems that were beyond the most skilled hunter-gatherer. What is more, a merely competent college undergraduate in the current world can learn Newton’s concepts and methods and solve the same problems that Newton solved. That same undergraduate can even solve problems beyond Newton’s competence. Why? Because physics did not stop with Newton. Our undergraduate will have learned some of that more advanced physics and will therefore have problem-solving capacities beyond Newton’s.

We have no reason to believe that the biological aspect of human intelligence has increased over time. But there is a cultural aspect, and that has changed. Human intelligence is not fixed in the way that animal intelligence is. David Hays and I have published a series of articles about this process; the central article is “The Evolution of Cognition” (1990). In that article we also suggested that there is no reason to believe the process has come to a halt. Cultural evolution seems to be ongoing.

The long-term evolution of human culture suggests that human intelligence is not properly conceived of as a function of some biologically given computational capacity, for that biological capacity seems to have remained constant while our ability to solve problems has increased enormously. The way in which that capacity is organized would seem to be what matters – which is the foundation of the article Hays and I wrote. I note further, and this is not something that Hays and I discussed directly, that as the human capacity for problem-solving has increased, that capacity has become more and more a collective one. To a first approximation, every adult in a hunter-gatherer society possesses the full inventory of that society’s knowledge – though we have to allow for differences between male and female knowledge and for some specialized knowledge among shamans and story-tellers. That changes with more advanced forms of social organization, where knowledge becomes specialized. Knowledge has become very specialized indeed in our current world. Any number of problems now require interaction among diverse teams of specialists.

So, let us think in terms of problem-solving by networks of specialized solvers. Some of those solvers are human, but some will be machines. Such man-machine problem-solving networks are ubiquitous in the modern world, and they solve problems well beyond the capacity of individual humans. They aren’t what most AI experts have in mind when they talk about superintelligence, but it’s not clear to me that we can simply ignore them in these discussions. They are, after all, how many very important problems get solved. Henry Farrell and Cosma Shalizi have made this argument in The Economist (here’s an ungated and somewhat longer version, and here as well, where it is followed by a brief discussion).

I assume that such man-machine networks will proliferate in the future. Some of the nodes in these networks will be machines and some will be humans. The question of AGI then becomes:

Will there ever come a time when the tasks of every node in such problem-solving networks can be executed by a computer system that is as capable as any human?

Note that it is possible that some tasks will require manipulation of the physical world of such a nature that humans are better at it than any machine. Would we say that the existence of such nodes is evidence only of physical skill, but not of intelligence?

The question of machine superintelligence would then become:

Will there ever come a time when we have problem-solving networks in which at least one node is assigned a non-routine task, a creative task, if you will, that only a computer can perform?

That’s an interesting question. I specify a non-routine task because we already have all kinds of computing systems that are more effective at various tasks than humans are, from simple arithmetic calculations to such things as solving the structure of a protein. I fully expect that more and more systems will evolve that are capable of solving such sophisticated, but ultimately routine, problems. But it’s not at all obvious to me that computational systems will eventually usurp all problem-solving tasks.
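To make the two questions a bit more concrete, here is a minimal sketch of how one might state them as conditions on a network of nodes. Everything in it – the Node structure, the capability scores, the competence threshold – is a hypothetical illustration of the framing, not a model anyone has actually proposed:

```python
from dataclasses import dataclass

# Toy formalization of the two questions above. Capability scores and the
# routine/non-routine flag are illustrative simplifications, nothing more.

@dataclass
class Node:
    task: str
    routine: bool              # routine vs. non-routine ("creative") task
    human_capability: float    # performance of the best available human
    machine_capability: float  # performance of the best available machine

def agi_condition(network: list[Node]) -> bool:
    # The AGI question: at every node, can some machine do the task
    # at least as well as any human?
    return all(n.machine_capability >= n.human_capability for n in network)

def superintelligence_condition(network: list[Node], threshold: float = 0.5) -> bool:
    # The superintelligence question: is there at least one non-routine node
    # whose task only a machine can perform (every human falls below some
    # competence threshold while a machine clears it)?
    return any(
        not n.routine and n.human_capability < threshold <= n.machine_capability
        for n in network
    )

network = [
    Node("protein structure prediction", routine=True,
         human_capability=0.3, machine_capability=0.9),
    Node("framing the research question", routine=False,
         human_capability=0.8, machine_capability=0.4),
]
print(agi_condition(network))                # False: a human still leads at one node
print(superintelligence_condition(network))  # False: no creative node is machine-only
```

On this toy framing, machines taking over every routine node still satisfies neither condition; everything turns on who holds the non-routine nodes.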

Remember that, even as we’re developing ever more capable AI systems, we are also developing more sophisticated modes of human problem-solving. It’s not at all obvious that machines will necessarily outrun us. Take a look at the analogy paper I linked in the first paragraph for something to think about in this context. In particular, take a look at my remarks about epistemological independence near the end of the discussion of the analogy between double-entry bookkeeping and supply and demand. For that matter, my remarks on ring-composition in that piece are worth thinking about as well.

More later.

