Several days ago I recounted how my thinking on the distinction between minds and machines had evolved: The machine in my mind, my mind on the machine: Will we ever build a machine to equal the human brain? This is a conclusion to that post. My point was simply that I have always believed that we cannot make an artificial intelligence as powerful as a real human brain. Some other things are worth noting.
You can’t predict the course of intellectual development
This is well known. But it’s one thing to know it from reading intellectual history. Knowing it from having lived it is different.
Not being able to predict means you can’t project the future evolution of current lines of research. It also means that you can’t predict what new possibilities will appear out of nowhere. Back in the 1970s, when I was predicting the development of a computer system capable of reading a Shakespeare play, I was not predicting that one day I would be thinking seriously about human origins and offering a concrete hypothesis. But that’s what I did in my book about music: Beethoven’s Anvil (2001).
The space gets larger
As we come to know more, the space gets larger. As a crude analogy, here’s something I stuck into my post on the meaning of “understand”:
Let's assume for a minute that we're going to rate understanding on a scale, say, from 1 to 10. GPT-3 rates, say, 3. Along comes Pathways and it's clearly better than GPT-3. Where does it go? 4? 5? 6?
No.
Given that considerable distance remains between Pathways and humans, I'd say that a 1-10 scale is insufficient. Let's make it 1-100. GPT-3 goes in at 23 and Pathways at, say, 38.
That is to say, each time one of these remarkable results comes in, I think it enlarges our sense of the measure space. Maybe it even forces us to start adding dimensions. It just makes the world larger.
The space of possible models for human intelligence, not to mention models for artificial intelligence, is now much larger than it was in the 1970s. Will it keep getting larger and larger as our knowledge grows, or will the time come when we have bounded the space? How will we know?
The very idea of such a space, of course, implies computational understanding. It certainly isn’t a physical space. What kind of space is it? Conceptual? Imaginary? The very fact that we can conceive of such a space, imagine it, depends on computation. Computer programs can search spaces, at least in AI. But I don’t think the idea is due to AI. When I took a course in computer programming during my undergraduate years at Johns Hopkins, we wrote a program to search for values on a hidden surface. [For that matter, we also wrote a program to play tic-tac-toe.]
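I don’t recall the details of that assignment, but here is a minimal sketch of what such a search might look like, assuming the “hidden surface” is an unknown function of two variables and the search is a simple random hill climb (both assumptions are mine, not a reconstruction of the original program):

```python
import random

def hidden_surface(x, y):
    # Stand-in for the unknown surface. The searcher only queries it;
    # it never inspects the formula.
    return -(x - 2.0) ** 2 - (y + 1.0) ** 2

def hill_climb(f, start, step=0.5, iterations=1000):
    """Look for a high point on f by trying small random moves
    and keeping only those that improve the value."""
    x, y = start
    best = f(x, y)
    for _ in range(iterations):
        nx = x + random.uniform(-step, step)
        ny = y + random.uniform(-step, step)
        value = f(nx, ny)
        if value > best:
            x, y, best = nx, ny, value
    return (x, y), best

if __name__ == "__main__":
    point, value = hill_climb(hidden_surface, start=(0.0, 0.0))
    print(f"best point found: {point}, value: {value:.4f}")
```

The point of the sketch is only that the program explores a space of candidate values it cannot see all at once, which is the sense of “search” I have in mind.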
My basic conceptual ontology remains
As far as I can tell, despite the collapse of my dream for that Shakespeare-reading computer system, my basic underlying conceptual ontology remains the same. That’s why I continue to believe that we will not be able to fully simulate a human brain in an artificial system.
We don’t have a language, a conceptual system, in which to describe that ontology, though we worked on it in Hays’s research group – see the concept of assignment in this working paper, Ontology in Cognition: The Assignment Relation and the Great Chain of Being, this technical report for the Center for Integrated Manufacturing at RPI, Ontology in Knowledge Representation for CIM, and this encyclopedia article, Ontology of Common Sense. I don’t know just when and how my ontology developed, but I’d point to my undergraduate years:
- Lévi-Strauss on myth
- taking a course in computer programming
- encountering Chomsky’s ideas about language
- reading Karl Pribram on neural holography in Scientific American in 1969, which linked up with Lévi-Strauss’s idea of the totemic operator
- encountering “Kubla Khan”
It was while working on a master’s thesis about “Kubla Khan” that all those things came together. That’s the underlying ontology that was in place when I met David Hays at Buffalo in the spring of 1974. I tell that story in an article I originally published in 1975 and have since updated and revised, Touchstones.
