On Possible Cross-fertilization Between AI and Neuroscience

By Bbenzon @bbenzon

From the MIT Center for Brains, Minds, and Machines (CBMM), a panel discussion: CBMM10 - A Symposium on Intelligence: Brains, Minds, and Machines.

On which critical problems should Neuroscience, Cognitive Science, and Computer Science focus now? Do we need to understand the fundamental principles of learning -- theoretical understanding in the sense of physics -- and apply that understanding to real natural and artificial systems? Similar questions concern neuroscience and human intelligence from the point of view of society, industry, and science.

Panel Chair: T. Poggio
Panelists: D. Hassabis, G. Hinton, P. Perona, D. Siegel, I. Sutskever

Quick Comments

1.) I’m a bit annoyed that Hassabis gives neuroscience credit for the idea of episodic memory. As far as I know, the term was coined by the cognitive psychologist Endel Tulving in the early 1970s, who set it in opposition to semantic memory. That distinction was all over the cognitive sciences in the 1970s, and it’s second nature to me. When ChatGPT places a number of events in order to make a story, that’s episodic memory.

2.) Rather than theory, I like to think of what I call speculative engineering. I coined the phrase in the preface to my book about music (Beethoven’s Anvil), where I said:

Engineering is about design and construction: How does the nervous system design and construct music? It is speculative because it must be. The purpose of speculation is to clarify thought. If the speculation itself is clear and well-founded, it will achieve its end even when it is wrong, and many of my speculations must surely be wrong. If I then ask you to consider them, not knowing how to separate the prescient speculations from the mistaken ones, it is because I am confident that we have the means to sort these matters out empirically. My aim is to produce ideas interesting, significant, and clear enough to justify the hard work of investigation, both through empirical studies and through computer simulation.

3.) On Chomsky (Hinton & Hassabis): Yes, Chomsky is fundamentally wrong about language. Language is primarily a tool for conveying meaning from one person to another and only derivatively a tool for thinking. He is also wrong in arguing that, because LLMs can learn any language at all, possible or impossible, they are useless for the scientific study of language. Another problem with Chomsky’s thinking is that he has no interest in process, which lies in the realm of performance, not competence.

Let us assume, for the sake of argument, that introducing a single token into the output stream requires one primitive operation of the virtual system being emulated by an LLM. By that I mean that there is no logical operation within the process, no AND or OR, no shift of control; all that happens is one gigantic calculation involving all the parameters in the system. It follows that the number of primitive operations required to produce a given output equals the number of tokens in that output. I suggest that this places severe constraints on the organization of the LLM’s associative memory.
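
To make that concrete, here is a minimal sketch in Python. Everything in it is hypothetical -- the toy vocabulary size, the single parameter matrix, the crude context vector -- but it captures the claim: each token is produced by one monolithic calculation with no branching inside, so the number of primitive operations equals the number of tokens emitted.

```python
import numpy as np

VOCAB = 50                                    # hypothetical vocabulary size
rng = np.random.default_rng(0)
PARAMS = rng.standard_normal((VOCAB, VOCAB))  # stand-in for "all the parameters"

def forward(context):
    """One gigantic calculation over all parameters: no AND or OR,
    no shift of control, just arithmetic over the whole context."""
    x = np.bincount(context, minlength=VOCAB).astype(float)  # crude context vector
    logits = PARAMS @ x
    return int(np.argmax(logits))             # the next token

def generate(prompt, n_tokens):
    tokens = list(prompt)
    for _ in range(n_tokens):                 # exactly one forward pass per token,
        tokens.append(forward(tokens))        # so primitive ops == tokens emitted
    return tokens

print(generate([1, 2, 3], 5))
```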

Contrast that with what happens in a classical symbolic system. Let us posit that each time a word is emitted (a word is not quite the same thing as a token in an LLM, but the difference is of no consequence here), the emission itself requires a single primitive operation in the classical system. Beyond that, however, a classical system has to execute numerous symbolic operations in order to arrive at each word. However those operations resolve into primitive symbolic operations, their number has to be larger, perhaps considerably larger, than the number of primitive operations an LLM requires. I suggest that this process places fewer constraints on the organization of a symbolic memory system.
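
A contrasting sketch, equally hypothetical: a toy phrase-structure generator in which each emitted word costs one primitive operation, but every word is preceded by rule lookups and shifts of control.

```python
GRAMMAR = {                       # toy phrase-structure rules (hypothetical)
    "S":  ["NP", "VP"],
    "NP": ["the", "N"],
    "VP": ["V", "NP"],
    "N":  ["cat"],
    "V":  ["saw"],
}

def expand(symbol, ops):
    ops["symbolic"] += 1                  # rule lookup / test: a symbolic operation
    if symbol not in GRAMMAR:             # terminal symbol: emit the word
        ops["emit"] += 1
        return [symbol]
    words = []
    for child in GRAMMAR[symbol]:         # a shift of control for each child
        ops["symbolic"] += 1
        words.extend(expand(child, ops))
    return words

ops = {"symbolic": 0, "emit": 0}
print(expand("S", ops), ops)              # far more symbolic ops than emitted words
```

In this toy run, five words are emitted while some two dozen symbolic operations are executed; in any serious parser or planner the ratio would be far higher.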

At this point I’ve reached 45:11 in the video, but I have to stop and think. Perhaps I’ll offer some more comments later.

