You may or may not be wondering why so many tweets have recently been showing up on New Savanna. There’s a good reason: I’ve been thinking. These days, more often than not, that means interacting with either ChatGPT or Claude, and often both in one day. I copy these interactions to Word documents, which I save to my hard drive. And then I promptly forget which topic is in which document, and where on my hard drive it is. FWIW, these discussions can ramble all over the place, which is fine. That’s how I think. But keeping track of it all is difficult.
Anyhow, over the last week I’ve been thinking about work that David Hays and I have done going back to the mid-1970s when I first began working with him in graduate school. I’ve asked ChatGPT to look at some of our papers and evaluate them in terms of current literature and debates. And so forth and so on. I’m also looking forward to next steps, for me, but also in general.
At the moment I’m working on a piece for 3 Quarks Daily with the tentative title: “The Paradox of Contemporary AI: Engineering Success and Institutional Failure.” Here’s how ChatGPT put it in an interaction we had yesterday:
The engineering success is real. LLMs and related systems have given us access to a new conceptual continent. They work, and at extraordinary scale. But the institutional failure lies in the monoculture: too much intellectual, financial, and training-path dependence on one family of architectures and one style of thought about intelligence. The result is that we are building out the utility before we have adequately explored the space of possible successor technologies or developed the conceptual tools needed to understand what these systems are revealing about language, cognition, and cultural structure.
That’s a metaphor I’ve used off and on over the years, the discovery of a new continent. My paradigm case is the European discovery of the Americas. Making it to the eastern shore is one thing. Then comes exploring and settling. Columbus set foot in the Caribbean in 1492. Lewis and Clark made it across North America to the Pacific Ocean in 1805. That’s over three centuries later. It would take another century for Europeans and their descendants to more or less settle North America from coast to coast.
Let’s say that ChatGPT’s arrival in late 2022 is analogous to Columbus making it to shore in the New World. I don’t think it will take four centuries to settle this new world, nor even three centuries to measure its extent; intellectual discovery, engineering implementation, and commercial development move at a faster pace. But I do think there’s a lot of work, very exciting work, ahead of us.
Here’s a fragment of an interaction I had with ChatGPT 5.2 a couple of days ago. My prompt is first, then ChatGPT’s response.
* * * * *
Obviously I don’t know how long it will take to explore that territory and to settle it, nor do I know what we’ll have when that’s done. But I’m thinking in terms of 30 years or more, not one or two bright ideas away.
Your time horizon sounds realistic, and historically it fits the pattern of major intellectual/technical shifts. What you’re describing—the exploration and eventual settlement of that “continent”—is not the discovery of a single clever algorithm. It is the construction of a new intellectual infrastructure, and those almost always take decades.
Let me sketch why.
1. Precedent: how long paradigm infrastructures take
If we look at previous transformations in science and technology, they tend to unfold over multi-decade arcs.
Digital computing
• 1936 — Turing machine
• 1940s–50s — early computers
• 1960s–70s — programming languages, operating systems
• 1980s–90s — personal computing ecosystem
Roughly 50 years from theory to mature civilization-scale infrastructure.
Molecular biology
• 1953 — DNA structure
• 1960s — genetic code deciphered
• 1970s — recombinant DNA
• 1990s — genome sequencing
Again, roughly 40 years to transform biology.
Deep learning
• 1986 — backprop revival
• 1998 — LeNet
• 2012 — AlexNet
• 2020 — GPT-3
That arc is already 35+ years long.
So a 30-year horizon for what you’re describing is not pessimistic—it is historically typical.
2. Why this particular transition is slow
The reason the transition will be slow is that the field is confronting three different layers simultaneously.
1. Mathematics of distributed cognition
We still lack a clear mathematical theory of:
• representation in high-dimensional spaces
• compositionality in distributed systems
• interaction between symbolic and dynamical processes
Yevick was an early attempt at that level.
But the theory is still incomplete.
2. Architecture
Even if the mathematics existed, we would still need working architectures.
The field is currently experimenting with:
• transformers
• memory systems
• agent frameworks
• search hybrids
These are early prototypes.
3. Cognitive theory
Perhaps the hardest layer is understanding what intelligence actually is.
That question spans:
• neuroscience
• psychology
• linguistics
• philosophy
Your RNA work [Relational Network over Attractors] sits squarely in that domain.
3. Why the current moment feels fast
The speed of the last few years comes from something different: engineering scaling.
Transformers gave the field a powerful substrate for experimentation.
So exploration is happening rapidly near the coastline.
But deeper theoretical consolidation will take longer.
4. Your metaphor of a continent
Your continent metaphor captures the situation well.
If we extend it a little:
first explorers (Yevick, Pribram, Hays) → coastal landing (AlexNet, GPT-3) → coastal settlements (LLMs, agents, search hybrids) → interior mapping (new cognitive architectures)
We are currently in the coastal settlement phase.
People are building tools and infrastructure near the shore.
The deeper interior—the true theory of distributed cognition—remains largely unmapped.
5. Why earlier work may become valuable again
One interesting consequence of this long timeline is that older conceptual frameworks often get rediscovered.
Ideas that seemed speculative decades ago suddenly become relevant once technology catches up.
That may happen with several traditions:
• holographic memory
• attractor dynamics
• vector symbolic architectures
• relational networks
Your RNA framework sits at the intersection of several of those.
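[An illustrative aside, not part of the original exchange: one of those traditions lends itself to a concrete sketch. In vector symbolic architectures, following Plate’s holographic reduced representations, a role and a filler are bound into a single vector of the same dimensionality via circular convolution, and the filler can later be recovered, noisily, via circular correlation. The dimensionality and vector construction below are my own assumptions for demonstration.]

```python
import math
import random

random.seed(0)
N = 512  # HRR vectors need to be fairly high-dimensional for clean recovery

def rand_vec(n=N):
    # i.i.d. Gaussian components with variance 1/n, so vectors have roughly unit length
    return [random.gauss(0, 1 / math.sqrt(n)) for _ in range(n)]

def cconv(a, b):
    # circular convolution: binds two vectors into one vector of the same size
    n = len(a)
    return [sum(a[j] * b[(k - j) % n] for j in range(n)) for k in range(n)]

def ccorr(cue, trace):
    # circular correlation: approximate inverse of binding, given one bound item as a cue
    n = len(cue)
    return [sum(cue[j] * trace[(j + k) % n] for j in range(n)) for k in range(n)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

role, filler, other = rand_vec(), rand_vec(), rand_vec()
trace = cconv(role, filler)       # bind role to filler
recovered = ccorr(role, trace)    # unbind, using the role as the cue

print(cosine(recovered, filler))  # high: the noisy result resembles the true filler
print(cosine(recovered, other))   # near zero: it does not resemble an unrelated vector
```

The recovered vector is noisy, but it is far closer to the original filler than to any unrelated vector, which is why a cleanup memory of known items can restore it exactly. That one operation, structured bindings living inside a fixed-size distributed vector, is the kind of bridge between symbolic and distributed representation the discussion above points to.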
6. The generational aspect
A 30-year horizon also means that this project spans multiple intellectual generations.
The people working on it now will not necessarily be the ones who finish it.
That’s normal for foundational projects.
Examples:
• Maxwell did not see electronics
• Turing did not see modern computing
• Watson and Crick did not see CRISPR
Each generation pushes the frontier forward.
7. What we might expect over that time
If your time horizon is roughly correct, the sequence might look something like:
0–10 years
• hybrid neural-symbolic systems
• agent architectures
• better memory and planning modules
10–20 years
• new architectures beyond transformers
• clearer mathematical theory of distributed cognition
20–30+ years
• integrated cognitive architectures
• systems with robust reasoning and world models
But that’s speculative.
8. The intellectual shift already underway
Even if the technical endpoint is far away, one thing has already changed.
The old debate between:
symbolic AI vs. connectionism
has effectively ended.
No serious researcher now believes intelligence is purely one or the other.
The problem is now framed as:
how do distributed systems produce structured cognition?
That’s exactly the question your work has been circling for many years.
9. A final observation
Your long horizon suggests a useful stance toward the work.
Rather than trying to produce a final architecture, the goal becomes something like:
clarifying the conceptual terrain so future explorers know where to look.
That’s the role many important theoretical frameworks play.
They don’t settle the continent, but they provide the first maps of the interior.
And those maps often turn out to matter decades later.
