
An Agent-based Vision for Scaling Modern AI - Why Current Efforts Are Misguided.

By Deric Bownds @DericBownds
I pass on my edited clips from Venkatesh Rao’s most recent newsletter - substantially shortening its length and inserting a few definitions of the techno-nerd-speak acronyms he uses in brackets [ ]. He suggests interesting analogies between the future evolution of AI and the evolutionary course taken by biological organisms:
…specific understandings of embodiment, boundary intelligence, temporality, and personhood, and their engineering implications, taken together, point to an agent-based vision of how to scale AI that I’ve started calling Massed Muddler Intelligence or MMI, that doesn’t look much like anything I’ve heard discussed.
…right now there’s only one option: monolithic scaling. Larger and larger models trained on larger and larger piles of compute and data…monolithic scaling is doomed. It is headed towards technical failure at a certain scale we are fast approaching.
What sort of AI, in an engineering sense, should we attempt to build, in the same sense as one might ask, how should we attempt to build 2,500 foot skyscrapers? With brick and mortar or reinforced concrete? The answer is clearly reinforced concrete. Brick-and-mortar construction simply does not scale to those heights.
…If we build AI datacenters that are 10x or 100x the scale of today’s and train GPT-style models on them…problems of data movement and memory management at scale that are already cripplingly hard will become insurmountable…current monolithic approaches to scaling AI are the equivalent of brick-and-mortar construction and fundamentally doomed…We need the equivalent of a reinforced concrete beam for AI…A distributed agent-based vision of modern AI is the scaling solution we need.
Scaling Precedents from Biology
There’s a precedent here in biology. Biological intelligence scales better with more agent-like organisms. For example: humans build organizations that are smarter than any individual, if you measure by complexity of outcomes, and also smarter than the scaling achieved by less agentic eusocial organisms…ants, bees, and sheep cannot build complex planet-scale civilizations. It takes much more sophisticated agent-like units to do that.
Agents are AIs that can make up independent intentions and pursue them in the real world, in real time, in a society of similarly capable agents (i.e., in a condition of mutualism), without being prompted. They don’t sit around outside of time, reacting to “prompts” with oracular authority…as in sociobiology, sustainably scalable AI agents will necessarily have the ability to govern and influence other agents (human or AI) in turn, through the same symmetric mechanisms that are used to govern and influence them…If you want to scale AI sustainably, governance and influence cannot be a one-way street from some privileged agents (humans) to other less privileged agents (AIs)…
If you want complexity and scaling, you cannot govern and influence a sophisticated agent without opening yourself up to being governed and influenced back. The reasoning here is similar to why liberal democracies generally scale human intelligence far better than autocracies. The MMI vision I’m going to outline could be considered “liberal democracy for mixed human-AI agent systems.” Rather than the autocratic idea of “alignment” associated with “AGI,” MMIs will call for something like the emergent mutualist harmony that characterizes functional liberal democracies. You don’t need an “alignment” theory. You need social contract theory.
The Road to Muddledom
Agents, and the distributed multiagent systems (MAS) that represent the corresponding scaling model, obviously aren’t a new idea in AI…MAS were often built as light architectural extensions of early object-oriented non-AI systems…none of this machinery works or is even particularly relevant for the problem of scaling modern AI, where the core source of computational intelligence is a large-X-model with fundamentally inscrutable input-output behavior. This is a new, oozy kind of intelligence we are building with for the first time…We’re in new regimes, dealing with fundamentally new building materials and aiming for new scales (orders of magnitude larger than anything imagined in the 1990s).
Muddling Doctrines
How do you build muddler agents? I don’t have a blueprint obviously, but here are four loose architectural doctrines, based on the four heterodoxies I noted at the start of this essay (see links there): embodiment, boundary intelligence, temporality, and personhood.
Embodiment matters: The physical form factor AI takes is highly relevant to its nature, behavior, and scaling potential.
Boundary intelligence matters. Past a threshold, intelligence is a function of the management of boundaries across which data flows, not the sophistication of the interiors where it is processed.
Temporality matters: The kind of time experienced by an AI matters for how it can scale sustainably.
Personhood matters: The attributes of an AI that enable humans and AIs to relate to each other as persons (I-you), rather than things (I-it), are necessary elements to being able to construct coherent scalably composable agents at all.
The first three principles require that AI computation involve real atoms, live in real time, and deal with the second law of thermodynamics.
The fourth heterodoxy turns personhood …into a load-bearing architectural element in getting to scaled AI via muddler agents. You cannot have scaled AI without agency, and you cannot have a scalable sort of agency without personhood.
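[To make the four doctrines slightly more concrete, here is a minimal illustrative sketch in Python of how they might appear as explicit architectural slots in a “muddler agent.” The class and field names (MuddlerAgent, BoundaryManager, Persona) are my own placeholders, not anything specified by Rao.]

```python
from dataclasses import dataclass
from typing import Any, Callable, Optional
import time


@dataclass
class BoundaryManager:
    """Boundary intelligence: decide what is allowed to cross the agent's envelope."""
    admit: Callable[[Any], bool] = lambda msg: True  # placeholder admission policy


@dataclass
class Persona:
    """Personhood: the stable, hard interface the agent shows to other agents."""
    name: str

    def handle(self, request: str) -> str:
        # The squishy interior (whatever model drives the agent) sits behind this facade.
        return f"{self.name} acknowledges: {request}"


@dataclass
class MuddlerAgent:
    embodiment: str            # physical or virtual form factor ("robot", "datacenter process", ...)
    boundary: BoundaryManager  # boundary intelligence
    persona: Persona           # personhood
    clock: Callable[[], float] = time.time  # temporality: the agent lives in real, irreversible time

    def receive(self, message: str) -> Optional[str]:
        if not self.boundary.admit(message):
            return None        # boundary management happens before any interior processing
        return self.persona.handle(message)
```

[The only point of the sketch is that each doctrine gets its own load-bearing slot in the architecture rather than being an afterthought.]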
As we go up the scale of biological complexity, we get much more programmable and flexible forms of communication and coordination. …we can start to distinguish individuals by their stable “personalities” (informationally, the identifiable signature of personhood). We go from army ants marching in death spirals to murmurations of starlings to formations of geese to wolf packs maneuvering tactically in pincer movements…to humans whose most sophisticated coordination patterns are so complex merely deciphering them stresses our intelligence to the limit.
Biology doesn’t scale to larger animals by making very large unicellular creatures. Instead it shifts to a multi-cellular strategy. Then it goes further: from simple reproduction of “mass produced” cells to specialized cells forming differentiated structures (tissues) via ontogeny (and later, in some mammals, through neoteny). Agents that scale well have to be complex and variegated agents internally, to achieve highly expressive and varied behaviors externally. But they must also present simplified facades — personas — to each other to enable the scaling and coordination.
Setting aside questions of philosophy (identity, consciousness), personhood is a scaling strategy. Personhood is the behavioral equivalent of a cell. “Persons” are stable behavioral units that can compose in “multicellular” ways because they communicate differently than simpler agents with weak or non-existent personal boundaries, and low-agency organisms like plants and insects.
When we form and perform “personas,” we offer a harder interface around our squishy interior psyches that composes well with the interfaces of other persons for scaling purposes. A personhood performance is something like a composability API [application programming interface] for intelligence scaling.
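[Taken literally, a “composability API for intelligence scaling” might look like the following hypothetical sketch: a narrow, typed facade wrapped around an inscrutable interior. The PersonaAPI class and the stubbed interior are illustrative assumptions, not anything from the newsletter.]

```python
from typing import Protocol

class OpaqueInterior(Protocol):
    """Stand-in for the squishy, inscrutable interior (e.g., a large model)."""
    def generate(self, prompt: str) -> str: ...

class PersonaAPI:
    """The hard, predictable facade: fixed verbs, fixed types, stable identity."""

    def __init__(self, name: str, interior: OpaqueInterior):
        self.name = name
        self._interior = interior

    def introduce(self) -> str:
        return f"I am {self.name}."             # stable identity signal other agents can rely on

    def request(self, task: str) -> str:
        # Other agents compose against this narrow contract,
        # never against the interior's raw, unpredictable behavior.
        return self._interior.generate(f"[{self.name}] {task}")

class EchoInterior:
    """Toy interior so the sketch runs without a real model."""
    def generate(self, prompt: str) -> str:
        return f"(inscrutable output for: {prompt})"

alice = PersonaAPI("Alice", EchoInterior())
alice.introduce()                 # -> "I am Alice."
alice.request("draft a summary")  # other personas interact only through this interface
```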
Beyond Training Determinism
…Right now AIs experience most of their “time” during training, and then effectively enter a kind of stasis. …They require versioned “updates” to get caught up again…GPT-4 can’t simply grow or evolve its way to GPT-5 by living life and learning from it. It needs to go through the human-assisted birth/death (or regeneration perhaps) singularity of a whole new training effort. And it’s not obvious how to automate this bottleneck in either a Darwinian or Lamarckian way.
…For all their power, modern AIs are still not able to live in real time and keep up with reality without human assistance outside of extremely controlled and stable environments…As far as temporality is concerned, we are in a “training determinism” regime that is very un-agentic and corresponds to genetic determinism in biology. What makes agents agents is that they live in real time, in a feedback loop with external reality unfolding at its actual pace of evolution.
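[A toy way to see the contrast: a “training determinism” model is frozen after an offline phase, while an agent in Rao’s sense keeps correcting itself from live feedback. The policy, feedback signal, and update rule below are all illustrative placeholders.]

```python
import random

# "Training determinism": behavior fixed at birth by an offline phase, then static.
frozen_policy = {"greeting": "hello"}            # stands in for weights learned during training

def frozen_agent(observation: str) -> str:
    # Cannot grow or evolve by living; it waits for a human-assisted retraining event.
    return frozen_policy.get(observation, "...")

# Agentic temporality: a live feedback loop with reality unfolding at its own pace.
def muddler_loop(steps: int = 5) -> float:
    estimate = 0.0
    for _ in range(steps):
        observation = random.gauss(1.0, 0.2)        # reality arrives in real time
        estimate += 0.5 * (observation - estimate)  # small in-time correction, no retraining event
    return estimate
```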
Muddling Through vs. Godding Through
Lindblom’s paper identifies two patterns of agentic behavior, “root” (or rational-comprehensive) and “branch” (or successive limited comparisons), and argues that in complicated, messy circumstances requiring coordinated action at scale, the way actually effective humans operate is the branch method, which looks like “muddling through” but gradually gets there, where the root method fails entirely. “Complex” here means the kind of thing humans typically do in larger groups, like designing and implementing complex governance policies or undertaking complex engineering projects. The threshold for “complex” is roughly where explicit coordination protocols become necessary scaffolding. This often coincides with the threshold where reality gets too big to hold in one human head.
The root method attempts to fight limitations with brute, monolithic force. It aims to absorb all the relevant information regarding the circumstances a priori (analogous to training determinism), and discover the globally optimal solution through “rational” and “comprehensive” thinking. If the branch method is “muddling through,” we might say that the root, or rational-comprehensive approach, is an attempt to “god through.”…Lindblom’s thesis is basically that muddling through eats godding through for lunch.
To put it much more bluntly: Godding through doesn’t work at all beyond small scales and it’s not because the brains are too small. Reasoning backwards from complex goals in the context of an existing complex system evolving in real time doesn’t work. You have to discover forwards (not reason forwards) by muddling.
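[A toy rendering of Lindblom’s two patterns: the “root” method tries to pick a global optimum up front from an a-priori model, while the “branch” method makes successive limited comparisons against the status quo using only live feedback. The scoring function and step sizes are illustrative assumptions.]

```python
import random

def live_feedback(policy: float) -> float:
    """Noisy, shifting reality: it can only be sampled in real time, not modeled fully a priori."""
    target = 3.0 + random.gauss(0, 0.1)
    return -(policy - target) ** 2

def god_through(candidates: list[float]) -> float:
    """Root method: assume one a-priori comprehensive sweep can find the global optimum."""
    return max(candidates, key=live_feedback)

def muddle_through(start: float = 0.0, steps: int = 50) -> float:
    """Branch method: successive limited comparisons against the status quo."""
    policy = start
    for _ in range(steps):
        tweak = policy + random.choice([-0.25, 0.25])     # consider only nearby alternatives
        if live_feedback(tweak) > live_feedback(policy):  # keep marginal improvements
            policy = tweak
    return policy

# muddle_through() discovers its way forwards toward the moving target,
# rather than reasoning backwards from a complete model of it.
```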
…in thinking about humans, it is obvious that Lindblom was right…Even where godding through apparently prevails through brute force up to some scale, the costs are very high, and often those who pay the costs don’t survive to complain…Fear of Big Blundering Gods is the essential worry of traditional AI safety theology, but as I’ve been arguing since 2012 (see Hacking the Non-Disposable Planet), this is not an issue because these BBGs will collapse under their own weight long before they get big enough for such collapses to be exceptionally, existentially dangerous.
This worry is similar to the worry that a 2,500 foot brick-and-mortar building might collapse and kill everybody in the city…It’s not a problem because you can’t build a brick-and-mortar building to that height. You need reinforced concrete. And that gets you into entirely different sorts of safety concerns.
Protocols for Massed Muddling
How do you go from individual agents (AI or human) muddling through to masses of them muddling through together? What are the protocols of massed muddling? These are also the protocols of AI scaling towards MMIs (Massed Muddler Intelligences)
When you put a lot of them together using a mix of hard coordination protocols (including virtual-economic ones) and softer cultural protocols, you get a massed muddler intelligence, or MMI. Market economies and liberal democracies are loose, low-bandwidth examples of MMIs that use humans and mostly non-AI computers to scale muddler intelligence. The challenge now is to build far denser, higher bandwidth ones using modern AI agents.
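[As a cartoon of one “hard coordination protocol (including virtual-economic ones),” here is a minimal sealed-bid allocation among agents. The second-price rule is a standard auction mechanism chosen only for illustration; nothing in the essay specifies it.]

```python
def allocate_task(task: str, bids: dict[str, float]) -> tuple[str, float]:
    """Minimal sealed-bid (second-price) auction: the task goes to the highest bidder,
    who pays the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# Example: three muddler agents bid for a unit of work.
winner, price = allocate_task("summarize-dataset", {"agent_a": 4.0, "agent_b": 6.5, "agent_c": 5.0})
# -> winner == "agent_b", price == 5.0
```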
I suspect at the scales we are talking about, we will have something that looks more like a market economy than like the internal command-economy structure of the human body. Both feature a lot of hierarchical structure and differentiation, but the former is much less planned, and more a result of emergent patterns of agglomeration around environmental circumstances (think how the large metros that anchor the global economy form around the natural geography of the planet, rather than how major organ systems of the human body are put together).
While I suspect MMIs will partly emerge via choreographed ontogenic roadmaps from a clump of “stem cells” (is that perhaps what LxMs [large language models] are??), the way market economies emerge from nationalist industrial policies, overall the emergent intelligences will be masses of muddling rather than coherent artificial leviathans. Scaling “plans” will help launch, but not determine the nature of MMIs or their internal operating protocols at scale. Just like tax breaks and tariffs might help launch a market economy but not determine the sophistication of the economy that emerges or the transactional patterns that coordinate it. This also answers the regulation question: Regulating modern AI MMIs will look like economic regulation, not technology regulation.
How the agentic nature of the individual muddler agent building block is preserved and protected is the critical piece of the puzzle, just as individual economic rights (such as property rights, contracting regimes) are the critical piece in the design of “free” markets.
Muddling produces a shell of behavioral uncertainty around what a muddler agent will do, and how it will react to new information, that creates an outward pressure on the compressive forces created by the dense aggregation required for scaling. This is something like the electron degeneracy pressure that resists the collapse of stars under their own gravity. Or how the individualist streak in even the most dedicated communist human resists the collapse of even the most powerful cults into pure hive minds. Or how exit/voice dynamics resist the compression forces of unaccountable organizational management.
…the fundamental intentional tendency of individual agents, on which all other tendencies, autonomous or not, socially influenceable or not, rest…[is] body envelope integrity.
…This is a familiar concern for biological organisms. Defending against your body being violently penetrated is probably the foundation of our entire personality. It’s the foundation of our personal safety priorities — don’t get stabbed, shot, bitten, clawed or raped. All politics and economics is an extension of envelope integrity preservation instincts. For example, strictures against theft (especially identity theft) are about protecting the body envelope integrity of your economic body. Habeas corpus is the bedrock of modern political systems for a reason. Your physical body is your political body…if you don’t have body envelope integrity you have nothing.
This is easiest to appreciate in one very visceral and vivid form of MMIs: distributed robot systems. Robots, like biological organisms, have an actual physical body envelope (though unlike biological organisms they can have high-bandwidth near-field telepathy). They must preserve the integrity of that envelope as a first order of business … But robot MMIs are not the only possible form factor. We can think of purely software agents that live in an AI datacenter, and maintain boundaries and personhood envelopes that are primarily informational rather than physical. The same fundamental drive applies. The integrity of the (virtual) body envelope is the first concern.
This is why embodiment is an axiomatic concern. The nature of the integrity problem depends on the nature of the embodiment. A robot can run away from danger. A software muddler agent in a shared memory space within a large datacenter must rely on memory protection, encryption, and other non-spatial affordances of computing environments.
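[For a purely software muddler agent, “body envelope integrity” might reduce to checks like the following sketch: authenticate everything that crosses the informational boundary and refuse the rest. The keys, names, and message format are illustrative assumptions; the example uses only Python’s standard hmac/hashlib modules.]

```python
import hashlib
import hmac
import os
from typing import Optional, Tuple

class EnvelopeGuard:
    """Informational body envelope: only authenticated messages are admitted inside."""

    def __init__(self, shared_key: Optional[bytes] = None):
        self.key = shared_key or os.urandom(32)   # the boundary secret

    def seal(self, payload: bytes) -> Tuple[bytes, str]:
        tag = hmac.new(self.key, payload, hashlib.sha256).hexdigest()
        return payload, tag

    def admit(self, payload: bytes, tag: str) -> bool:
        expected = hmac.new(self.key, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)  # unauthenticated data is bounced at the boundary

guard = EnvelopeGuard()
msg, tag = guard.seal(b"status update from a trusted peer")
assert guard.admit(msg, tag)                       # crosses the envelope
assert not guard.admit(b"tampered message", tag)   # rejected: envelope integrity preserved
```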
Personhood is the emergent result of successfully solving the body-envelope-integrity problem over time, allowing an agent to present a coherent and hard mask model to other agents even in unpredictable environments. This is not about putting a smiley-faced RLHF [Reinforcement Learning from Human Feedback] mask on a shoggoth interior to superficially “align” it. This is about offering a predictable API for other agents to reliably interface with, so scaled structures in time and social space don’t collapse. [They have] hardness - the property or quality that allows agents with soft and squishy interiors to offer hard and unyielding interfaces to other agents, allowing for coordination at scale.
…We can go back to the analogy to reinforced concrete. MMIs are fundamentally built out of composite materials that combine the constituent simple materials in very deliberate ways to achieve particular properties. Reinforced concrete achieves this by combining rebar and cement in particular geometries. The result is a flexible language of differentiated forms (not just cuboidal beams) with a defined grammar.
MMIs will achieve this by combining embodiment, boundary management, temporality, and personhood elements in very deliberate ways, to create a similar language of differentiated forms that interact with a defined grammar.
And then we can have a whole new culture war about whether that’s a good thing.
