I just had an interesting interaction with Claude 3.5 Sonnet. As always, my remarks are in boldface while Claude's are in plain type.
* * * * *
I believe that neural-net-based chess programs, such as AlphaZero, have been observed to play chess differently from humans. Is that so?
Yes, this is a fascinating observation that's been widely discussed in the chess community. Neural network-based chess engines like AlphaZero have demonstrated some distinctive playing patterns that differ from both traditional chess engines and human players.
Some key differences include:
1. Piece sacrifices: AlphaZero has shown a greater willingness than either humans or traditional engines to make long-term sacrifices, giving up material for positional advantages that become clear only many moves later. These sacrifices often appear counterintuitive to human players.
2. Prophylactic moves: The program often makes seemingly quiet moves that prevent an opponent's plans before they can even develop, showing a deep positional understanding that differs from the more tactical focus of traditional engines.
3. Piece placement: AlphaZero sometimes places pieces on squares that human players would consider unusual or even awkward, but which later prove to be strategically important.
4. Flank attacks: The program has demonstrated a particular affinity for attacking on the flanks and undertaking long-term strategic operations, rather than focusing on immediate tactical opportunities.
Perhaps most interestingly, AlphaZero achieved this playing style without being trained on human games - it learned purely through self-play. This suggests that what we consider "natural" or "human-like" chess might actually be influenced by centuries of human chess tradition and limitations, rather than representing the most effective way to play the game.
Would you be interested in discussing any specific aspects of these playing patterns? I find the implications for our understanding of chess strategy particularly fascinating.
I’m wondering if humans have been able to learn these new techniques.
Yes, the influence of neural network-based chess engines like AlphaZero has begun to reshape human chess at the highest levels. Grandmasters have started incorporating some of these novel approaches into their play, though their adoption has often been selective and cautious.
For example, Magnus Carlsen, widely considered one of the greatest chess players ever, has explicitly acknowledged studying and learning from AlphaZero's games. He's noted how it's influenced his willingness to play certain types of positions that he might have previously considered dubious.
However, there are some important limitations to how much humans can directly copy these approaches:
1. Humans can't calculate as deeply or precisely as neural networks, so we can't always justify the same risky sacrifices that AlphaZero makes.
2. The human mind is better suited to understanding chess through general principles and patterns rather than the concrete move-by-move precision that neural networks excel at.
3. Some of AlphaZero's more counterintuitive moves require an extreme degree of accuracy in the follow-up play that's difficult for humans to maintain.
What we're seeing instead is a kind of selective adaptation - human players are incorporating these insights to expand their strategic understanding while still playing in a fundamentally human way. For instance, they might be more open to positions with long-term compensation for material, but they'll typically only choose such lines when the compensation is more concrete and calculable than the extremely abstract advantages AlphaZero sometimes plays for.
Would you be interested in exploring any specific examples of how top players have adapted these computer-inspired innovations?
What interests me is the question of whether humans are able to learn these techniques or whether they’re more or less permanently beyond our grasp. Points 1 and 3 seem to be related, and point 1, in particular, seems grounded in capacities that computers have but that humans never will. Point 2 seems to be a different kind of factor.
This is a really insightful distinction you're drawing. You're right - there seem to be two fundamentally different types of barriers here:
1. Computational barriers (your points about #1 and #3): These are hard limits based on our brains' processing power. We simply cannot calculate 20 moves ahead with perfect accuracy like a computer can. This isn't something we can learn or practice our way out of - it's a fundamental limitation of human cognition.
2. Conceptual/structural barriers (your point about #2): This relates to how our minds naturally organize and understand chess. Humans tend to think in terms of general principles, patterns, and heuristics, while neural networks can operate on a more granular, move-by-move basis without needing these abstraction layers.
The fascinating question is whether the conceptual differences are truly permanent or whether we could develop new frameworks for understanding that would let us grasp these computer strategies in a "human" way. It's possible that what we're seeing isn't just computers playing chess better than us, but computers playing what amounts to a slightly different game - one that operates on principles we haven't yet figured out how to articulate in human terms.
This reminds me of discussions in other fields about whether certain types of mathematical or scientific insights are permanently beyond human comprehension, or whether we just haven't found the right way to think about them yet. Do you think there might be ways to bridge this conceptual gap, even if we can never match the computational aspects?
I’m thinking of this in terms of the more general issue of super-intelligence. I believe when people talk of super-intelligence (ASI), they’re thinking of general intelligence. Chess, of course, is a narrow domain. Chess computers can’t do protein folding, write poetry, or think through chemical reactions.
So, let’s take physics. It’s one thing for a computer to come up with new physical principles that humans can understand after studying them and interacting with the computer, perhaps getting some tutoring. It would be something quite different for a computer to come up with a new physical principle that humans could NEVER understand.
Yes, this is a crucial distinction - between something that's novel but ultimately comprehensible to humans (even if we need help getting there) versus something that's inherently beyond human comprehension.
In the chess case, I suspect most of AlphaZero's innovations fall into the first category. While we can't replicate its precise calculational abilities, the underlying strategic principles it's discovering should be comprehensible to humans - we just need to find the right way to articulate them. After all, chess is ultimately a human-created game with a finite set of possibilities, played on a board we can fully visualize.
This is different from, say, trying to visualize a four-dimensional space or intuitively understand quantum mechanics, where we might be bumping up against fundamental limits of human cognition and perception.
If we found that some of AlphaZero's chess strategies were truly incomprehensible to humans - not just difficult to calculate but impossible to understand even in principle - that would be deeply strange and concerning. It would suggest either:
1. We're missing something fundamental about how to explain these strategies
2. There are aspects of even a relatively simple game like chess that lie beyond human comprehension
The second possibility seems unlikely given chess's finite and human-designed nature. More likely, we're still in the process of developing the language and concepts to understand these computer-discovered strategies.
If we think about how human knowledge has grown over historical time, later generations have created ideas and knowledge that were beyond earlier generations. It’s not that the earlier people didn’t have the biological capacity. Rather, they didn’t have the appropriate cultural environment.
This is an excellent historical perspective that really helps clarify the issue. You're right - if we look at concepts like calculus, evolution, or quantum mechanics, these weren't beyond human comprehension in any absolute biological sense. Rather, they required the right cultural and intellectual scaffolding to be built up over time.
This suggests a more optimistic interpretation of the AlphaZero situation: What we're seeing might not be a permanent barrier of human cognition, but rather a temporary gap in our conceptual framework. Just as earlier generations couldn't have immediately grasped quantum mechanics without the intervening development of various mathematical and physical concepts, we might currently lack the chess concepts needed to fully understand AlphaZero's strategies.
The computational limitations (the ability to calculate many moves ahead) would remain a hard constraint, but the strategic insights themselves might become accessible to human understanding once we develop the right conceptual tools and language to describe them. Just as we developed mathematical notation to handle calculus or Feynman diagrams to work with quantum electrodynamics, we might need new ways of thinking about and describing chess positions to fully grasp what AlphaZero is doing.
This also suggests that the process of human chess players learning from AI might be less about directly copying specific moves and more about gradually building up new theoretical frameworks for understanding the game.
I like your last observation. Coming up with new theoretical frameworks is something humans do, though it takes time. In the case of chess it might even require a new generation of players, players who’ve dealt with programs like AlphaZero since they first started playing chess in childhood.
Yes, exactly! This reminds me of how young mathematicians or physicists today can grasp concepts that were considered incredibly difficult or esoteric when first discovered.
They're not necessarily "smarter" than earlier generations - they just grow up in an intellectual environment where these ideas are part of the basic conceptual toolkit. A young chess player today might naturally develop intuitions about positions and strategies that align more closely with AI approaches, simply because they're exposed to these games and ideas from the beginning. Rather than having to "unlearn" traditional approaches and then try to understand AI strategies, they might develop a more integrated understanding from the start.
This could lead to a fascinating evolution in chess theory and teaching. Instead of starting with traditional principles and then trying to adapt them to incorporate AI insights, we might see entirely new frameworks emerge that more naturally encompass both human and AI understanding of the game. The challenge for the chess community might be developing the language and concepts to articulate these new frameworks effectively. Would you expect this process to happen naturally through exposure to AI chess, or do you think it would require deliberate efforts to develop new theoretical frameworks?
Both.
This conversation reminds me of a novel by Mark Twain, A Connecticut Yankee in King Arthur’s Court, which depends on the difference between what a 19th century engineer knew and what King Arthur and his people knew.
That's a fascinating parallel! Yes, the Connecticut Yankee (Hank Morgan) represents not just his individual knowledge, but the accumulated technological and scientific understanding of the centuries after Arthur's time. The humor and drama of the novel often come from the vast gulf between these two knowledge bases - Hank isn't necessarily smarter than the people of Arthur's court, but he has the advantage of centuries of accumulated human knowledge and understanding.
The novel also shows how hard it can be to truly transfer knowledge across such gaps. Even when Hank can create impressive effects (essentially appearing magical to the medieval observers), he struggles to create lasting change because he can't transfer the whole framework of understanding that made his knowledge possible. He can show results, but building the educational and cultural infrastructure to make that knowledge self-sustaining proves much more challenging.
This relates well to our chess discussion - the gap between current human chess understanding and AI chess insights might be like the gap between medieval and 19th century technology. The challenge isn't just learning individual "tricks" from AI, but developing the whole theoretical framework that makes those insights meaningful and transmissible to future players.
Do you think there are other historical parallels to this kind of knowledge gap and the process of bridging it?
Well, what about cargo cults in the South Pacific?
Ah yes - cargo cults are almost the perfect counter-example to show what happens when you try to adopt surface appearances without understanding the underlying framework!
During WWII, Pacific islanders saw aircraft landing with valuable cargo, but didn't understand the vast industrial, technological, and organizational systems that made those deliveries possible. Some groups then built elaborate wooden "airstrips" and "control towers," performed rituals mimicking the behavior they'd observed from military personnel, expecting this would bring cargo planes. But of course, just copying the visible behaviors without understanding the underlying systems couldn't produce the desired results.
This contrasts sharply with how modern chess players might learn from AI. The goal isn't to blindly copy AlphaZero's moves hoping they'll work (the cargo cult approach), but rather to gradually develop a deeper theoretical understanding of why those moves work. This requires building up the conceptual infrastructure to make sense of these strategies - more like the long process of scientific and technological development that actually enables air transport.
This makes me think we should be somewhat skeptical of attempts to directly "copy" AI chess moves without developing the underlying theoretical understanding. Just as cargo cults showed the limitations of imitating surface behaviors, chess players need to develop new frameworks of understanding rather than just mimicking AI moves.
Good chatting. As Arnold said in one of his movies, “I’ll be back.”
Indeed - looking forward to continuing our discussion! The evolution of human knowledge and understanding is a fascinating topic, whether we're talking about chess, technology, or any other domain. See you next time!