Culture Magazine

Some Informal Remarks on Jockers’ 3300 Node Graph: Part 2, Structure and Computational Process [#DH]

By Bbenzon @bbenzon
My previous note was about time and evolution. This one is about mechanism. And like the previous note, it is also about intuition – though I didn’t frame that note in that way. When I’d thought about Jockers’ graph just a bit, I decided it betokened an evolutionary process. That decision reflected an intuitive judgment. It’s not something I reasoned out; it’s something I saw, if you will. It appeared before me. Once that intuition had formed, I set about rationalizing it.
This note is about how my early immersion in computational semantics guides my thinking in, well, in many things. I turned to computational semantics when I’d thrown everything I had in the way of literary theory, such as it existed before one talked of Theory with a capital “T”, plus a few other things (Piaget, Merleau-Ponty, Nietzsche, Wittgenstein) at “Kubla Khan” and had it fall apart. If there was a way forward, I thought, computation would be it.
But let’s set that story aside for the moment; we’ll return to it later. I want to open by talking about what I believe to be the most immediate effect my computational background had on my perception of Jockers’ graph: I saw it as a manifestation of a process. Then I’ll talk about the broader effects of that experience on my approach to literary criticism.
Diagrams and process
The computational semantics I studied under David Hays at the State University of New York at Buffalo (SUNYAB, or just UB) was and is quite different from anything in computational criticism, though it is perhaps a little like work using vector semantics, but only a little. Of course semantics is only part of such a model, which must also include morphology, syntax, pragmatics, discourse, and speech synthesis and hearing, on the one hand, and character recognition on the other (generating streams of characters is trivial). The objectives of computational critics vary among investigators and from one investigation to another, but no one seeks to model the linguistic processes of reading and writing, listening and talking. Computational criticism isn’t trying to understand language mechanisms at all, not at the level of phrases and sentences and not, I’d argue, at the level of whole texts either.
How does one create such models? Techniques vary, a lot, but the range of techniques is secondary to this discussion, which is about what I’ve brought with me to my understanding of computational criticism in general, and Jockers’ graph in particular. What I’ve brought is a great deal of experience in working with graphs as models of mental processes. Here’s a fragment of the semantic model I developed while working on Shakespeare’s Sonnet 129:
[Figure: a fragment of the semantic network for Shakespeare’s Sonnet 129]
The graph is quite different from Jockers’ graph. For one thing it has fewer nodes, by a considerable margin. But it is otherwise more complex. The nodes in Jockers’ graph all represent the same kind of object, a text, and the edges between them are all of the same kind, proximity in space. The nodes in that semantic network are of various kinds – objects, events, properties of objects or events, some even represent whole bundles of objects and events – as are the edges. And the space in which a semantic network is embedded has no metric associated with it; the physical distance between nodes is a mere diagrammatic convenience and has no formal significance. Taken together these various kinds of nodes and edges can be used to specify processes in the network. That’s the crucial point: semantic networks may be depicted as static objects on a page, just as one may depict a clock mechanism as a static object, but they function in, are designed to function in, linguistic processes.
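The contrast can be made concrete in a few lines of code. This is a minimal sketch of my own, not Hays’ actual formalism and not Jockers’ data: it shows a network whose nodes and edges carry *types* (object, event, property; relations like “motivates”), as opposed to a similarity graph where every node is a text and every edge a distance. The particular labels and relation names are hypothetical.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: a typed semantic network, where what matters
# is the kind of each node and the labeled relation on each edge,
# not any spatial distance between nodes.

@dataclass
class Node:
    label: str
    kind: str          # e.g. "object", "event", "property" (hypothetical types)

@dataclass
class Edge:
    head: Node
    tail: Node
    relation: str      # e.g. "agent", "motivates", "part-of" (hypothetical)

@dataclass
class SemanticNetwork:
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)

    def add(self, head, relation, tail):
        self.edges.append(Edge(head, tail, relation))

# A tiny made-up fragment, loosely in the spirit of a Sonnet 129 analysis:
net = SemanticNetwork()
lust = Node("lust", "property")
action = Node("action", "event")
net.nodes += [lust, action]
net.add(lust, "motivates", action)

# Processing means traversing typed edges; there is no metric to consult.
kinds = {n.kind for n in net.nodes}
```

Note that nothing here encodes position or distance: a layout program could draw these nodes anywhere on the page without changing what the network means, which is exactly the point made above.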
That’s what I brought with me to Jockers’ graph, the concept of a graph that embodies or supports a process. By itself the graph would not have activated that concept, but when I read that the graph ordered the nodes in rough chronological order despite the fact that there was no temporal information in the underlying database, THAT told me there’s a process at work in that graph. I judged the process to be an evolutionary one – what else could it be? – and began thinking about it.
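The intuition that chronology can emerge from similarity alone can be illustrated with a toy simulation. This is entirely my own construction, not Jockers’ method or data: each “text” is a feature vector that drifts slightly from its predecessor, a crude stand-in for gradual stylistic change over time. A graph then built from similarity alone, with no dates anywhere in the data, still places each text nearest its chronological neighbors.

```python
import random

random.seed(42)

# Toy corpus: 30 "texts", each a 10-dimensional feature vector that
# drifts slightly from the previous one (hypothetical data, standing in
# for gradual stylistic/thematic change over time).
DIM, N = 10, 30
texts = [[random.random() for _ in range(DIM)]]
for _ in range(N - 1):
    texts.append([x + random.gauss(0, 0.05) for x in texts[-1]])

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Build the graph from similarity only: link each text to its nearest
# neighbor. No temporal information is used anywhere in this step.
nearest = [
    min((j for j in range(N) if j != i), key=lambda j: dist(texts[i], texts[j]))
    for i in range(N)
]

# Yet the nearest neighbors turn out to be chronologically close:
# the average gap between a text's index and its neighbor's is small.
mean_gap = sum(abs(i - nearest[i]) for i in range(N)) / N
```

The “process at work” in such a graph is the drift itself: because each text resembles its immediate predecessors more than its distant ones, similarity structure and temporal order end up aligned, which is the pattern that suggested an evolutionary reading.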
Yes, I know, a large scale evolutionary process is very different from a micro scale linguistic process, but these diagrams are very abstract objects. At a high enough level of abstraction a network is a network and a process is a process. Moreover, as I indicated in my earlier remarks on “generic time trends”, I’ve been thinking about evolutionary processes as long as, if not longer than, I’ve been thinking about linguistic processes. It was thus all but inevitable that I would read that graph as the trace of a process unfolding in time, an evolutionary process.
My break with 'traditional' literary criticism
But why, with a background in literary criticism, did I turn to such strange conceptual objects in the first place? As I’ve indicated in my introduction, I had become interested in “Kubla Khan”. I set out to do a structuralist analysis of the poem – this was before structuralism had more or less fallen apart within the literary academy – and it didn’t work. It’s not that I couldn’t find binary oppositions in the poem. I could. They’re all over the place – Kubla vs. wailing woman, Kubla vs. damsel with a dulcimer, pleasure dome vs. caves, sound vs. sight, inspired poet vs. those who hear and see, ice vs. Paradise, and on and on – and that was the problem. I couldn’t see any ‘narrative’ order in the profusion of oppositions.
This is not the place to give a blow-by-blow account of what happened; I’ve done that elsewhere [1]. Suffice it to say that I’d discovered that the poem had a structure that could be diagrammed like this (first 36 lines):
[Figure: tree diagram of the structure of the first 36 lines of “Kubla Khan”, with nested ternary structures marked in red]
Those nested ternary structures (in red) looked like, smelled like, computation at work. By the time I’d gotten that far in my analytic and descriptive work on the poem I’d become aware of a variety of work in the nascent cognitive sciences – the phrase “cognitive science” wasn’t coined until 1973, after I’d done my initial work on “Kubla Khan” – and so I turned to them.
What I got was a new and, I believe, quite valuable way of thinking about language and mental processes, but one not quite up to satisfying my curiosity about “Kubla Khan”. Nonetheless I couldn’t go back; I couldn’t unlearn what I’d learned and thereby return to a more naïve approach to literary criticism. And yes, I regard standard literary criticism, to the extent that there is such a thing, up through new historicism and post-structuralist approaches, as naïve, and a bit confused as well [2]. The text is a crucial notion; is there any consensus on what constitutes the text? No. And the same with form, another critical concept about which there is no critical consensus.
The upshot is that I am a native reader and writer of two different discourses focused on language, literary criticism and cognitive science. I remain comfortable with reading a wide variety of literary criticism, and I can write it, at least up to a point. But I can also read and write cognitive science and do so. I find that, for the most part, the world of computational criticism is commensurate with that of cognitive science. To use a crude geographical metaphor, think of North America as the New World. I’ve spent most of my time, say, exploring the territory along the East Coast and through the Midwest to the Mississippi River. That’s where I’ve met these computational critics, who’d come up through Central America and along the Rocky Mountains to the plains states. So, it’s a different kind of territory, but still on the same continent. We’re doing the same kind of thing. Standard literary criticism, on the other hand, that’s the Old World. I’ve been there, they’ve been there, we’ve all been there, but the New World is where we function best.
Crude, yes, but serviceable. Let’s drop the analogy and take a brief look at a problem that’s been much discussed in the recent past, though such discussion seems to have subsided. I’m talking of the problem of scale, of so-called distant reading vs. so-called close reading.
From my point of view the issue is miscast. Scale, so far as I know, is not an issue in biology, where they’ve got to deal with individual cells and their components and the evolution of life on earth as well. That’s two very different scales of analysis, but the terms in which the analysis is conducted are mutually commensurate across all scales. The situation in literary criticism is not so clear. The terms of analysis used in computational criticism are quite different from those in any of the standard schools of critical reading, from New Criticism on through the various forms of post-structuralism. Many proponents of close-reading regard the terms of computational criticism as absolutely incommensurate with close-reading, and hence hold that computational criticism is either wrong or trades in trivial truisms [3]. Computational critics see it differently. Some may well reject close-reading across the board, though I’ve not seen that position publicly articulated. Others admit, yes, the terms are different, but we’re ultimately looking at the same objects, literary texts. It’s just that we’re looking at them in somewhat different ways and, yes, at different scales. But, as I said, this talk of scales is beside the point. It’s those different ways that matter.
For me, there is no issue of scale. I learned computational semantics as a tool to use at the micro scale. I’ve also developed an approach to the analysis and description of form at the micro scale [4]. These concepts are perfectly compatible with those of computational criticism. Their relationship to ordinary “close-reading”, however, that’s problematic. And it’s problematic in the same way that computational criticism itself is problematic.
That issue is more than I want to address in this (relatively) short and informal note. Basically, I think those critics must embrace aesthetic and ethical values in a full-blown, explicit ethical criticism, which is beyond computational criticism at whatever scale, macro (as in so-called distant reading) or micro (as in computational semantics or descriptive analysis of form). I’ve written a fair bit about ethical criticism here on New Savanna but have yet to formulate a central statement [5].
[1] There’s an autobiographical account in William Benzon, Touchstones • Strange Encounters • Strange Poems • the beginning of an intellectual life (1975-2015)
For a more conceptual account, see my working paper,
Beyond Lévi-Strauss on Myth: Objectification, Computation, and Cognition (2015). In particular, see section 4, “Into Lévi-Strauss and out through ‘Kubla Khan’”, pp. 20-27.
[2] On my general skepticism about literary criticism, see my long blog post, Literary Studies from a Martian Point of View: An Open Letter to Charlie Altieri (December 17, 2015); my working paper, An Open Letter to Dan Everett about Literary Criticism (February 19, 2017) (PDF); and Rejected! @ New Literary History, with observations about the discipline (February 28, 2017).
[3] That seems to be the position taken by Nan Z. Da, The Computational Case against Computational Literary Studies, Critical Inquiry 45, Spring 2019, 601-639.
[4] For a methodological and programmatic statement, see William Benzon, Literary Morphology: Nine Propositions in a Naturalist Theory of Form, PsyArt: An Online Journal for the Psychological Study of the Arts, August 2006, Article 060608. For a methodological statement about description, see William Benzon, Description 3: The Primacy of Visualization, Working Paper, October 2015, 48 pp.
[5] For example, see my post Ethical Criticism: Blakey Vermeule on Theory, Cornel West in the Academy, Now What? (September 23, 2015). More generally, see the posts gathered under the label “ethical criticism”.
