
Trajectories in Story-telling Space [#DH, #Macroanalysis]

By Bbenzon @bbenzon
This is an explanatory supplement to "On the direction of literary history: How should we interpret that 3300 node graph in Macroanalysis?"
* * * * *
First we go about visualizing possible trajectories of an abstract particle moving about on a plane. Then we interpret the particle's successive positions as versions of a story in a game of ‘whispers’, where A tells a story to B, B tells the story to C, and so forth. The story changes a bit with each telling. Finally, I explain what this has to do with the 3300 node graph in Matthew Jockers’ Macroanalysis.
Visualizing an abstract particle moving about on a plane
Let us imagine an abstract particle moving around on a plane. We are going to take a ‘snapshot’ of the particle at regular intervals and see whether there is some lawfulness in its movement or whether it is just moving about without any particular order. Here we have six successive snapshots of our particle, one after the other, each one showing the particle’s location at a moment in time.
So, there’s where our particle starts:
[Figure: snapshot 1]
It then moves to here:
[Figure: snapshot 2]
Followed by:
[Figure: snapshot 3]
And then:
[Figure: snapshot 4]
Next comes:
[Figure: snapshot 5]
And at last, the particle arrives here:
[Figure: snapshot 6]
Examining the particle’s successive positions like this is tricky. It’s difficult to get a sense of the particle’s path. Let’s line those snapshots up in a row and see if that helps:
[Figure: the six snapshots lined up in a row]
That helps some, but still, it’s hard to see what’s going on.
We need to superimpose these snapshots in order to see the path more clearly. So that we can be sure of their order, let us connect successive positions with an arrow where the direction of the arrow goes from the earlier to the later position. This is what we get:
[Figure: the snapshots superimposed, with arrows connecting successive positions]
While there doesn’t appear to be any order there, it looks like the particle may be confined to a hill-shaped area in the space. Let us take some more snapshots and superimpose them.
[Figure: a larger set of superimposed snapshots, with arrows connecting successive positions]
In the above image I have identified the particle’s first and last positions by making the dots red.
Still, no order, but the particle no longer seems confined to that hill-shaped area. It certainly doesn’t look like the particle is following any particular path.
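If you'd like to generate pictures like these yourself, here is a minimal sketch (assuming NumPy and matplotlib, with made-up random positions standing in for the snapshots) that superimposes a run of positions, connects successive ones with arrows, and marks the first and last positions in red:

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    points = rng.uniform(0, 10, size=(20, 2))   # twenty random snapshot positions on the plane
    x, y = points[:, 0], points[:, 1]

    # arrows from each position to the next, so the order of the snapshots stays visible
    plt.quiver(x[:-1], y[:-1], np.diff(x), np.diff(y),
               angles='xy', scale_units='xy', scale=1, width=0.004)
    plt.plot(x, y, 'o', color='black')
    plt.plot([x[0], x[-1]], [y[0], y[-1]], 'o', color='red')   # first and last positions in red
    plt.axis('equal')
    plt.show()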
But, of course, it might have worked out some other way. Like this perhaps:
[Figure: a particle following a roughly circular path]
There’s order there. The particle appears to be moving in a circular path. That means we could write an equation approximating the path.
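For instance, assuming the path really is (approximately) a circle of radius r centered at (x0, y0), we could write it parametrically as x(t) = x0 + r·cos(ωt + φ) and y(t) = y0 + r·sin(ωt + φ), where ω fixes how quickly the particle goes around and φ fixes where on the circle it starts; fitting those few parameters to the snapshots would give an approximating equation.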
Here is a different, and simpler, kind of order:
[Figure: a particle following a nearly straight, gently rising path]
The particle is simply moving in an almost straight line with a small upward slope.
Let’s play whispers
Now let’s interpret each of those points as a story. It can be any story, of any kind; it doesn’t matter. Nor does it matter how long it is, but as a practical matter it should be pretty short, because we’re going to imagine people telling it to one another in succession. Frederic Bartlett reported on this sort of thing in his classic, Remembering: A Study in Experimental and Social Psychology (1932), and others have worked on it since.
Let us further imagine that we have some way of measuring each version so that we can establish a measure of similarity between them. I note in passing that, since we’re heading toward Matthew Jockers’ Macroanalysis, we need not worry too much about this, as he developed a very sophisticated measurement system for his texts. Finally, assume that our system is a simple one that measures the story in two dimensions.
Now, we need to realize that similarity has two aspects, magnitude and what I will call orientation. Magnitude is simple: just how similar are they? Magnitude is given by a number. Orientation has to do with the nature of that similarity, and I chose the term because we’re describing similarity in terms of positions in an abstract space. Consider this graph:
[Figure: a black dot with blue, red, orange, and green dots at various distances and directions from it]
The blue, red, and orange dots are all roughly the same distance from the black dot, so the respective story versions (in our current interpretation of these diagrams) have the same magnitude of similarity to the black-dot story. But their orientations are different; they are similar to it in different ways. The green-dot story has roughly the same orientation with respect to the black-dot story as does the orange-dot story. But it is further away, and so the magnitude of similarity is less.
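To make the two aspects concrete, here is a small sketch (assuming NumPy, and treating each story version as a made-up two-dimensional measurement) that splits the relation between two versions into a magnitude (the distance between them, so a smaller number means more similar) and an orientation (the direction of the difference):

    import numpy as np

    def similarity_parts(a, b):
        """Distance between two story vectors (magnitude) and direction of the difference (orientation)."""
        diff = np.asarray(b, dtype=float) - np.asarray(a, dtype=float)
        magnitude = np.linalg.norm(diff)                            # smaller distance = more similar
        orientation = np.degrees(np.arctan2(diff[1], diff[0]))      # angle of the difference vector
        return magnitude, orientation

    black  = np.array([0.0, 0.0])    # hypothetical measurements for the black-dot story
    orange = np.array([1.0, 1.0])
    green  = np.array([2.5, 2.4])
    print(similarity_parts(black, orange))   # roughly the same orientation as green...
    print(similarity_parts(black, green))    # ...but green is farther away, so less similar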
So, here we have one version of the whispers game:

[Figure: Random story evolution]

The red dot to the left represents the first story in the chain while the red dot to the right represents the last version in the chain. The degree of similarity is represented by the length of the arrows. There is some discernible difference in lengths, indicating that some successive pairs are more like one another than others. But the orientations are all over the map, as it were, so the chain as a whole does not show any coherent evolution in this space.
The situation is quite different for the next two story chains:

[Figure: Coherent cyclic story evolution]


[Figure: Coherent linear story evolution]

The orientations of the story versions in the first chain exhibit a cyclic path through the space while the versions in the second chain exhibit a linear path.
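One way to see how such chains might arise is to simulate them. The sketch below (assuming NumPy; the drift functions and noise level are invented purely for illustration) builds three whispers chains in which each retelling is the previous version plus a systematic drift plus a little random change: no drift at all, a constant drift (linear), and a slowly rotating drift (cyclic):

    import numpy as np

    rng = np.random.default_rng(1)

    def whispers_chain(n, drift):
        """Each retelling = previous version + systematic drift + a small random change."""
        versions = [np.zeros(2)]
        for i in range(1, n):
            versions.append(versions[-1] + drift(i) + rng.normal(0.0, 0.2, size=2))
        return np.array(versions)

    random_chain = whispers_chain(20, lambda i: np.zeros(2))                               # no systematic drift
    linear_chain = whispers_chain(20, lambda i: np.array([0.5, 0.05]))                     # constant direction
    cyclic_chain = whispers_chain(20, lambda i: np.array([np.cos(i / 3), np.sin(i / 3)]))  # rotating direction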
If these trajectories represented real data, we would certainly want to know why these three initial stories underwent such different evolutions in successive tellings. And that’s what Bartlett’s (and others’) research is about (though I don’t recall any research that uses this kind of analytic framework): How do stories, or whatever, change on successive tellings? It’s been so long since I read this material that I don’t remember much of it, though I do believe that one result is that stories of a kind familiar to the tellers change less than unfamiliar stories, which makes sense enough.
But what does this have to do with Jockers’ work?
Jockers’ graph of influence
Jockers created that graph to study how one author influenced another. His reasoning was simple: if one author influenced another, then their work should be similar. So he set out to measure the degree of similarity between the texts in his collection. Since he had already taken a great many measurements for each text, he was able to produce a graph in which the nodes represent texts and the lengths of the links between them give the degree of similarity. Here’s his graph:
[Figure: Jockers’ 3300-node graph of 19th century novels]

His data is in 600 dimensions (that’s how many measurements he has for each text), but he’s projected the graph onto two dimensions so that we can visualize it. What he discovered, much to his surprise, is that the graph has positioned the texts in roughly temporal order, from left (1800) to right (1899). That is to say, later texts tended to be to the right of earlier texts. Their orientation may vary a bit, but generally they were to the right. Why? After all, there was no date information in the database. This temporal gradient reflects some pattern of information in the database, but it is by no means obvious what.
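I don’t have Jockers’ data or code, but a toy reconstruction of the setup might look like the sketch below (assuming NumPy and networkx, with synthetic feature vectors standing in for his ~600 measurements, cosine similarity as the similarity measure, and networkx’s spring layout standing in for whatever force-directed layout he used). It links each text to its most similar neighbours, lays the graph out in two dimensions, and then asks how strongly one layout axis correlates with publication year:

    import numpy as np
    import networkx as nx

    # Synthetic stand-ins: 'features' plays the role of the ~600 measurements per text,
    # 'years' the publication dates. The dates are NOT used to build the graph.
    rng = np.random.default_rng(2)
    years = rng.integers(1800, 1900, size=100)
    features = rng.normal(size=(100, 600)) + 0.02 * (years - 1850)[:, None]  # weak temporal signal

    # Connect each text to its five most similar neighbours (cosine similarity).
    unit = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = unit @ unit.T
    G = nx.Graph()
    for i in range(len(sim)):
        for j in np.argsort(sim[i])[-6:-1]:      # five nearest neighbours, excluding the text itself
            G.add_edge(i, int(j), weight=float(sim[i, j]))

    # Force-directed 2-D layout, then check whether one axis tracks time.
    pos = nx.spring_layout(G, weight='weight', seed=0)
    xs = np.array([pos[i][0] for i in range(len(sim))])
    print(np.corrcoef(xs, years)[0, 1])   # the layout's orientation is arbitrary, so a temporal
                                          # gradient may run along any direction, not just this axis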
I don’t know what’s going on either, that is to say, I don’t know what accounts for the overall similarity in orientation that later points display with respect to their close prior neighbors. What interests me is simply the fact that this orientation similarity exists. I interpret that as meaning that the system has an inherent direction in time.
That is to say, I’m asserting that the system Jockers has observed (19th century Anglophone novels) exhibits a coherent linear trajectory of story evolution, as in the illustration above. But that illustration was concocted to depict what happens when a story is passed from one person to another in a chain. That is not what happens in Jockers’ system. How do we bridge the conceptual gap?
What Jockers has done is create a way of describing what we can call the stochastic form of a novel, a form that in this case has roughly 600 dimensions. It could be more or less, and many of the dimensions could be other than they are. All that matters is that we’ve got a lot of them. What he’s discovered – without actually looking for it – is that the stochastic form of 19th century novels evolves in a fairly coherent direction. It’s not completely coherent – he’s indicated there are texts that are ‘out of place’ in the graph – but it is mostly coherent. That very likely none of these novels derives from another in the way that one telling of, say, “Little Red Riding Hood” derives from a previous telling is irrelevant. We’re not interested in the identity of one text with another. We’re only interested in stochastic form. Where a text’s stochastic form comes from in any instance is irrelevant. All that matters is what that form is.
So, let’s take all the texts published in a single year and create an average stochastic form for that year. We take the average value along each dimension and treat the resulting values as the “average text” for that year. Now let’s graph those values against time. It should be a fairly straight line, perhaps with a wobble here and there, but mostly straight. The straightness of that line is an index of how coherently the system evolves in time.
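As a sketch of that calculation (again assuming NumPy, with synthetic stand-ins for the texts’ feature vectors and publication years), we can compute the yearly “average text” and then a crude straightness index: the net displacement from the first year’s average to the last, divided by the total length of the path through all the yearly averages. A value near 1 means the averages march in a nearly straight line; a value near 0 means they wander incoherently:

    import numpy as np

    # Hypothetical inputs: one feature vector per text, plus its publication year.
    rng = np.random.default_rng(3)
    years = rng.integers(1800, 1900, size=500)
    features = rng.normal(size=(500, 600)) + 0.05 * (years - 1850)[:, None]

    # "Average text" for each year: the mean along every dimension.
    year_values = np.unique(years)
    avg_texts = np.array([features[years == y].mean(axis=0) for y in year_values])

    # Crude straightness index: net displacement divided by total path length.
    steps = np.diff(avg_texts, axis=0)
    path_length = np.linalg.norm(steps, axis=1).sum()
    net_displacement = np.linalg.norm(avg_texts[-1] - avg_texts[0])
    print(net_displacement / path_length)   # 1.0 = perfectly straight, near 0 = incoherent wandering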
