An open letter to Alan Liu concerning the notion of a tabula rasa interpretation which he introduced, though not in his own person, in “The Meaning of the Digital Humanities” (PMLA 128, 2013, 409-423).

Hi Alan,
You know, in a way Stanley Fish anticipated the notion of a tabula rasa interpretation way back in his 1973 essay, “What Is Stylistics and Why Are They Saying Such Terrible Things About It?” (reprinted in Is There a Text in This Class?, which is where my page numbers come from). Fish takes on an article by the linguist Michael Halliday, remarking that Halliday has a considerable conceptual apparatus – an attribute of many modern linguistic theories: lots of categories and relationships, all tightly defined. After quoting a passage in which Halliday analyzes a single sentence from Through the Looking Glass, Fish remarks (p. 80):
When a text is run through Halliday’s machine, its parts are first disassembled, then labeled, and finally recombined in their original form. The procedure is a complicated one, and it requires many operations, but the critic who performs them has finally done nothing at all.

Now, though I am familiar with some of Halliday’s work, I’ve not yet read that particular essay. Still, Fish’s characterization seems fair, and would apply to many similar and even not-so-similar models. Note, however, that he frames Halliday’s essay as one of many lured on by “the promise of an automatic interpretive procedure” (p. 78).
That, it seems to me, is the tabula rasa interpretation which you see as the goal of at least some digital critics. To be sure, Halliday did his work manually, but by that time the computer was very much in the air. On the one hand, Chomsky’s linguistics was driven by the notion of an abstract computer; on the other, computer-based statistical stylistics was fairly well established, and Fish also hacks away at some of that work.
But, as far as I can tell, and I’ve been thinking about this for a lllllloooonng time, with one odd exception, there is never going to be any such thing as an automatic interpretive procedure.
The exception first.
If you really want a computer to crank out interpretations, readings, untouched by human hands, then you’re going to have to program and train the computer to simulate such a human interpreter – something David Hays and I imagined in our 1976 essay, “Computational Linguistics and the Humanist” (Computers and the Humanities, Vol. 10, 1976: 265-274). Just how that’s to be done I don’t know, but I think the general idea would be to hand-code basic (simulated) human functionality and then train and teach your golem critic the rest. How would we do that? Most likely in some approximation to the ways by which we train students to write interpretations.
And that implies that what this “automatic interpretive procedure” is going to crank out are approximations to the various symbolic, deconstructive, psychoanalytic, Marxist, feminist, etc. interpretations we’ve already got in abundance. And those approximations will be subject to (approximations to) the same limitations and failures, the same mediated finitude, as our human readings exhibit.
What a letdown! All that work and we’re back where we started. Well, not quite. We made the golem. We know how it works. Shades of Vico! Verum factum.
Let’s set that aside.
For I don’t see that happening any time soon. When I hatched that fantasy I figured it would take 20 or so years to realize, but Hays and I didn’t put that guess into the paper. Back in those days – I was a graduate student – I was always pestering Hays for estimates as to when this or that thing would be possible. And Hays would always refuse to provide such estimates, regarding them as foolish.
Hays had been with computational linguistics from the beginning, when it wasn’t called that. Rather, it was a specific task, machine translation. You feed the computer a text in one language, say Russian, and it cranks out an interpretation in another language, say English. In the 50s and 60s the Federal Government put a lot of money into that effort and then pulled the plug on the funding. It wasn’t getting enough bang for those Federal bucks.
What had happened with Hays and others, however, is that they’d learned a lot, a lot about language and a lot about computing. Things they couldn’t have learned without making the effort to crank out translations by machine. That effort changed their imagination, their sense of the thinkable. And so they continued on in those new directions, but without generous government funding.
The funding is neither here nor there. The failure, the learning, and the change of imagined possibility – that’s what’s important.
Getting back to that tabula rasa automatic interpretive procedure, as I said, it’s not gonna happen. What IS happening now is that we’ve got an array of computational procedures that we can apply to texts, singly, in handful-sized batches, or by the thousands, and those procedures yield results. It’s up to us to interpret those results in a meaningful way. As you know, it sometimes takes a bit of work to put results in a form that we can interpret. There’s a lot of work on visualization techniques, and a lot of playing around.
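To make that concrete, here’s a minimal sketch of the sort of procedure I have in mind – nothing more than an illustration, with made-up file names – that counts word frequencies across a small batch of texts. The machine delivers the numbers readily enough; what, if anything, they tell us about the texts is another matter.

    # A minimal sketch of one of the simplest "computational procedures"
    # we can run over a batch of texts. The file names are hypothetical.
    from collections import Counter
    import re

    def word_frequencies(path):
        """Return a Counter of lowercase word tokens in one text file."""
        with open(path, encoding="utf-8") as f:
            tokens = re.findall(r"[a-z']+", f.read().lower())
        return Counter(tokens)

    corpus = ["austen_emma.txt", "dickens_bleak_house.txt", "eliot_middlemarch.txt"]

    for path in corpus:
        freqs = word_frequencies(path)
        # The procedure yields counts; deciding what, if anything,
        # they mean about the texts is left entirely to us.
        print(path, freqs.most_common(10))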
As for interpreting those results in a meaningful way, what’s your pleasure? We’ve got Moretti interpreting some of his results into Wallerstein’s world-systems theory; Heuser and Le-Khac interpreting into Raymond Williams; I vaguely recall, but cannot cite, a recent paper looking at hundreds of versions of “Little Red Riding Hood” in cultural evolutionary terms. How we interpret these results is up to us. No doubt we will soon be creating new frameworks into which we interpret these strange conceptual objects we’re creating.
This is a brave new world, no? We set out to find the elusive tabula rasa interpretation, the unmediated paradisiacal vision, and we end up in a place we’d never imagined and can scarcely comprehend. I’d say we’re doing pretty well.
With regards,
Bill B