
Of Lit Crit “Stars” and AI “Godfathers” – In What Way is Geoffrey Hinton Like Jacques Derrida?

By Bbenzon

Back in 1997 David Shumway published “The Star System in Literary Studies” in PMLA. He begins with a paragraph about George Lyman Kittredge, of Harvard’s English Department at the end of the 19th and beginning of the 20th century, noting that Kittredge was unknown to the public. Here’s the opening of his second paragraph:

Kittredge, who virtually founded Chaucer studies in the United States, stood at the head of a professional genealogy that controlled the field for many years after his death, but he was not a star. Nor were any of his illustrious contemporaries or near contemporaries, such as John Manly, John Livingston Lowes, and so on. Why they were not stars and Judith Butler, Jacques Derrida, Stanley Fish, Henry Louis Gates, Jr., Fredric Jameson, Gayatri Chakravorty Spivak, and other figures in the academy are is the subject of this essay.

What I’m wondering is whether the so-called AI “Godfathers” represent a similar phenomenon in contemporary AI. Strictly speaking, I believe the Godfather term applies to the three winners of the 2018 Turing Award, Yoshua Bengio, Yann LeCun, and Geoffrey Hinton, but there are others in AI with a similar status: Ilya Sutskever, Hinton’s student and a co-founder of OpenAI; Andrej Karpathy, the former director of AI at Tesla, who just made waves, albeit little ones, by resigning from OpenAI; Demis Hassabis, co-founder of DeepMind; and perhaps even such figures as Nick Bostrom and Eliezer Yudkowsky, who aren’t AI researchers but are highly influential through their commentary. Perhaps Sam Altman, the heroic CEO who fought off a recalcitrant board, is a star as well.

But first let’s get back to literary criticism. Shumway notes that there have been literary scholars in the past (relative to 1997) who were powerful and influential and who “probably received disproportionate recognition for their contributions compared with that accorded less well known scholars for comparable work.” The lit crit stars, whom he analogizes to movie stars (hence the term), are a product of the last quarter of the twentieth century. He dates their public emergence to a 1987 New York Times Magazine profile of the “Yale Critics,” the so-called “Yale Mafia”: Harold Bloom, Geoffrey Hartman, J. Hillis Miller, and Jacques Derrida. He goes on to note that “The star system in literary studies, like that of the studio era, involves identification with a person who represents an ideal.”

Most of these critical stars are identified with capital-T Theory, a catchall term for the variety of schools of thought that emerged in the last three decades of the century. Harold Bloom, himself a star, nonetheless came to separate himself from the rest, categorizing them as the School of Resentment. [This separation, by the way, seems anti-mimetic, given that René Girard, the theorist of mimetic desire, was himself such a star.] In a crucial passage, Shumway notes:

Theory not only gave its most influential practitioners a broad professional audience but also cast them as a new sort of author. Theorists asserted an authority more personal than that of literary historians or even critics. As we have seen, the rhetoric of literary history denied personal authority; in principle, even Kittredge was just another contributor to the edifice of knowledge. Criticism was able to enter the academy only by claiming objectivity for itself, so academic critics could not revel in personal idiosyncrasy. They developed their own critical perspectives, to be sure, but all the while they continued to appeal to the text as the highest authority. In the past twenty years theory has undermined the authority of the text and of the author and replaced it with the authority of systems...

Note the word “author” at the end of the first sentence in that passage. Remember, Shumway is a literary critic writing about literary criticism. In that field, the primary and most important authors are the creators of the literary works that the field tends (through editorial work and the preparation of critical editions) and studies. All of a sudden these lit crit stars are up there in the firmament with Dickens, Sappho, Dante, Austen, Faulkner, and – gasp! – the Blessed Bard his own Bad Self. That’s the kind of authority they have.

Shumway then notes: “Because authority in the natural sciences is rooted in a consensus about such norms, the hierarchies in these fields have not developed into star systems of the sort I have described here.” That brings us to AI, which is not a science, though it includes some forms of scientific knowledge within its scope. It is an engineering discipline that lives or dies on what it builds.

The field is attempting to create computer systems that are as “intelligent” as human beings across a wide range of tasks. But the concept of “intelligence” is difficult to define, as is the idea of AGI (artificial general intelligence). The field has created remarkable and dazzling technology for language and images. But the technology has a “black box” aspect that has so far resisted analysis. We don’t know how it works. Nor do we know how to assess its performance or to project that performance into the future.

Concerning the rise of literary stars, Shumway noted: “As theory has called into question the traditional means by which knowledge has been authorized, it may be that the construction of the individual personality has become an epistemological necessity.” That seems like the state of AI today. We’ve got a very complicated technology involving a blend of engineering, science, and alchemy, lacking objective knowledge. Not only that, the technology is enormously important and will change the way we live. In the absence of objective knowledge, what choice do we have but to steer by the freakin' stars?

* * * * *

These remarks were occasioned by the deference Nathan Gardels gave to Geoffrey Hinton in the current issue of Noema:

Beyond the avid venture capitalists and digital giants promoting the rapid commercialization of generative AI in all its promise, more sober and critical voices, not least the pioneers of the very technology among them, worry that it can become an “existential threat to humanity.” But few of those in the know ever explain, in lay terms you and I might understand if we try, what that actually means and how it may come about.

Considered the “godfather of AI,” Geoffrey Hinton is more in the know than most — and thus more concerned than most over the dangers of fostering superintelligence smarter than we can ever be. When OpenAI’s ChatGPT4 was released last year, he experienced an “epiphany” that led him to defect from his research post at Google, expressing regret over much of his life’s work.

In the Romanes Lecture delivered at Oxford University last week, Hinton explained the logic of his fears with the same step-by-step rigor by which he helped devise the early artificial neural networks that are the foundation of the superintelligence that so concerns him.

On the first highlighted passage: And we are so very lucky that the Great Man has taken time out of his busy schedule to tell us what’s on his mind.

On the second highlighted passage: I’ve not watched this particular video, but I’ve seen other recent performances by Hinton and I do not hold out high hopes for the rigor of this effort. For my opinion of Hinton on such matters, see the section “The Experts Speak for Themselves” in my recent 3QD piece, “Aye Aye, Cap’n! Investing in AI is like buying shares in a whaling voyage captained by a man who knows all about ships and little about whales.”

Beyond that, “step-by-step” is not how I’d characterize research in AI – or any other field, for that matter. Yes, there is rigor, but there’s also chance and dumb luck. None of this has come about through a carefully executed plan in pursuit of a well-defined goal. “Step-by-step” is wishful thinking; it is star-struck.

