GPT-3 Has Rendered Alan Turing 1950 Obsolete. We Need to Enter a World He Could Not Have Imagined.

By Bbenzon @bbenzon
GPT-3 has rendered Alan Turing 1950 obsolete. We need to enter a world he could not have imagined.

And by that I do not mean that GPT-3 has made the so-called Turing Test – which Turing called the Imitation Game (Computing Machinery and Intelligence, 1950) – obsolete. It’s my understanding that philosophers have long since found it to be useless; Weizenbaum’s ELIZA put the kibosh on it back in the mid-1960s. No, I mean something a bit different, and more difficult to conceptualize.

In that 1950 paper Turing gave expression to an old idea, a dream, that humankind should create an artificial human. Mary Shelley’s Dr. Frankenstein pursued one version of that idea. A much older version showed up in the title of a book by Norbert Wiener, God & Golem, Inc.: A Comment on Certain Points Where Cybernetics Impinges on Religion (1964). The imitation game was a device Turing proposed for determining whether or not a convincing simulacrum of humanity had been created. What GPT-3 has rendered obsolete is the intellectual drive to create that artefact. Paradoxically, it rendered that impetus obsolete by the very fact that it passed, or if you will, outstripped, the Turing Test more convincingly than any previous system had done.

Artificial intelligence originated as an enterprise devolved from the dream of creating, if not an artificial human, at least an artificial (human) mind. As such it issued prediction after prediction about when a computer would equal or surpass humans in some cognitive activity, chess most prominently, but everydamnthing else as well. The work that eventuated in GPT-3 is in that lineage, albeit based on technology quite different from that imagined by those worthies gathered at Dartmouth for that 1956 meeting.

But GPT-3 shocked everyone with its facility, even its creators (perhaps especially them, and they had intimations of things to come in GPT-2). It’s in that moment of shock that Turing 1950 was rendered obsolete. All that intellectual effort had produced something that was at one and the same time: 1) unexpected, 2) too damn apparently human, and 3) thereby a fulfillment of the 1950 version of that ancient dream. It’s the unexpectedness of the success that’s so confounding.

That 1950 version of the dream of artificial humans was grounded in a certain system of thought – to use the phrase that Hays and I have employed in our theory of cultural ranks. That system of thought was capable of conceiving artificial neural nets, and of conceiving the transformer architecture. But it is not capable of understanding the language models produced by transformers, nor those produced by other artificial neural nets for that matter; and it’s GPT-3 that shocked everyone. And that is why I say it rendered Turing 1950 obsolete. We’re facing a different world now and we have to adjust our hopes and fears, our dreams and nightmares, accordingly.

That’s easier said than done. All the chaos that’s been occasioned by ChatGPT – but other systems too – is a manifestation of that process of adjustment, of profound cultural change. There is no guarantee of successful adaptation, nor, for that matter, is there some singular ideal form of successful adaptation.

One of the chief indices of our inability to comprehend these LLMs is the persistence of the idea that they are prediction machines – “stochastic parrots”, “autocomplete on steroids”. Prediction is a device, but it’s not the goal. The goal is a model, and the model is about the entanglement of meaning among words and texts, something I’ve discussed in Entanglement and intuition about words and meaning. The observation that these models are opaque, unintelligible, black boxes, is another index of the insufficiency of our current system of thought, of the system of thought that gave rise to the models in the first place. Yes, they are unintelligible, but that’s not a property inherent in the models themselves, like, e.g., the number of layers or parameters they have. It’s a property of the relationship between the models and some system of thought. The internal combustion engine would have been unintelligible to Aristotle or Archimedes, but that’s not because those engines are inherently unintelligible. Rather, Aristotle and Archimedes didn’t have a system of thought in which internal combustion engines could be understood.
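
To make the prediction-as-device point concrete, here is a minimal, purely illustrative sketch in Python: a toy bigram counter, nothing remotely like GPT-3’s transformer, and the corpus, the one-word window, and the use of raw counts are all my own simplifying assumptions. The prediction rule is the device used in training; the co-occurrence vectors that training leaves behind are a crude stand-in for a model of how words are entangled with one another.

```python
# Toy sketch: prediction as the training device, relations among words as what remains.
# Corpus, window size, and raw counts are simplifying assumptions, not how GPT-3 works.
from collections import Counter, defaultdict
import math

corpus = "the cat sat on the mat the dog sat on the rug".split()

# "Prediction as device": count which word follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Most likely next word under the toy model."""
    return bigrams[word].most_common(1)[0][0] if word in bigrams else None

# "Model as goal": the same counts, read as vectors, encode how words relate.
vocab = sorted(set(corpus))

def vector(word):
    return [bigrams[word][w] for w in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

print(predict_next("cat"))                   # 'sat': the prediction rule at work
print(cosine(vector("cat"), vector("dog")))  # high: 'cat' and 'dog' occupy similar positions
```

Scale that up by many orders of magnitude, replace counts with learned parameters, and you have something whose behavior we can elicit but whose internal structure our current system of thought does not yet know how to read.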

Now, it’s one thing to observe that the models are unintelligible. It’s quite something else to believe/fear that that unintelligibility is inherent in them and, correlatively, in the human mind as well. It’s clear to me that some Doomers actively cultivate the notion that these models are unintelligible.

This opacity, this unintelligibility, is not all of a sudden a new phenomenon. It’s been with us ever since Frank Rosenblatt conceived of perceptrons, the first artificial neural nets. But it was not problematic in those older systems. It is the unexpected mimetic capacity of GPT-3 and later LLMs that has rendered that opacity problematic. It will remain problematic as long as we (insist upon continuing to) remain tethered to the system of thought which gave birth to these wonderful devices.
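
For what it’s worth, here is a small perceptron sketch in Python, learning logical AND. The learning rule, the learning rate, and the task are illustrative assumptions in the spirit of Rosenblatt’s device, not a reconstruction of his actual procedure. Even at this tiny scale, what learning produces is just a handful of numbers; the opacity was there from the start, it simply wasn’t troubling when this was all a perceptron did.

```python
# A minimal perceptron learning logical AND (illustrative assumptions throughout).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # inputs and targets for AND

w = [0.0, 0.0]  # weights
b = 0.0         # bias
lr = 0.1        # learning rate

for _ in range(20):  # a few passes over the data suffice for a linearly separable task
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

# The learned "knowledge" is nothing but these numbers; they classify correctly,
# but they do not, in themselves, explain AND.
print(w, b)
```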

It's time to move on.

