Wolfram on Machine Learning

By Bbenzon @bbenzon

Wolfram has a post in which he reflects on the work he's done in the last five years: Five Most Productive Years: What Happened and What's Next. On ChatGPT:

So at the beginning of February 2023 I decided it'd be better for me just to write down once and for all what I knew. It took a little over a week [...], and then I had an "explainer" (that ran altogether to 76 pages) of ChatGPT.

Partly it talked in general about how machine learning and neural nets work, and how ChatGPT in particular works. But what a lot of people wanted to know was not "how" but "why" ChatGPT works. Why was something like that possible? Well, in effect ChatGPT was showing us a new science discovery, about language. Everyone knows that there's a certain syntactic grammar of language: like that, in English, sentences typically have the form noun-verb-noun. But what ChatGPT was showing us is that there's also a semantic grammar, some pattern of rules for what words can be put together and make sense.

My version of "semantic grammar" is the so-called "great chain of being," which is about conceptual ontology, roughly: "rules for what words can be put together and make sense." Here's a post where I discuss it in the context of Wolfram's work: Stephen Wolfram is looking for "semantic grammar" and "semantic laws of motion" [Great Chain of Being].
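To make the idea a bit more concrete, here's a toy sketch in Python of what "rules for what words can be put together and make sense" might look like when hung on a crude chain-of-being ontology. The hierarchy, lexicon, and verb frames are my own illustrative inventions, not a claim about how ChatGPT or Wolfram's semantic grammar actually works:

```python
# Toy illustration of a "semantic grammar": which subject-verb-object
# combinations make sense, given a crude great-chain-of-being ontology.
# Everything here (hierarchy, lexicon, verb frames) is hypothetical.

HIERARCHY = ["stuff", "plant", "animal", "human"]   # low to high on the chain

LEXICON = {
    "rock": "stuff", "oak": "plant", "dog": "animal", "farmer": "human",
}

# Each verb sets the minimum "level" its subject and object must have.
VERB_FRAMES = {
    "eats":    {"subject": "animal", "object": "stuff"},
    "plants":  {"subject": "human",  "object": "plant"},
    "ponders": {"subject": "human",  "object": "stuff"},
}

def level(kind: str) -> int:
    return HIERARCHY.index(kind)

def makes_sense(subject: str, verb: str, obj: str) -> bool:
    frame = VERB_FRAMES[verb]
    return (level(LEXICON[subject]) >= level(frame["subject"])
            and level(LEXICON[obj]) >= level(frame["object"]))

print(makes_sense("dog", "eats", "oak"))      # True: grammatical and sensible
print(makes_sense("rock", "ponders", "dog"))  # False: grammatical, but nonsense
```

The point of the sketch is only that "rock ponders dog" can be perfectly grammatical while violating a semantic rule, which is the kind of distinction a semantic grammar is supposed to capture.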

A bit later, Wolfram says more about what he has recently discovered about the "essence of machine learning":

So just a few weeks ago, starting with ideas from the biological evolution project, and mixing in some things I tried back in 1985, I decided to embark on exploring minimal models of machine learning. I just posted the results last week. And, yes, one seems to be able to see the essence of machine learning in systems vastly simpler than neural nets. In these systems one can visualize what's going on, and it's basically a story of finding ways to put together lumps of irreducible computation to do the tasks we want. Like stones one might pick up off the ground to put together into a stone wall, one gets something that works, but there's no reason for there to be any understandable structure to it.
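Here's a hedged toy sketch of that idea, done my way in Python rather than with Wolfram's actual minimal models: a small program is assembled out of a few fixed "lumps" of computation and adapted by random mutation, with no gradients anywhere. The primitives, the target function, and the mutation budget are arbitrary choices of mine.

```python
import random

# Toy "minimal machine learning" sketch (hypothetical, not Wolfram's models):
# build a program from a few fixed lumps of computation and adapt it by
# random mutation, keeping any change that doesn't worsen the fit.

PRIMITIVES = {                        # the "stones" we can pick up
    "inc":    lambda x: (x + 1) % 8,
    "double": lambda x: (2 * x) % 8,
    "xor5":   lambda x: x ^ 5,
    "noop":   lambda x: x,
}

def target(x):                        # arbitrary mapping we want to reproduce
    return (3 * x + 1) % 8

def run(program, x):
    for name in program:
        x = PRIMITIVES[name](x)
    return x

def loss(program):
    return sum(run(program, x) != target(x) for x in range(8))

random.seed(1)
program = [random.choice(list(PRIMITIVES)) for _ in range(6)]

for _ in range(20000):                # fixed mutation budget, no gradients
    trial = program[:]
    trial[random.randrange(len(trial))] = random.choice(list(PRIMITIVES))
    if loss(trial) <= loss(program):  # keep neutral or improving mutations
        program = trial

print(loss(program), program)         # whatever works: a wall of found stones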

And the future? Among other things: "symbolic discourse language":

But finally there was blockchain, and with it, smart contracts. And around 2015 I started thinking about how one might represent contracts in general not in legalese but in some precise computational way. And the result was that I began to crispen my ideas about what I called "symbolic discourse language". I thought about how this might relate to questions like a "constitution for AIs" and so on. But I never quite got around to actually starting to design the specifics of the symbolic discourse language.

But then along came LLMs, together with my theory that their success had to do with a "semantic grammar" of language. And finally now we've launched a serious project to build a symbolic discourse language. And, yes, it's a difficult language design problem, deeply entangled with a whole range of foundational issues in philosophy. But as, by now at least, the world's most experienced language designer (for better or worse), I feel a responsibility to try to do it.

In addition to language design, there's also the question of making all the various "symbolic calculi" that describe in appropriately coarse terms the operation of the world. Calculi of motion. Calculi of life (eating, dying, etc.). Calculi of human desires. Etc. As well as calculi that are directly supported by the computation and knowledge in the Wolfram Language.

And just as LLMs can provide a kind of conversational linguistic interface to the Wolfram Language, one can expect them also to do this to our symbolic discourse language. So the pattern will be similar to what it is for Wolfram Language: the symbolic discourse language will provide a formal and (at least within its purview) correct underpinning for the LLM. It may lose the poetry of language that the LLM handles. But from the outset it'll get its reasoning straight.
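The pattern Wolfram is describing, a conversational front end feeding a formal layer that "gets its reasoning straight," can be sketched in a purely hypothetical way. In the Python toy below, a stub stands in for the LLM, a tiny dataclass stands in for the symbolic discourse language, and a one-rule "calculus" stands in for the coarse calculi he mentions; none of it reflects the actual design of Wolfram's project:

```python
from dataclasses import dataclass

# Hypothetical sketch: loose language goes in, a symbolic statement comes
# out, and a deterministic rule set draws the consequences.

@dataclass(frozen=True)
class Statement:                      # a tiny symbolic form for one kind of claim
    agent: str
    action: str
    obj: str
    recipient: str

def llm_stub(utterance: str) -> Statement:
    # Stand-in for an LLM front end: map one known phrasing to symbolic form.
    if " lends " in utterance:
        agent, _, rest = utterance.partition(" lends ")
        obj, _, recipient = rest.partition(" to ")
        return Statement(agent, "lend", obj, recipient)
    raise ValueError("utterance not understood")

RULES = {
    # A toy "calculus": coarse rules for what follows from an action.
    "lend": lambda s: [Statement(s.recipient, "owe-return-of", s.obj, s.agent)],
}

def consequences(s: Statement):
    return RULES.get(s.action, lambda _: [])(s)

stmt = llm_stub("Alice lends a book to Bob")
print(stmt)
for c in consequences(stmt):
    print("=>", c)
```

The conversational layer may lose nuance or get the phrasing wrong, but once the statement is in symbolic form, what follows from it is fixed by the rules, which is the sense in which the formal layer keeps the reasoning straight.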

The symbolic discourse language is a broad project. But in some sense breadth is what I have specialized in. Because that's what's needed to build out the Wolfram Language, and that's what's needed in my efforts to pull together the foundations of so many fields.