On the Explicit Construction of Cognitive Ontology: From “Salt” to “Sodium Chloride”

By Bbenzon @bbenzon

I have long used the conceptual difference between “salt” and “sodium chloride” to illustrate the idea of conceptual ontology. Except for impurities in (samples of) salt, they are the same thing. But conceptually they are quite different. Salt hardly needs any formal definition at all; it's a basic taste and a common physical substance. But sodium chloride is expressed in conceptual terms that weren’t fully developed until the 19th century.

Lately I’ve been wondering what would be required to develop cognitive accounts of ontological concepts in a rich and full way. I’ve been talking about conceptual ontology using the idea of the Great Chain of Being. But I’ve always thought of that as a stand-in for a more thorough treatment, one I’ve never gotten around to. What would that more thorough treatment be like?

It's fairly obvious what we need to do with salt. It’s a white granular substance, and we know how to represent that sort of thing with the knowledge-representation tools invented back in the 1970s and 1980s. Nor should there be much difficulty in explicitly accounting for texture, taste, and whatever odor there is.
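Here is a minimal sketch of what such a representation might look like, in the spirit of those 1970s-style frame systems. The slot names and fillers are my own, chosen purely for illustration:

```python
# A minimal frame-style representation of /salt/, in the spirit of
# 1970s-80s knowledge-representation systems (slots and fillers).
# Slot names and fillers are illustrative, not canonical.

salt_frame = {
    "isa": "physical-substance",
    "appearance": {"color": "white", "form": "granular"},
    "texture": "gritty",
    "taste": "salty",          # grounded directly in a basic taste quality
    "odor": "none-to-faint",
}

def describe(frame: dict) -> str:
    """Render the frame as a rough natural-language gloss."""
    app = frame["appearance"]
    return (f"a {app['color']}, {app['form']} {frame['isa']} "
            f"that tastes {frame['taste']}")

print(describe(salt_frame))
# -> a white, granular physical-substance that tastes salty
```

Notice that nothing in this frame depends on atomic theory; every filler bottoms out in ordinary perception and common experience. But what about sodium chloride?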

That’s not so clear. Oh, there’s been lots of work on formal ontologies for informatic purposes. John Sowa has worked on this, and Barry Smith’s website has lots of material. But that’s not quite what I had in mind.

For example, chemical experimentation typically involves weighing substances very carefully. In the 18th and 19th centuries chemists might have used a mechanical analytical balance, something like the one pictured below:

[Image: a mechanical analytical balance. Photo by Sarcyn, licensed under a CC BY-SA 3.0 Unported License.]

Such balances would have been used in the experiments that identified the chemical elements, such as sodium and chlorine, recognized in modern atomic theory. Since that is the case, the construction and operation of such balances is part of the conceptual web that supports the concept, /chemical element/, as is the mathematics used in analyzing these experiments. I wouldn’t expect that construction and operation to be directly implicated in the definition of chemical element, but there would be an explicit, traceable linkage between the definition and that construction and those operations. There would also be traceable links to reports in formal journals. Those reports would contain specific weights, calculations, and so forth.
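One way to picture that conceptual web is as a typed graph in which the definition of /chemical element/ is linked, step by step, back to instruments, procedures, and published reports. Here is a minimal sketch; all of the node names and link types are hypothetical, invented for illustration:

```python
# A toy provenance graph linking the concept /chemical element/ back to
# the instruments, procedures, and reports that support it. All node
# names and link types here are hypothetical illustrations.

edges = [
    ("chemical-element", "defined-via", "decomposition-experiments"),
    ("decomposition-experiments", "reported-in", "journal-report"),
    ("decomposition-experiments", "uses-instrument", "analytical-balance"),
    ("decomposition-experiments", "uses-math", "mass-ratio-calculations"),
    ("analytical-balance", "built-by", "instrument-making-practice"),
    ("journal-report", "records", "specific-weights-and-calculations"),
]

def support_web(concept: str, depth: int = 0) -> None:
    """Trace everything the concept is explicitly linked to, transitively."""
    for src, link, dst in edges:
        if src == concept:
            print("  " * depth + f"{src} --{link}--> {dst}")
            support_web(dst, depth + 1)

support_web("chemical-element")
```

The point is the traceability: the balance is not part of the definition of /chemical element/, but there is an explicit path from the definition down to the instrument, its operation, and the published reports.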

THAT’s the kind of thing I have in mind when I talk about a “thorough treatment” of conceptual ontology. On the one hand we have the sensorimotor processes involved in making observations and conducting experiments. That’s the bottom layer, if you will, the foundation, of this cognitive construction. Those objects and processes are going to be bound into complex patterns over which abstractions are made, and those abstractions will end up as the terms directly involved in, in this case, 19th-century atomic theory and its elaboration in chemistry.
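The layering might be sketched like this, with sensorimotor records at the bottom, recurring patterns in the middle, and abstract terms on top; the particular entries are, again, my own illustrations:

```python
# An illustrative three-layer construction: sensorimotor records at the
# bottom, recurring patterns in the middle, abstract theory terms on top.
# All of the layer contents are invented for illustration.

# The bottom layer: raw sensorimotor episodes.
sensorimotor = [
    "see white grains", "weigh sample on balance", "taste salty",
    "observe dissolution in water", "record mass before and after",
]

# Recurring bundles of those episodes get bound into named patterns.
patterns = {
    "careful-weighing": ["weigh sample on balance", "record mass before and after"],
    "dissolving-salt":  ["see white grains", "observe dissolution in water"],
}

# Abstractions over those patterns become the working terms of the theory.
abstractions = {
    "conservation-of-mass": ["careful-weighing"],
    "solution":             ["dissolving-salt"],
}

def ground(term: str) -> list[str]:
    """Unfold an abstract term down to the sensorimotor episodes beneath it."""
    return [obs for p in abstractions[term] for obs in patterns[p]]

print(ground("conservation-of-mass"))
# -> ['weigh sample on balance', 'record mass before and after']
```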

I’m pretty sure that, if you ask your favorite chatbot about these things, it will tell you about salt, sodium chloride, sodium and chlorine, atoms and elements, analytical balances, solutions, gases, arithmetic, and so forth and so on. All of that stuff is there. But I haven’t the foggiest idea of what kinds of connections are latent in the model. It is by no means clear that all of the connections implied in my previous paragraph would be there in LLMs.
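One crude way to start probing for such connections is to compare embedding similarities between terms. This is a sketch, not real interpretability work; the model choice is simply an assumption, and cosine scores only hint at association strength:

```python
# A crude probe for latent connections: cosine similarity between term
# embeddings. High scores suggest association in the embedding space,
# but say nothing about whether the specific evidential links sketched
# above are actually represented in a model.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # model choice is an assumption

terms = ["salt", "sodium chloride", "analytical balance",
         "chemical element", "atomic theory"]
embeddings = model.encode(terms)

for i in range(len(terms)):
    for j in range(i + 1, len(terms)):
        score = util.cos_sim(embeddings[i], embeddings[j]).item()
        print(f"{terms[i]!r} ~ {terms[j]!r}: {score:.2f}")
```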

Thus, when I talk about LLMs as digital wilderness, I am implying that it is there to be explored, mapped, and ultimately “domesticated.” What do I mean by domestication? I mean developing a rich and full symbolic cognitive account of some intellectual domain. In order to do that, we’re going to need to know how LLMs work internally. That’s just the beginning. I figure different intellectual communities will take responsibility for different regions of the digital wilderness. Getting the whole thing domesticated? That’s the work of intellectual generations. The idea that one day we’ll achieve the magical AGI, which will then lead to an AI take-off in which everything gets worked out in a matter of hours, days, weeks, or months at the most, is pure foolishness.

