From Wikipedia:
In computer science and information science, an ontology encompasses a representation, formal naming, and definition of the categories, properties, and relations between the concepts, data, and entities that substantiate one, many, or all domains of discourse. More simply, an ontology is a way of showing the properties of a subject area and how they are related, by defining a set of concepts and categories that represent the subject.
Every academic discipline or field creates ontologies to limit complexity and organize data into information and knowledge. Each uses ontological assumptions to frame explicit theories, research and applications. New ontologies may improve problem solving within that domain. Translating research papers within every field is a problem made easier when experts from different countries maintain a controlled vocabulary of jargon between each of their languages.
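To make the computer-science sense concrete, here is a minimal sketch, assuming a toy chemistry vocabulary of my own invention: categories, their properties, and the relations between them, represented as plain data. It is an illustration of the idea, not any particular ontology language.

```python
# A minimal, hypothetical sketch of the computer-science sense of "ontology":
# named categories, their properties, and relations between them.
# The toy chemistry domain is illustrative only.

ontology = {
    "categories": {
        "Element":  {"properties": ["symbol", "atomic_number"]},
        "Compound": {"properties": ["formula"]},
        "Reaction": {"properties": ["conditions"]},
    },
    "relations": [
        ("Compound", "composed_of", "Element"),
        ("Reaction", "consumes", "Compound"),
        ("Reaction", "produces", "Compound"),
    ],
}

def related_categories(relation_name):
    """Return (subject, object) category pairs linked by the named relation."""
    return [(s, o) for (s, r, o) in ontology["relations"] if r == relation_name]

print(related_categories("composed_of"))  # [('Compound', 'Element')]
```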
This differs from what philosophers have generally meant when they talk of ontology. They are concerned with what’s really real about the world, not how people think about the world. The study of conceptual ontology is about what people think, regardless of whether or not it is true.
But, you might be thinking, aren’t philosophers, even the best of them, only human beings and so their thoughts about ontologies are only thoughts? Ah...
Think of a set of building blocks: Lego pieces, Erector set components, or, for that matter, the various components that go into the construction of, say, actual buildings. There is a finite set of distinct types of objects in each of these collections. That set of types is your ontology. It places constraints on what you can build. But what you can actually build depends on your imagination and determination, plus, of course, having enough tokens of each type to complete the job.
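Here is a small, hypothetical sketch of that point, assuming a toy set of block types: the ontology is just the finite set of types, and a construction is buildable only if every piece it uses is a token of one of those types and enough tokens are on hand.

```python
# A toy illustration: the ontology is a finite set of block types,
# and it constrains which constructions are possible.
# The type names and the "can_build" check are my own illustrative assumptions.
from collections import Counter

BLOCK_TYPES = {"brick", "plate", "axle", "gear"}   # the ontology: a finite set of types

def can_build(construction, inventory):
    """A construction (a list of pieces, each named by type) is buildable only if
    every piece is a recognized type and enough tokens of each type are on hand."""
    needed = Counter(construction)
    return all(piece in BLOCK_TYPES for piece in needed) and \
           all(inventory.get(piece, 0) >= n for piece, n in needed.items())

print(can_build(["brick", "brick", "gear"], {"brick": 10, "gear": 1}))  # True
print(can_build(["brick", "wormhole"], {"brick": 10}))                  # False: "wormhole" is not a type in the ontology
```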
John Sowa gives a top-level set of categories, based on the work of Charles Sanders Peirce and Alfred North Whitehead, in his book, Knowledge Representation (2000). He argues that that set of categories is the conceptual ontology fundamental to all human thought.
I’ve got my doubts about that, but don’t want to argue it here and now. There may well be some elements that are universal among humans, but I will say that each culture has its own underlying conceptual ontology. And ontologies change over the long duration; thus, for example, the ontology of 19th century chemistry is different from that of alchemy. More generally, when Thomas Kuhn talks about revolutionary science vs. normal science, he’s talking about regimes where the underlying ontology changes (revolutionary science) versus those where it remains unchanged (normal science).
My set of conceptual Lego pieces was complete by the time I completed my master’s thesis on “Kubla Khan” in 1972. It was rich enough that I was able to learn Hays’s computational semantics and, on that basis, imagine Prospero, the system that could “read” Shakespeare. When the possibility of actually constructing Prospero disappeared, the set of conceptual Lego pieces – my conceptual ontology – remained unchanged. But my sense of what one can build with those pieces changed.
When I began (email) conversations with Walter Freeman about the complex dynamics of the nervous system, I was able to do so with that set of conceptual Lego pieces (ontology) – though, keep in mind, I don’t command the underlying mathematics and so have to work by analogy and metaphor. That same conceptual ontology has allowed me to conceive of attractor nets, networks of logical operators over attractors in various attractor landscapes, where each landscape corresponds to a neurofunctional area in the brain. When I began thinking seriously about deep learning and artificial neural nets, I did so in terms of those ontological primitives. They allowed me to see both that GPT-3 represents a conceptual advance and that such technology is not sufficient in itself.
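Purely as an illustrative sketch, one might notate the bare structure of that idea as follows. The names, the areas, and the particular operators are placeholders of my own choosing, not a worked-out formalism: landscapes stand in for neurofunctional areas, each containing named attractors, and the net links attractors with logical operators.

```python
# A bare-bones, illustrative sketch of the attractor-net idea:
# each landscape corresponds to a neurofunctional area and contains attractors;
# the net links attractors with logical operators. All names are placeholders.
from dataclasses import dataclass, field

@dataclass
class Landscape:
    area: str                                   # neurofunctional area this landscape stands for
    attractors: set = field(default_factory=set)

@dataclass
class AttractorNet:
    landscapes: dict = field(default_factory=dict)  # area name -> Landscape
    links: list = field(default_factory=list)       # (operator, source attractor, target attractor)

    def add_attractor(self, area, name):
        self.landscapes.setdefault(area, Landscape(area)).attractors.add(name)

    def link(self, operator, source, target):
        self.links.append((operator, source, target))

net = AttractorNet()
net.add_attractor("visual area", "edge-pattern")
net.add_attractor("auditory area", "phoneme-pattern")
net.link("AND", ("visual area", "edge-pattern"), ("auditory area", "phoneme-pattern"))
```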
Remember, finally, that that conceptual ontology took shape through investigating the form and meaning of “Kubla Khan.” That ontology was ‘designed,’ if you will, to encompass a rich example of verbal artistry. It ranges over neurons, logical operators, poems, and more.
What has happened over the course of my career is that my sense of what can be built within this ontology has changed. Yes, I have had to drop Prospero and things ‘like’ it from the list, but I have added things to the list as well, such as the origins of human thought and attractor nets. On the whole, my sense is that the space of possible constructs has grown larger and more various.
For a more detailed look, see:
Ontology in Cognition: The Assignment Relation and the Great Chain of Being, Working Paper, November 12, 2012, https://www.academia.edu/37754574/Ontology_in_Cognition_The_Assignment_Relation_and_the_Great_Chain_of_Being
Ontology of Common Sense, in Hans Burkhardt and Barry Smith, eds. Handbook of Metaphysics and Ontology, Muenchen: Philosophia Verlag GmbH, 1991, pp. 159-161, https://www.academia.edu/28723042/Ontology_of_Common_Sense
Ontology in Knowledge Representation, Working Paper, 1987, https://www.academia.edu/238610/Ontology_in_Knowledge_Representation
Ontological Cognition, Working Paper, November 12, 2012, https://www.academia.edu/7931749/Ontological_Cognition