I’ve been reading perhaps more than is healthy about such things as Superintelligence and AI takeoff (gradual or FOOM!) and am wondering whether the idea of superintelligence is a 21st Century equivalent of the Philosopher’s Stone (Arabic: ḥajar al-falāsifa, Latin: lapis philosophorum) of Olde. Superintelligence is the idea of a being, generally thought of as an AI of some kind, that is more intelligent than humans are. The Philosopher’s Stone is the alchemical belief in a substance that can transmute base metals into gold.
I dropped this concern into the Twitterverse last evening and Ted Underwood observed:
The parts of thinking that are clearly scalable—speed, parallel processing, and memory—are already superhuman in our laptops. But it doesn’t make our laptops evil masterminds.
— Ted Underwood 🇺🇦 (@Ted_Underwood) April 6, 2022
Good question, thought I to myself. What are those other parts of thinking, the ones that don’t scale?
This morning Ted came back with:
You sent me down a rabbit hole and I returned with this essay, which mostly persuades me. He may slightly understate the linear scalability of things like parallel processing. But that’s not what ppl think they mean by superhuman https://t.co/t08vyRoSDV
— Ted Underwood 🇺🇦 (@Ted_Underwood) April 6, 2022
So I took a look at that article, “The Myth of a Superhuman AI,” a 2017 piece by Kevin Kelly. Very interesting. Kelly sets the stage:
Yet buried in this scenario of a takeover of superhuman artificial intelligence are five assumptions which, when examined closely, are not based on any evidence. These claims might be true in the future, but there is no evidence to date to support them. The assumptions behind a superhuman intelligence arising soon are:
- Artificial intelligence is already getting smarter than us, at an exponential rate.
- We’ll make AIs into a general purpose intelligence, like our own.
- We can make human intelligence in silicon.
- Intelligence can be expanded without limit.
- Once we have exploding superintelligence it can solve most of our problems.
In contradistinction to this orthodoxy, I find the following five heresies to have more evidence to support them.
- Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.
- Humans do not have general purpose minds, and neither will AIs.
- Emulation of human thinking in other media will be constrained by cost.
- Dimensions of intelligence are not infinite.
- Intelligences are only one factor in progress.
If the expectation of a superhuman AI takeover is built on five key assumptions that have no basis in evidence, then this idea is more akin to a religious belief — a myth.
I like that set of parallels a lot, a whole lot. It got me so excited that, rather than finish reading the article, I decided to write this post.
What’s in question is the nature of the world: What kinds of things and processes exist now or could exist in the future? The alchemical idea of a philosopher’s stone is embedded in a network of ideas about the nature of physical reality, its objects, processes, and actions. The same goes for superintelligence, which is embedded in a network of ideas about minds, brains, computers, and the future.
I know very little about the history of alchemy, but I do know that no less a thinker than Isaac Newton took it quite seriously:
Of an estimated ten million words of writing in Newton's papers, about one million deal with alchemy. Many of Newton's writings on alchemy are copies of other manuscripts, with his own annotations. Alchemical texts mix artisanal knowledge with philosophical speculation, often hidden behind layers of wordplay, allegory, and imagery to protect craft secrets. Some of the content contained in Newton's papers could have been considered heretical by the church.
By the 19th Century, however, alchemy no longer held the attention of the most serious and venturesome thinkers. But it persists in popular culture, e.g. in the Harry Potter universe or Fullmetal Alchemist.
That is to say, the idea of the philosopher’s stone didn’t disappear overnight. It was a gradual process, taking place over centuries, as the (so-called) scientific revolution radiated out from its earliest footholds in 16th Century astronomy and physics. Will the idea of artificial superintelligence undergo a similar process?
* * * * *
Question: Why are the prophets of Superintelligence more worried about the danger it might present to humanity than interested in the possibility that it will reveal to us the Secrets of the Universe? See my post from March 5, “These bleeding-edge AI thinkers have little faith in human progress and seem to fear their own shadows.”