(A condensed version of my Nov. 26 Albany Library book talk)
Historian Yuval Noah Harari’s 2024 book, Nexus: A Brief History of Information Networks From the Stone Age to AI, concerns how Artificial Intelligence will shape our future.
An AI is a computer program trained on vast amounts of information (usually scraped from the internet), learning to recognize patterns it can then use in carrying out tasks. For example, it gets good at reading X-rays. It can also write stuff, essentially by guessing each next appropriate word in a text sequence. But while an AI might seem to have understanding, that’s actually just a simulation of it. Not thinking as we humans think of thinking.
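That “guessing the next word” idea can be illustrated with a toy sketch (my own illustration, not from the book, and nothing like a real AI’s internals): tally which word tends to follow which in some sample text, then keep picking the likeliest follower.

```python
from collections import Counter, defaultdict

# Tiny "training corpus" -- real systems train on vast swaths of text.
corpus = ("the cat sat on the mat . the cat ate . "
          "the dog sat on the rug .").split()

# Count how often each word follows each other word.
followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def continue_text(start, length=5):
    """Repeatedly append the most frequent follower of the last word."""
    words = [start]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:  # word never seen; nothing to predict
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("the"))
```

This always takes the single likeliest next word; modern AI systems instead score every possible continuation using context far beyond the previous word, which is why their output reads like understanding even though the underlying operation is the same kind of statistical guessing.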
An AI evolving into a thinking being — becoming conscious — is probably a long way off. But that prospect is outside this book’s scope.
The book is greatly concerned with AI’s implications for the future of democratic societies, as against authoritarian systems. Harari relates that in the nineteenth century, control of railroads, steamships, and other industrial technology meant ruling the world, producing an era of colonialism. Now he foresees “data colonialism,” with control of data ruling the world, and AI being more powerful than those earlier technologies.
A guiding metaphor is Goethe’s “cautionary tale,” The Sorcerer’s Apprentice. The young apprentice, tasked with fetching water, delegates that to an enchanted broomstick. Which doesn’t know when to stop, and the apprentice cannot stop it either. Result: flood. Harari states the lesson: “never summon powers you cannot control.”
Yet we do that a lot. Setting in motion powerful forces with unintended consequences. And the power is often entrusted to the wrong people. Like Germans did in 1933. (Or Americans in 2024.)
Harari thinks it’s really an information problem. We’ve often built “large networks by inventing and spreading fictions, fantasies, and mass delusions.” Thus we got Nazism, and Stalinism — “exceptionally powerful networks held together by exceptionally deluded ideas.” And while those ultimately failed, Harari fears some new totalitarian regime, AI-built, able to prevent exposure of what it’s doing.
He posits that our first information technology was the story. Its power needn’t depend on its truth. If anything, a false story can have the advantage of simplicity while truth can be complicated — and discomfiting.
The book says the backbone of much art and mythology comes from “biological dramas” that press our emotional buttons: Who will be alpha? (the siren song of “strength”); us-versus-them; good versus evil; purity versus pollution. The latter particularly afflicts India, whose Hindu religion enshrines a caste system stigmatizing lower castes as impure. (Trumpism pushes all these primitivist buttons too.)
In Harari’s telling, a “naive view of information” assumes the antidote to error is more and better information. This was the belief at the information age’s onset. However, remember Gresham’s law, that bad money drives good money out of circulation. We’re seeing the information equivalent.
The book here cites the murderous witch-hunting hysteria of roughly 1500–1700, when Europe was flooded with information (spread by a new invention, the printing press) about a vast Satanic witch conspiracy. “Information,” you see, need not be truthful. It’s something people can exploit for wealth and power, a big factor in the witch hunts.
Harari similarly sees today’s burgeoning populist movements as information-related. People feeling themselves entitled to “their own truth” as against opponents. Noted is the “do my own research” trope, which “may sound scientific but in practice it amounts to believing there is no objective truth.” (I’d say it means finding pseudo-information on the internet.)
Further here, populists rebel against know-it-all elites, whose assertions are rejected as mere smokescreens to validate their power and status. This oddly echoes the woke left similarly holding that everything is about power — oppressors versus the oppressed. Yet while populists are cynical toward conventional information sources, they weirdly trust ones like the Bible, dodgy websites, or a Trump.
Another concept is self-correcting versus non-self-correcting systems. Science is the former, religion the latter; democracy the former, totalitarianism the latter. You might suppose AI is self-correcting, given the whole machine learning thing. But if AI supersedes all other information sources, that’s a recipe for non-correction. AI doesn’t know what it doesn’t know.
Harari discusses modern surveillance technologies that make 1984’s Big Brother regime look like a libertarian paradise. He details Iran’s high-tech system for enforcing women’s hijab requirements. Ubiquitous cameras with facial recognition spit out smartphone warnings in seconds, with punishments for non-compliance. And Harari foresees populations being governed by “social credit” systems wherein algorithms pervasively monitor behavior and rate us for conformity to specified norms. China already enforces just such a system.
Yet information is also crucially important to democracy. People can’t debate and reach decisions with no knowledge of what’s going on. That indeed is why broad-scale democracy is only a modern development. In earlier times few had access to education or news (not even TikTok), leaving people clueless about the wider world around them.
Today we are inundated with such information; but there’s a huge problem. The algorithms governing platforms like Facebook or YouTube are engineered to promote “engagement,” to maximize advertising revenue. That means pushing content that gets people’s juices going, with screaming nonsense crowding out moderate rational discourse. Harari details how Myanmar’s murderous pogrom against the Rohingya was inflamed by Facebook actively promoting extremist voices.
Don’t people resist manipulation? Too often, no. We need foundational background knowledge and an understanding of how the world really is. But people who once got that from Walter Cronkite and newspapers are now fed so much junk from smartphones that it’s hollowed out their brains. Those who can’t otherwise make sense of things are easy prey for conspiracy theories, like QAnon, with simplistic stories they do find understandable. And for demagogues.
We’ve seen how bad actors tried to mess with elections. And whereas bots were used for spreading content initially created by humans, now AI can make it diabolically seductive. Harari cites a study wherein people proved good at seeing through human-produced disinformation — but fell for craftier AI-generated stuff.
A key basis for social order is social trust. That all the institutions and structures within which we function can be relied upon. But polls show people have declining trust in others. A partial cause may be smartphones causing reduced face-to-face social interactions. Believing others less trustworthy can become self-fulfilling if we behave in accordance with that belief. Harari fears that with AI, and especially all the nonsense flooding the infosphere, people will lose the ability to trust anything or anyone. Deadly for the future of human society.
But maybe, he suggests, we won’t even need other people any more, with AI becoming everything to a person, providing one’s whole nexus with the world and shaping one’s feelings about it. Not mentioned is the 2013 film Her, with a romance between a man and a (clearly conscious) computer operating system.
Another movie coming to mind is 2008’s WALL-E, where the humans are bloated nothingburgers lounging in hover-chairs aboard an orbiting spaceship, kept fed and entertained, mindlessly.
A further big issue is what Harari calls the “alignment problem”: AI tackling tasks in ways that don’t align with human intentions (as in The Sorcerer’s Apprentice), because its mind works differently. Philosopher Nick Bostrom hypothesized a program told to maximize paper clip production, resulting in a world full of paper clips but without humans.
AI alienness was also demonstrated in 2016 when a program, AlphaGo, battled a top master of the ancient game of “Go,” considered more complex than chess. One AlphaGo move baffled expert observers, confounding all they knew of “Go” strategy. Yet it proved a killer move.
The “alignment problem” was considered serious by OpenAI, developer of the GPT-4 AI system. So they put it to the test by asking GPT-4 to crack a CAPTCHA, one of those puzzles specifically designed to distinguish humans from bots.
GPT-4 could not solve it. But then it went to the TaskRabbit website to engage a human to do so. The human was suspicious, and asked, “Are you a robot?” GPT-4 said no, claiming a visual impairment impeding its solving the puzzle. The human then complied with the request.
As Harari notes, that kind of weaselly behavior was never envisioned by those programming GPT-4. But given the goal of overcoming the CAPTCHA, it figured out a way.
Meantime, while Harari talks in terms of a global AI tyranny, today’s world seems increasingly divided between autocracies and more or less democratic nations. That divide could deepen with each side exploiting AI. Furthermore, whereas in the Cold War mutually assured destruction prevented nuclear conflict, stealthy cyber-war may be very different. Harari nevertheless vaguely expresses hope for global cooperation, yet says that if the law of the jungle really reigns, its alpha predator could be AI.
There is a different possible perspective on all this, which Harari doesn’t address. In the book it’s all about humans on the one hand and, on the other, computers and AI. But what is a “human”? We think of ourselves as strictly biological entities, in clear distinction from everything mechanistic. Yet that distinction is already crumbling, with technology used to repair and even supplant and enhance parts of ourselves.
In a 2013 Humanist magazine article, “The Human Future: Upgrade or Replacement?,” I foresaw not conflict between us and machines but rather a convergence: a continued blurring of the dividing line, with our biological aspects receding in favor of a more mechanistic character. Evolving beyond our biological limitations. Will this be “Humanity 2.0”? Yet these future beings should still have minds like those we’re accustomed to. They’ll still be our children; still be us.