Just around the corner from here, Malcolm Murray has a nice article in 3 Quarks Daily, O3 and the Death of Prediction. Here are some excerpts from his article:
A lot of the focus over the past years has been to try to pinpoint when AI will be able to do certain tasks. Various surveys have been run estimating when AI will be able to write a best-selling novel or win math competitions. [...] The real game is to prepare for advanced AI capabilities, whether they are called AGI or not. o3 shows two things clearly - that AI evolution seems set to continue apace, and that we can not predict it and should not attempt to.
Note that the death of prediction in the AI space does not mean the death of forecasting. Forecasting will still have its place. Whereas prediction is the non-scientific, crystal ball, finger-in-the-air activity beloved by media pundits, forecasting is a more scientific endeavor, which will still be valuable. Forecasting, especially in the form invented by Philip Tetlock - Superforecasting (disclosure: I am a Superforecaster) - means careful thinking regarding the applicability of historical base rates and adjusting them based on clear current trends. This can yield still very accurate forecasts of future events, at least a few years out.
But the fields of AI safety and AI risk management should turn their focus to resilience. Normally in risk management, risks are analyzed by their potential impact as well as their probability and their likely time to materialize. Focusing on resilience, however, means putting aside the probability and time frame and focusing on the impact. This is a different mindset. It is saying that we don't know if or when this risk will arise, but if the impact of the risk is large enough, we should make adequate preparations regardless. Preparation takes time, so it is high time to start.
I wrote a rather long reply:
I agree with you: we can't predict, and we should certainly give more attention to societal resilience, much more attention. But I'd like to say a word in defense of Gary Marcus and co., because we need more than societal resilience. We also need intellectual and technological resilience. Marcus is certainly calling for that.
Marcus has studied human cognition and language. Many (of us) skeptics have. We know something about how the mind works. These deep learning folks, not so much, as far as I can tell. They don't even know how their very clever devices work - and, if you've been reading 3QD, you know I'm a fan of those clever devices. I use them all the time myself.
It's like a whaling voyage captained by someone who knows everything there is to know about their ship and seamanship, but little to nothing about whales. Just because they can make their ship do fancy and unexpected things doesn't mean that sooner or later they're going to find a whale. Nor does it mean you should discount those who actually know something about whales but may not be so enamored of fancy ships.
Predicting technology developments is very difficult, as you point out. No one can do it. Prediction in the AI space, broadly considered, has been going on for a long time, and it has failed before. Let's take a quick and crude look at that history.
Back in the early days of computing, the federal government spent a lot of money on research into computer systems that could translate natural language. They were specifically interested in translating technical documents from Russian to English. This was the 1950s and 60s, and the Cold War was in high gear. So researchers made promises, and promises, and promises, and by the early 1960s those promises were looking pretty thin. So a commission was appointed to study the problem. It arrived at two conclusions: 1) There is no immediate prospect of high-quality machine translation. 2) We now have theories and models we didn't have when this work started; more theoretical research looks promising. As you can imagine, the government paid attention to the first conclusion and ignored the second. The field of machine translation was dead for lack of funds. But not completely dead. It rebranded itself as computational linguistics and continued on.
My teacher, David Hays, was one of those first-generation researchers. He headed the program at the RAND Corporation. He was on the commission that made those recommendations. And he's the one who coined the term "computational linguistics," which is how the field rebranded itself.
In the mid-1980s pretty much the same thing happened with AI. We had the so-called AI Winter. The work had been going well and commercial ventures were started, but the desired/expected results weren't forthcoming.
I figure that the so-called "design space" for this technology is huge. The early researchers in machine translation explored one region of the space and ultimately failed. Call it Region Alpha. A bit later, other researchers explored another region of the space, and they too failed. Call it Region Beta. The unexpected success of AlexNet in 2012 opened up a whole new region of the design space, one made available by the use of GPUs. Call this Region Gamma. The unexpected success of GPT-3 showed us new areas within Region Gamma, and so did o3.
I think what Marcus and others are saying is that AGI, whatever it is, isn't going to be found in Region Gamma. It's in some other region. Why are they saying that? Because they, and we, know something about language and cognition and don't believe it is to be found in Region Gamma. And while you're at it, read a paper Miriam Yevick published back in 1975; she knew something back then that I don't think even Marcus knows about. Will the grail of AGI be found in the next region, Delta, or will it be Epsilon, Zeta...? Who knows.