I think our entire ontology for how we talk about and conceptualise A[G]I is confused. And I wouldn't be surprised if in ten years we will look back at the discourse today and laugh at how primitive some ideas are. A few hot and uncertain takes:
— Séb Krier (@sebkrier) September 18, 2025
From deeper into this long tweet:
The way people talk about future AIs/AGIs feels like a category error. Sometimes they reify future systems as self-sovereign entities with their own goals and incentives, a different species that we need to learn to co-exist with. I think that's not impossible, and I used to be a lot more sympathetic to this view, but I'm a lot less certain now and it's certainly not self-evident. Agents can still be tools, and tool agents that operate along timelines don't need to necessarily be 'separate species'-like. [...]
To me at least, AGI will likely be a distributed ecosystem of different models, built by different companies and state actors, with different capabilities, architectures, and incentive structures.
