Tuesday, April 29, 2025

A bunch of tulips

The concept of superintelligence just isn't that useful

Dwarkesh Patel interviews Ege Erdil and Tamay Besiroglu, co-founders of Mechanize, a startup dedicated to fully automating work. Before founding Mechanize, Ege and Tamay worked on AI forecasts at Epoch AI. [Dwarkesh is an angel investor.] The interview runs a bit over three hours and covers a lot of ground. This is the section on superintelligence, a useless concept if ever there was one.

Dwarkesh Patel 02:29:48

I get your argument that thinking about the economy-wide acceleration is more important than focusing on the IQ of the smartest AI. But at the same time, do you believe in the idea of superhuman intelligence? Is that a coherent concept, in the way that you don’t necessarily stop at human-level Go play and just go way beyond it in Elo score? Will we get to systems that are like that with respect to the broader range of human abilities? And maybe that doesn’t mean they become God, because there’s other ASIs in the world. But you know what I mean, will there be systems with such superhuman capabilities?

Tamay Besiroglu 02:30:27

Yeah I mean I do expect that. I think there’s a question of how useful is this concept for thinking about this transition to a world with much more advanced AI. And I don’t find this a particularly meaningful or helpful concept.

I think people introduce some of these notions that on the surface seem useful, but then actually when you delve into them it’s very vague and kind of unclear what you’re supposed to make of this. And you have this notion of AGI which is distinguished from narrow AI in the sense that it’s much more general and maybe can do everything that a human can do on average. AI systems have these very jagged profiles of capability. So you have to somehow take some notion of average capabilities and what exactly does that mean, it just feels really unclear.

And then you have this notion of ASI, which is AGI in the sense that it’s very general but then it’s also better than humans on every task. And is this a meaningful concept? I guess it’s coherent. I think this is not a super useful concept, because I prefer just thinking about what actually happens in the world. And you could have a drastic acceleration without having an AI system that can do everything better than humans can do. I guess you could have no acceleration when you have an ASI that is better than humans at everything, but it’s just very expensive or very slow or something. So I don’t find that particularly meaningful or useful. I just prefer thinking about the overall effects on the world and what AI systems are capable of producing those types of effects.

Dwarkesh Patel 02:32:06

Yeah, I mean one intuition pump here is: compare John von Neumann versus a human plucked from the standard distribution. If you added a million John von Neumanns to the world, what would the impact on growth be as compared to just adding a million people from the normal distribution?

Ege Erdil 02:32:25

Well I agree it would be much greater.

Dwarkesh Patel 02:32:27

Right. But then there are the Moravec’s paradox-type arguments that you made earlier, that evolution has not necessarily optimized us for that long along the kind of spectrum on which John von Neumann is distinguished from the average human. And given that already within this deviation you get this much greater economic impact, why not focus on optimizing further along this dimension that evolution has not optimized that hard on?

Ege Erdil 02:32:51

I’m not saying we shouldn’t focus on that. But what I would say is, for example, if you’re thinking about the capabilities of Go-playing AIs, then the concept of a superhuman Go AI, yeah, you can say that is a meaningful concept. But if you’re developing the AI, it’s not a very useful concept. If you just look at the scaling curve, it just goes up and there is some human level somewhere. But the human level is not privileged in any sense. So the question is, is it a useful thing to be thinking about? And the answer is probably not; it depends on what you care about. So I’m not saying we shouldn’t focus on trying to make the system smarter than humans are. I think that’s a good thing to focus on.

Dwarkesh Patel 02:33:31

Yeah, I guess I’m trying to understand whether we will stand in relation to the AIs of 2100 the way that humans stand in relation to other primates. Is that the right mental model we should have, or will we have a much greater familiarity with their cognitive horizons?

Tamay Besiroglu 02:33:49

I think AI systems will be very diverse, and so it’s not super meaningful to ask something about this very diverse range of systems and where we stand in relation to them.

Dwarkesh Patel 02:33:59

I mean, will we be able to cognitively access the kinds of considerations they can take on board? Humans are diverse, but no chimp is going to be able to understand this argument in the way that another human might be able to, right? So if I’m trying to think about my place, or a human’s place, in the world of the future, what is the relevant concept: is it just that the economy has grown a lot and there’s much more labor, or are there beings who are, in this crucial way, superintelligent?

Tamay Besiroglu 02:34:28

I mean there will be many things that we just will fail to understand, and to some extent there are many things today that people don’t understand about how the world works and how certain things are made. And then how important is it for us to have access or in principle be able to access those considerations?

And I think it’s not clear to me that that’s particularly important that any individual human should be able to access all the relevant considerations that produce some outcome. That just seems like overkill. Why do you need that to happen? I think it would be nice in some sense. But I think if you want to have a very sophisticated world where you have very advanced technology, those things will just not be accessible to you. So you have this trade-off between accessibility and maybe how advanced the world is. And from my point of view I’d much rather live in a world which has very advanced technology, has a lot of products that I’m able to enjoy, and a lot of inventions that I can improve my life with, if that means that I just don’t understand them. I think this is a very simple trade that I’m very willing to make.

Claude 3.7 describes a country scene along with three variations created by ChatGPT

Sometime in the last two or three years I began thinking about the fact that the same transformer architecture that is used to create large language models (LLMs) is also used to create images. In the case of language models, the learning procedure involves predicting the next token. In image models it’s about predicting the next pixel. Thus, on some level and in some way a written text and a visual image can be treated as the same kind of thing.

So, how can we think about a written text as some kind of visual object? That’s really quite simple. Imagine that each word or each character is a colored bead. If you think at the word level, you’ll need lots of different colors, tens of thousands of them. If you think at the character level, a much smaller number of colors will do: 52 for the letters (upper and lower case), one for a space, and on the order of a few dozen for punctuation marks. Whatever level you choose, a text becomes a collection of colored beads. Once you’ve done that it’s easy to arrange the text into one long string and feed all those strings to the transformer. That is to say, the transformer architecture treats verbal texts as one-dimensional visual objects.
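Here’s a minimal sketch of the bead idea in Python. The palette and the sample text are mine, purely for illustration; no real tokenizer uses exactly this vocabulary:

```python
import string

# A hypothetical character-level "palette": 52 letters, a space, and a
# few dozen punctuation marks -- each character gets one bead color.
palette = string.ascii_letters + " " + string.punctuation
color_of = {ch: i for i, ch in enumerate(palette)}

def text_to_beads(text):
    """Turn a text into one long string of bead colors (integer IDs)."""
    return [color_of[ch] for ch in text if ch in color_of]

print(text_to_beads("In Xanadu did Kubla Khan"))
# [34, 13, 52, 49, 0, 13, 0, 3, 20, 52, ...]
```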

We don’t have to do anything fancy to treat images as strings of pixels of various colors. Just rasterize the images and code the colors with some appropriate convention and you’re done.
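And a companion sketch for the image side, using the Pillow library (the file name is made up; any raster image would do):

```python
from PIL import Image  # Pillow

# Quantize the picture to a 256-color palette, then read the pixels off
# in row-major order: the image becomes one long string of color IDs,
# just like the text above.
img = Image.open("country_scene.png").quantize(colors=256)
beads = list(img.getdata())  # 1-D sequence of palette indices
print(beads[:20])
```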

Now, all we have to do is come up with a way to think about how to translate back and forth between these two forms of representation so that we can use language to direct a bot to create an image and use a different bot to describe an existing image. I’ll leave thinking about that as an exercise for the reader, noting only that I’ve got a good number of posts where Claude describes images, and a few where ChatGPT modifies existing images according to simple verbal instructions.

We’re going to see both operations in this post. First I’m going to ask Claude 3.7 to describe an image. Then I’m going to post three copies I had ChatGPT make, each in a different style. Then I’m going to post the conversation I had with Claude about those copies. As always, my comments are in bold face while Claude’s are in plain face.

Claude describes a simple country scene

Describe this picture.

This is a colorful drawing of a whimsical village scene with vibrant houses nestled in rolling green hills with mountains in the background. The houses have distinctive architectural features - some with tall spires, others with multiple windows, in bright colors including red, orange, yellow, and blue.

In the foreground is a flower garden full of colorful tulips in red, yellow, orange, and purple, along with blue forget-me-nots or similar small blue flowers. The drawing has a folk art or naive art style with bold colors and simplified forms.

The scene has a storybook quality with its cheerful colors and quaint buildings arranged on the hillside. There appears to be a small signature or date in the bottom right corner of the image, though it's not entirely clear what it says.

The drawing appears to be done with markers or colored pencils on paper, with a charming handmade quality to the artwork.

Right. I belong to an art club where we start with a line drawing and then color it as we wish. Some club members use colored pencils, others use markers. I used a marker.

That's wonderful! Art clubs are such a great way to express creativity in a supportive community. I like how everyone starts with the same line drawing but produces unique interpretations through their color choices and techniques. Your marker work gives the piece a particularly vibrant and bold quality - the colors really pop against each other, especially in the flower garden with those bright tulips against the green background.

Does your art club meet regularly to work on these projects together? The village scene has such a cheerful, storybook quality that feels uplifting.

We meet twice a week.

ChatGPT’s three copies

Note: I did not have ChatGPT make these copies in independent trials. It made all of them in the same session. It did the illuminated manuscript first, then the Mughal miniature, and the Japanese print last. The last two copies have a feature that's missing from the first.

Count the number of flowers and compare with the original. Do the same for the buildings. Notice the small structure at the lower right. That's not in the original image, but if you look closely at the original you'll see portions of some kind of wall peeking out between the leaves of the flowers. This is the third copy that ChatGPT made, but it's the first one Claude commented on.

In defense of an online life

That's the title of a new post by Tyler Cowen over at Marginal Revolution. It's derived from a piece Cowen published at The Free Press, Why I (Often) Choose My Phone Instead of Flesh and Blood. Here's part of what Cowen posted at Marginal Revolution:

The internet, in other words, has invented a new means of human connection, characterized by “the perfect people for me.” For me, it’s people who are into analytical thinking and tech and AI and music and economics, and much more. For others? It can be Survivor obsessives or vegans or knitters or Survivor obsessives who are vegan and love to knit. The point is that there is a niche for all 8 billion of us. And now we know where to find each other.

And it turns out we value that very, very highly. So highly that we are willing to obsess over our little devices known as smartphones.

Here's a comment I posted at Marginal Revolution:

Makes perfect sense to me. The online world is well-suited to intellectual life, though you have to pick and choose carefully. The fundamental point is that, to a first approximation, you can communicate with any thinker who is online without having to worry about institutional barriers.

That's what makes the online world essential to me. I am by temperament an intellectual explorer, going boldly where none have gone before. What's utterly remarkable is that's how I was trained as well, first at Johns Hopkins and then later at SUNY Buffalo. Why's that remarkable? Because going boldly is NOT what the academic world is about, though that's what it thinks it's up to. I mean, how do you train someone to do that, to be a Leif Erikson, Marco Polo, Christopher Columbus, or Henry Morton Stanley? There's no training for that, no way to institutionalize it.

No, the academic world starts when the territory has been explored and preliminary maps drawn up. Now, a few academics will slip through the cracks and explore new territory. But most do not. Some particularly venturesome academics will take those preliminary maps and go in there and start building infrastructure: roads, water wells, logging operations, etc. A less venturesome group will buy plots of land and start building on them. At the other end of the spectrum we have brokers who traffic in existing properties.

I see Tyler as a diversified real estate investment firm. A very few of his Emergent Ventures picks will explore new territory, but he has no idea which ones, and since many/most are quite young and not yet mature, they are somewhat insulated from his own preferences and mood affiliations. For the rest of it, he's invested all over the place: every continent, every type of terrain, at various levels of development.

As for me, Johns Hopkins let me undertake a structuralist analysis of "Kubla Khan" and awarded me a master's degree for the result, an analysis of the poem unlike any that had been done at the time (I eventually published a somewhat revised version, Articulate Vision: A Structuralist Reading of "Kubla Khan", 1985). Though I didn't think about it this way at the time, it's clear in retrospect that that's when I left the profession; I "lit out for the territory," to borrow a phrase from Mark Twain. Which implies, then, that I was doing postdoc work for my Ph.D. at SUNY Buffalo. I was in the English Department, but my real teacher was David Hays, a computational linguist in Linguistics. My 1978 dissertation, "Cognitive Science and Literary Theory," was as much a quasi-technical exercise in knowledge representation as it was literary theory. I used Shakespeare's Sonnet 129, "The Expense of Spirit," as an example and got two publications out of it: Cognitive Networks and Literary Semantics (1976) and Lust in Action: An Abstraction (1981). I took a faculty job at Rensselaer Polytechnic in 1978, failed to get tenure, and left in 1985. I spent two years looking for another academic post and failed. I've been a ronin scholar ever since.

I kept on publishing, did two books (Visualization: The Second Computer Revolution, 1987, and Beethoven's Anvil: Music in Mind and Culture, 2001), and a bunch of articles. By the early 2000s the web had developed to the point where I could re-engage with the larger world – actually, that got started for me in the mid-90s with various email-mediated forms. There's no point in detailing all that, but if you're curious, I talk about it at some length here: Personal Observations on Entering an Age of Computing Machines.

Monday, April 28, 2025

Yum!

When AI became decoupled from any understanding of the human mind

I’ve been thinking a bit about the history of AI and how it led to the current situation, where AI has become effectively decoupled from any attempt to understand the human mind. Now, I’m not thinking about AI across the board, but rather the regnant forms of machine learning that dominate the commercial market and so dominate current discussions about the implications of AI. Those discussions have become effectively divorced from the study of the human mind in the cognitive sciences, and from the humanities as well, something I’ve discussed in an article at 3 Quarks Daily: Aye Aye, Cap’n! Investing in AI is like buying shares in a whaling voyage captained by a man who knows all about ships and little about whales.

AI began as an attempt to simulate the human mind. The people who did the work also thought about the mind. AI work on chess led to psychological investigation of how humans played chess. The most commercially successful early AI programs were so-called expert systems, from the 60s on into the 80s. To develop such a system you would ask human experts to think through problems out loud so you could record their thoughts. The recordings would then be transcribed. This developed into a systematic methodology called “protocol analysis.” My point is simple: this AI work was closely linked to work on human thinking.

The big breakthrough in machine learning came in 2012 with a machine vision system called AlexNet, developed by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton. It was based on something called a convolutional neural network. CNNs are built on convolution, an operation central to Fourier analysis, which had been used in understanding the visual system going back to the late 1960s. So, at this point the technical basis of the artificial system remained in touch with the study of human perception.

That changed with the development of GPTs. The technical basis of those systems had nothing to do with the scientific study of language and cognition. With GPT-3 things exploded. Its language capacity was far beyond anything else that had been done. The field quickly figured out that it could improve performance simply by scaling up: more data, more compute, more parameters. Doing this didn’t require deeper insight into language and thought. It required two things: 1) knowledge of how to scale systems up, a highly developed craft skill, and 2) the money needed to pay for the increased resources. The enterprise was now effectively decoupled from any attempt to understand the human mind.

Of course, no one’s happy that the inner workings of LLMs are mysterious. It makes so-called “alignment” a hellish problem. At the same time, the fact of that mystery makes it easy to imagine whatever you wish about the technology. Thus the black box nature of these systems is convenient for the generation of hype. You can imagine future capacities to be whatever you will. Reality is not going to get in your way, at least not in the present.

A new green day in the park

Ignorant White House techbros are destroying our seed corn

David Singer, White House Tech Bros Are Killing What Made Them (and America) Wealthy, NYTimes, April 28, 2025.

What’s seed corn? It’s corn the farmer preserves through winter so it can be used to seed next year’s crop. It’s a metaphor.

One would think that venture capitalists, especially those with ties to the Trump administration, would be the most forceful champions of America’s research universities, given how much these institutions have fueled our careers and fortunes. Instead, many of us are scratching our heads as to why officials from the industry have turned their backs while the government chaotically terminates funding for this work. Harvard and Columbia have been in the headlines, but the hatchet has also fallen on Michigan State in the Midwest and the University of Hawaii farther west. It is as if the V.C.s in Washington had just enjoyed a fine meal in Silicon Valley and decided to skip out on the check.

Breakthroughs in technology are grounded in a fundamental truth: that transformative innovation often begins with a new understanding of the natural world at its most basic level. And this understanding almost always emerges from challenging accepted wisdom. That requires space for free inquiry and a culture that protects it, something that Vannevar Bush understood in his landmark 1945 report “Science, the Endless Frontier,” where he argued that basic research generates “scientific capital” — the foundation for practical applications, new products and new processes. Even patent law reflects this principle, requiring that an invention be “nonobvious to one skilled in the art.” This is the crux of the matter.

Drawing a causal link between federal investment in basic science research and the rise of the venture capital industry is about as difficult as reading a map. The geographic centers of venture capital and the industries it has spawned overlap precisely with the locations of our great research universities. Think of Cambridge and Route 128 in Massachusetts (Massachusetts Institute of Technology and Harvard), or the stretch from San Jose to San Francisco (Stanford and University of California, San Francisco and Berkeley). This is no accident. It’s why world leaders visit these places to understand how we do it. It is also why Mr. Vance left Ohio for Yale and then high-tailed it to Silicon Valley for a job.

Sunday, April 27, 2025

First Iris of the year (taken on April 26)

Academia Shrugged

Well, not quite. But who knows how this will develop. Stephanie Saul and Alan Blinder for the NYTimes (April 17, 2025): Emerging From a Collective Silence, Universities Organize to Fight Trump.

The Trump administration’s swift initial rollout of orders seeking more control over universities left schools thunderstruck. Fearing retribution from a president known to retaliate against his enemies, most leaders in higher education responded in February with silence.

But after weeks of witnessing the administration freeze billions in federal funding, demand changes to policies and begin investigations, a broad coalition of university leaders publicly opposing those moves is taking root. The most visible evidence yet was a statement last week signed by more than 400 campus leaders opposing what they saw as the administration’s assault on academia.

Although organizations of colleges and administrators regularly conduct meetings on a wide range of issues, the statement by the American Association of Colleges and Universities was an unusual show of unity considering the wide cross-section of interests it included: Ivy League institutions and community colleges, public flagship schools and Jesuit universities, regional schools and historically Black colleges.

“We speak with one voice against the unprecedented government overreach and political interference now endangering American higher education,” the statement said.

Although it contained no concrete action, and what’s next was unclear, the collective stance reflected a group more galvanized than ever to resist.

400 may seem like a lot, but given that there are roughly 6,000 colleges and universities in the country, it's not. Still, it's a start. At least someone is trying to say NO.

As for the title of my post, I'm referring to Ayn Rand's last novel, Atlas Shrugged:

The book depicts a dystopian United States in which heavy industry companies suffer under increasingly burdensome laws and regulations. Railroad executive Dagny Taggart and her lover, steel magnate Hank Rearden, struggle against "looters" who want to exploit their productivity. They discover that a mysterious figure called John Galt is persuading other business leaders to abandon their companies and disappear as a strike of productive individuals against the looters. The novel ends with the strikers planning to build a new capitalist society based on Galt's philosophy.

We're now living a dystopian novel in the United States. As for the universities withdrawing, what would that mean? I suppose it would mean the faculty. Where would the faculty withdraw to? Greenland? The total population of Greenland is about 60,000. There are roughly 1.5 million faculty in American colleges and universities. They aren't going to Greenland. Where could they go? Well, since the Feds appear ready to abandon Africa, they could go there. Maybe they could hide out in Wakanda behind the stealth technology provided by the Marvel Cinematic Universe. Who knows, maybe they'd even hook up with The Mystic Jewels for the Propagation of Grace, Right Living, and Saturday Night through Historical Intervention by Any Means Necessary.

Tulips gone wild!

Hollis Robbins worries about the moral emptiness of the AI business

She's reviewing a current book:

In 2018, I was in the audience at a Pitch Day event in San Francisco as two computer science majors pitched to potential investors an app that allowed them to jump the beer line at the stadium so they wouldn’t miss any of the game. The deck was crisp and compelling. The young men were good looking, confident, and articulate. The idea? I left while everyone was applauding.

I thought back to this moment while reading Alexander Karp and Nicholas Zamiska’s bracing new book, The Technological Republic: Hard Power, Soft Belief, and the Future of the West (Crown Currency, 2025). I thought of the pitch again this week when I saw a NYT front page story about Phoebe Gates, daughter of Bill Gates and Melinda French Gates, and her new online shopping tool.

What led an entire generation to spend its energies on vanities? Why is the apex of world historical advances in technology just another phone app that matches people to things (and other people) efficiently? Where is the collective patriotic fervor and moral grounding of eras past? Is the problem political? Cultural? What would it take to turn Silicon Valley’s productive energies toward the safety and flourishing of our nation?

These are just a few of the provocative questions raised by Karp, co-founder of Palantir, and his co-author Zamiska, Palantir’s legal counsel and head of corporate affairs, in their bestselling book. The growing praise suggests that these questions have been pressing for some time.

In their call for the shiny app-building sector to put aside childish things and turn toward more serious and patriotic endeavors, the authors might have also noted the damage done to the higher education market.

Robbins spends some time taking a careful look at the book, but here's her final paragraph:

The Technological Republic offers a compelling diagnosis of the technology sector’s drift from national purpose toward frivolous consumerism. Yet in calling for a renewed technological republic built on ownership and cultural cohesion, Karp and Zamiska leave a crucial question unanswered: what role will the humanities, the disciplines that cultivate "truth, beauty, and the good life," play in this reimagined future? If shared culture, language, and storytelling are as essential to national solidarity as the authors argue, then those who teach these traditions deserve more than a footnote in their vision. Without integrating what we do into the ownership culture, Karp and Zamiska risk reproducing the problem their book identifies: a society rich in technological capacity but impoverished in meaning, purpose, and collective identity.

I offered a long comment:

I started reading this, Hollis, and started getting impatient about a quarter to a third of the way in, so I did what I often do in these situations. I skipped all the way to the end to see where this is going. “Yet in calling for a renewed technological republic built on ownership and cultural cohesion, Karp and Zamiska leave a crucial question unanswered: what role will the humanities, the disciplines that cultivate ‘truth, beauty, and the good life,’ play in this reimagined future? If shared culture, language, and storytelling are as essential to national solidarity as the authors argue, then those who teach these traditions deserve more than a footnote in their vision.” That’s all I need. I am quite willing to assume that you are a competent reader of this book and so you rummaged around between the lines looking for at least some scraps of awareness. As far as I can tell, the people who build this technology, who fund it, who rhapsodize about how wonderful it is, and who natter on about the need to build, they’re narrowly educated people who don’t know what they don’t know and are proud of it.

My standard analogy for this situation, crude as it is, is that the current AI enterprise is like a 19th century whaling voyage where the captain and crew know all there is to know about their ship. They can get more speed out of it than any other crew, under any conditions, they can tack into the wind, they can turn it, if not on a dime, at least on a $50 gold piece. If whaling were about racing, they’d win. But whaling isn’t about racing, it’s about killing whales. To do that you have to understand how whales behave, and you have to understand the waters in which the whales live. On those matters, this captain and crew are profoundly ignorant; they haven’t even sailed around Cape Horn.[1]

That’s the AI industry these days.

I got interested in the computational view of mind decades ago. Why? Because I set out to do a structuralist analysis of “Kubla Khan” and couldn’t make it work. I ended up with an analysis that didn’t look like any structuralist analysis I’d ever seen, nor any other kind of literary analysis. The poem was structured like a pair of matryoshka dolls; it looked like a pair of nested loops.

I ended up writing a dissertation which was as much a quasi-technical exercise in computational linguistics as in literary theory. I chose one of Shakespeare’s best known sonnets, 129, The Expense of Spirit, as my example, and published my analysis in the 100th anniversary issue of MLN: Cognitive Networks and Literary Semantics. That represents a serious attempt to come up with a computational analysis of a profound and deeply disturbing human experience, compulsive sexuality.

The current crew will tell you, I’m sure, that that represents old technology, symbolic technology, which has been rendered obsolete by machine learning. Guess what? David Hays (my teacher and mentor) and I both knew that symbolic technology was not fully up to the job, that it had to be grounded in something else. And we were working on something else at the time, but meanwhile we did what we could with the tools we had. My point is that in order to conduct the analysis I had to spend as much time thinking about human behavior and language as I did about the technical devices of knowledge representation. Whatever success I may have had in that work, I paid for it in thinking about the human mind.

The current regime is quite different. They don’t have to think about the human mind at all. If Claude is capable of writing decent prose, well, that didn’t cost the folks at Anthropic anything. They got it for nothing. And so that’s the value they place on the human mind. For them I’m afraid “truth, beauty, and the good life” are just empty words they trot out for the hype. Theirs is an Orwellian technology. They’re stuck on the wrong side of 1984.

[1] As I’m sure you know, Marc Andreessen likes to use whaling as a precedent for venture capital. Out of curiosity, I did a little digging and found an article by Barbara L. Coffee in the International Journal of Maritime History, “The nineteenth-century US whaling industry: Where is the risk premium? New materials facilitate updated view.” It’s quite interesting. Those whaling captains kept good records, and those records have been preserved. After examining the records of 11,257 voyages taken between 1800 and 1899, Coffee concluded: “During the nineteenth century, US government bonds, a risk-free asset, returned an average of 4.6%; whaling, a risky asset, returned a mean of 4.7%. This shows 0.1% as the risk premium for whaling over US government bonds.” What are the chances that current investment in LLMs will do better? Oh, there will be some success, but averaged across the whole industry and over the long term?

Saturday, April 26, 2025

Today's visit to Mickey D's

The Rise of the Woke Right | Glenn Loury & Daniel Bessner

0:00 Intro
1:10 Glenn meets the young Marx
2:32 The new wokeness of the political right
8:30 What is the democratic counterpart of foreign policy and national security run by experts?
17:04 Russia’s motivations: Fear of NATO or imperial ambitions
21:41 Danny: Zelenskyy made a huge misstep in the Oval Office
26:12 Does China have global security ambitions?
35:09 Ending the Middle East’s post-Ottoman redefinition
39:42 Danny: It’s gotta be rough to be Peter Beinart
43:46 Why we don’t need Hitler to explain Trump’s authoritarianism
56:29 Danny: We’ve been in the midst of a constitutional crisis since the 1940s
57:58 Can the nuclear taboo hold?

Glenn Loury (Brown University, Self-Censorship) and Daniel Bessner (University of Washington, American Prestige, Democracy in Exile). Recorded April 19, 2025.

On the streets of Mumbai [Desi beats]

YouTube page:

We put together a DJ set, shot in a moving auto rickshaw, winding through the lively streets of Mumbai!

Auto rickshaws in our city are like mobile party units, with drivers blasting everything from Bollywood hits to retro classics, turning a short ride into a mini club on wheels. This video pays an ode to all the rickshaw drivers, working hard to make our rides not only pleasant, but a bit more entertaining!

Behind the decks, is the talented singer-DJ-producer from Mumbai - Suggahunny aka Desiree Saldanha. She describes her sound as equal parts playful and psychoactive, all with one goal - to get you moving. Join her on an exciting musical joyride spinning the block while dishing out a high-energy mix of delicious desi-leaning tunes and then some.

At Gently Altered, our effort is to design and produce experiences (and from today, content) that is right ahead of the curve. We have some great things in store for you, stay peeled!

Friday, April 25, 2025

Friday Fotos: ChatGPT changes Manhattan's West Side

In the last two years I’ve spent a lot of time having ChatGPT create new stories from old. I’d give it a story and ask it to create a new story based on it, but with a difference that I specified. It could make any other changes it wished. I wrote that work up in a working paper: ChatGPT tells stories, and a note about reverse engineering: A Working Paper, Version 3.

I’m now doing the same thing with images. Here’s a photo I shot of Manhattan’s West Side.

Here are two copies that ChatGPT made. First I asked it to have an airship dock at the top. It appears to me that it had the airship dock at the top of a radio antenna, rather than at the docking arm, which was never used as such. It made other changes as well. Notice, for example, where the Chrysler Building is, or what appears to be the Chrysler Building, which wasn’t even visible in the original photo. Note also that it kept the airship.

I had it repeat the process, but this time making the Empire State Building into a launch pad for a giant rocketship. It did that, but also kept the airship. The complement of buildings has changed a bit as well. The Chrysler Building lookalike is gone.

I asked it to get rid of the airship. Which it did. The surrounding cityscape seems a bit different.

So I asked it to dump the airship. Which it did. But the rocket launcher has changed and the buildings appear to have been ‘re-skinned.’

Here’s another photo of the West Side. This one is centered on Hudson Yards, which is off to the left in the previous series.

Here’s what I told it to do in the copy: “Replace the buildings in the center along the shore with a space port that might be used to ferry passengers to and from space stations in orbit around the earth. Make any other changes you think are needed to make the photo consistent with that.” Here’s the result:

I have no idea where those rockets are being launched from.