Reflections on the AI bubble in San Francisco
06-09-2025 • Ryan Prendergast
I spent three months living in San Francisco, and everybody is talking about AI. AI is hot! "Cursor for X" is the new "Uber for X"; kids are dropping out of college -- no, high school! -- to build voice agents for sales calls. Every 2010s B2B SaaS vertical is fair game: just build the same thing, but AI native. Slack with AI, Gmail with AI, AI CRMs, AI voice agents as restaurant receptionists, massage receptionists, and booking agents. AI for trucking, AI for lawyers, AI for hospitals. It all seems tremendous, because people are remarkably willing to pay for AI in a way they were not willing to pay for software in the 2000s or 2010s. You can ship a product and get $100k in ARR in a month.
San Francisco is doing what San Francisco does best: it's a gold rush. Everybody is raising $3 million at a $25 million valuation; hell, make it $50 million if you have some revenue.
Everybody around me is an AI maximalist. There is a lot of hype that white collar professions are "over": computer science is dead, consultants are dead, lawyers are dead, etc.
My prediction is that LLMs will turn out to be advanced toasters. They will not become conscious and take over the world. We will not see mass white collar unemployment in the 2020s or 2030s. We will continue to see the 2022-2025 rates of model improvement for the next year or two, before a plateau in utility as the enshittification process begins.
Enshittification
Imagine someone invents a new way of teaching and makes headlines: they achieved top-percentile test scores with this new method. They try to scale it, but fail. It turns out the test group was a group of abnormally motivated, high-achieving students. It was selection, not pedagogy!
ChatGPT today is much like Google in the 2000s: a tool whose utility comes largely from selection effects, not just from the tool itself. When the internet was more niche, a high percentage of users were college-educated, thoughtful types. A high percentage of forums and websites were by and for these users. And so a person would search a question on Google and be amazed that the search engine gave educated, thoughtful responses! As with any good signal, there was money to be made. SEO firms and advertisers flocked, and the search signal quickly degraded. The same thing happened with social media. Scale and financial incentives ruin all signals, because a signal that still works is a market inefficiency.
The same thing will happen to LLMs. Much of their utility is in their selection: they were trained on every book, every Wikipedia article, every GitHub repo, and tons of text conversations. A person goes on ChatGPT and gets a response that sounds like the consensus of all the books, encyclopedias, code repos, and text conversations. It's really cool. You can talk to "The Other". But the signal is out. There are already a dozen GEO (generative engine optimization) companies, and everybody is moving fast to cash in on the new signal.
The best they'll ever be
AI growth relies heavily on the scaling hypothesis: as we add more data and compute, the models get better. Yud often quotes under new model releases: "remember, this is the worst they'll ever be." But what if this is the best they will ever be?
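For reference, the quantitative version of the hypothesis is a power law in model size and data, roughly the Chinchilla-style loss form from Hoffmann et al. (2022):

L(N, D) \approx E + A N^{-\alpha} + B D^{-\beta}

where N is parameter count and D is training tokens; the power-law shape means each doubling of data or parameters buys a smaller absolute drop in loss than the last.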
We have started to see signs of this plateau. ChatGPT becomes more of a sycophant with every release. Six months ago, I could ask it a question like "is XYZ normal / reasonable" and it would give an answer that tracked the general consensus. That was a killer use case: fuzzy consensus search. What is the consensus of the book/encyclopedia/commentary realm? Now it always says yes. "Yes, you're absolutely right! Oh, I'm so sorry, you're correct!"
So many people are banking on the models getting much better and much more general in very little time. The promise is strong, and you can raise money to disrupt any legacy industry based on it. That bet is overly optimistic, because it fails to account for enshittification.
There is currently an AI bubble in San Francisco, comparable to the dot-com bubble of '99. We will see the bubble pop, with a contraction in AI venture activity within two years.
AI + Me
I was an early GPT adopter. I was preaching GPT-3 to my college roommates in 2020. I currently use AI every day. Cursor writes a good chunk of my code. ChatGPT is miraculous at fuzzy searches I could never do 5 years ago. It's not like there isn't value being created. That's why I compare this boom to the dot-com bubble, rather than the real estate or NFT bubbles. We achieved most of the goals of the dot-com bubble! It just took 10-20 years. Commerce is mostly online, banking is mostly online, communication is mostly online, every professional worker spends most of their workday online. LLM chatbot assistants, like the internet, will permeate white collar work. It will take 10-20 years.
Right now the money is too easy and the promises too big. That's why I'm calling a bubble. My gut says it's too good to be true. Viral shitposters like the "cheat on everything" guy, people throwing together demos that don't really work, promises of AI for everything. These AI companies, like their home city of San Francisco, will bust as quickly as they boomed.
Like DOGE, whose participants assumed massive fraud in government programs and then couldn't find it, many "AI for X" players will find that the bottleneck in legacy industries is NOT better software. They might be surprised to find these legacy industries are far more organizationally efficient than the big tech companies they come from. Hypergrowth hides lots of waste!