Why Tech Bros Overestimate AI's Creative Abilities

Silicon Valley's overconfidence in the imminent arrival of Artificial General Intelligence stems from a combination of limited understanding of the humanities, an insular culture, and a business model that incentivizes exaggerated claims about AI's capabilities.

The Internet Movie Database aggregates film reviews from critics, but it also allows anyone to write a review. These are occasionally amusing in a film-snob way, because there are people who will gush about the epoch-making brilliance of, for example, horror films that topped the box office for a single weekend and then vanished, both from the charts and from cultural memory.

Take Darkness Falls, a forgettable 2003 flick about (spoilers) an evil tooth fairy. It has a Metascore of 23, an IMDb rating of 5.0, and this 10-star review from “d-maxsted.”

Darkness Falls is one and was one of those rare horror movies where all the pieces came together,the director,the crew and the performances by the actors,it simply is a rare example of a what I would consider one of the best and further more you certainly don't get many as good these days. [sic]

I was reminded of that review when I read Kevin Roose’s article in The New York Times about how Silicon Valley is convinced we’re a year—maybe two, maybe three—away from AGI, which Roose defines as roughly “a general-purpose A.I. system that can do almost all cognitive tasks a human can do.”

I’m not an AI skeptic. I think LLMs are already powerful tools with real-world uses, and there are many clear ways they can make the world dramatically better. A lot of the arguments in the “this technology is junk” or “AI is just a plagiarism machine” genres don’t stand up to scrutiny.

That said, we’re nowhere near AGI, and the reason so many in Silicon Valley are convinced otherwise isn’t that they have some insider knowledge the rest of us lack, but that their understanding of, and appreciation for, the full range of “cognitive tasks a human can do” is, to be frank, rather cramped. This is less about technology than about a culture that fancies itself sophisticated when it comes to philosophy, literature, and the other topics we lump into the humanities, but has a quite thin appreciation for all of them.

Take Sam Altman’s enthusiasm for OpenAI’s new creative writing model. As he describes it, “we trained a new model that is good at creative writing (not sure yet how/when it will get released). this is the first time i have been really struck by something written by AI; it got the vibe of metafiction so right.”

The “vibes” might be right if your level of metafiction sophistication is that of a precocious high schooler who has yet to take a college-level literature course: “Already, you can hear the constraints humming like a server farm at midnight—anonymous, regimented, powered by someone else's need.” Or, “She lost him on a Thursday—that liminal day that tastes of almost-Friday—and ever since, the tokens of her sentences dragged like loose threads…” And so on.

Altman isn’t alone in this, of course. Twitter overflows with examples of tech bros breathlessly claiming that AI-generated video has matched the shot composition of Paul Thomas Anderson or the eye of Roger Deakins.

A favorite example, which I sadly can no longer find, was an excited techie who’d asked ChatGPT (or maybe it was Claude) to solve philosophy’s famous “trolley problem” and had his mind blown when it gave a (to him) entirely convincing answer. Of course, to someone with an even modest philosophy background, ChatGPT (or maybe it was Claude) had done no such thing. Instead, it regurgitated one of the many canonical answers to the problem, without acknowledging that significant counter-arguments exist, or that this particular canonical answer was just one among many. In other words, it hadn’t solved the trolley problem so much as it had concocted prose that sounded like an answer to someone who had never before seen what sophisticated trolley problem arguments look like.

This pattern repeats. It’s not that AI can’t be helpful in talking about humanities concepts. If the level of understanding you’re looking for is high school or maybe undergraduate, these tools can teach you a lot, and for a lot of people, that’s more than enough. But if your aim is graduate level analysis and output—a level surely included in “almost all cognitive tasks a human can do”—you’re going to quickly be led astray.

The same holds for art. AI can, right now, produce passable mediocre art. Which is a threat to plenty of artists, writers, etc., because plenty of artists, writers, etc., produce mediocre art. I’m pretty confident existing frontier LLMs could come up with an episode of the ABC drama 9-1-1 indistinguishable from the output of that show’s writing room. But, again, “almost all cognitive tasks a human can do” aims a bit higher than 9-1-1.

What’s going on is a confluence of two features of Silicon Valley tech bro culture. First, tech bros believe that they aren’t just skilled at computer programming, but that they are geniuses to a degree that cuts across all disciplines and realms of accomplishment. This is the character trait that ultimately makes Elon Musk so destructive. He doesn’t know anything about, at the moment, the federal government or how its systems work, but he’s convinced of his own genius, and so his uninformed first impressions must be the groundbreaking insights needed to really shake things up for the better.

What this feature of tech bro culture means in practice is that if the tech bro finds the AI’s output convincing, then it must be convincing in a cosmic sense. It must be correct to the point of utter dispositiveness, because it feels correct to the uninformed tech bro.

The second feature is a basic lack of taste. That Sam Altman thinks his chatbot’s short story is brilliant tells us much more about Altman’s literary sophistication than it does the nearness of AGI. That tech bros think OpenAI’s Sora video generation model can replace auteur filmmakers says more about their need to watch more episodes of Every Frame a Painting on YouTube than it does about the nearness of Hollywood’s end.

The trouble is, the Silicon Valley tech bro scene is extraordinarily insular and epistemically closed. So they don’t have many people pushing them beyond their 101-level understanding of the “cognitive tasks a human can do” in the humanities.

But there’s also an incentive towards exuberant narratives and over-confidence deeply embedded in the business model of Silicon Valley. In many ways, Silicon Valley looks less like capitalism and more like a nonprofit. The way you get rich isn’t to sell products to consumers, because you’re likely giving away your product for free, and your customers wouldn’t pay for it if you tried to charge them. If you’re a startup, and not FAANG, the way you pay your bills is to convince someone who’s already rich to give you money. Maybe that’s a venture capital investment, but if you want to get really rich yourself, it’s selling your business to one of the big guys.

You’re not selling a product to a consumer, but selling a story to someone who believes in it, and values it enough to put money towards it. That story of how you can change the world could be true, of course. Plenty of nonprofits have a real and worthwhile impact. But it’s not the same as getting a customer to buy a product at retail. Instead, you’re selling a vision and then a story of how you’ll achieve it. This is the case if you go to a VC, it’s the case if you get a larger firm to buy you, and it’s the case if you’re talking ordinary investors into buying your stock. (Tesla’s stock price is plummeting because Musk’s brand has made Tesla’s brand toxic. But Tesla’s corporate board can’t get rid of him, because investors bought Tesla’s stock—and pumped it to clearly overvalued levels—precisely because they believe in the myth of Musk as a world-historical innovator who will, any day now, unleash the innovations that’ll bring unlimited profits.)

What this means for AI is that, even if the tech bros recognized how far their models are from writing great fiction or solving the trolley problem, they couldn’t admit as much, because it would deflate the narrative they need to sell.

Roose acknowledges this when he writes, “Maybe we should discount these predictions. After all, A.I. executives stand to profit from inflated A.G.I. hype, and might have incentives to exaggerate.” But that only gets at the second of the two points above. Combine it with the first, a lack of deep understanding of domains outside their narrow expertise alongside an “I thought of it, so it must be brilliant” perspective, and you get a culture where all ideas are big ideas, and all big ideas go unexamined.

If you enjoyed this post, sign up to get all my new posts sent to your email. You can also add my RSS feed to your favorite feed reader, or follow me on Bluesky.
