
Elliott: OpenAI’s Mira Murati has it all wrong

We must question the Thayer alumna’s controversial comments and carefully consider Dartmouth’s duty as a herald of innovation.

On June 8, Dartmouth Engineering hosted OpenAI Chief Technology Officer Mira Murati Th’12 for a conversation about generative artificial intelligence and the future of the technology. 

Around halfway through the discussion, moderated by Trustee Jeffrey Blackburn ’91, Murati made a comment that turned some heads: “Some creative jobs maybe will go away, but maybe they shouldn’t have been there in the first place.”

This moment is embarrassing, disappointing and educational. It underscores that we need to listen very carefully when AI industry leaders speak.

Murati’s statement demonstrates an incredible lack of self-awareness. OpenAI’s technology is quite literally built on the work of people in “creative jobs.” Its models have analyzed hundreds of millions of images, lines of text and seconds of video — created by real people — in order to respond to your queries like real people might.

A GenAI architect suggesting that some creative jobs shouldn’t have existed in the first place is like a person saying they’d be better off if their parents had never been born.

Now, are these technologies really getting good enough to supplant human creatives or even work reliably alongside them? Take OpenAI’s Sora model, which can generate videos of a single subject, up to a minute in length. The demos look good, but they don’t reveal how much of a nightmare this software is to use.

In an interview, users described the product as “a slot machine.” You can input the same prompt twice in a row and get completely different outputs. Put differently, you can’t build scenes: you can’t cut back and forth in a consistent location, with consistent characters, in consistent light. And even when a shot does land, it has to be painstakingly touched up with visual effects.

Okay, how about OpenAI’s flagship product, ChatGPT? In a 2022 essay, Murati asks GPT-3 to “make a Pablo Neruda poem as an ode to Planck’s equations.” The result sounds nothing like Pablo Neruda. It is not humming with the breath of an idea. The poem I generated by giving the same prompt to GPT-4o, a model supposedly more sophisticated than its predecessors, is even worse. It reaches for the tired “dance” metaphor three times in just a few lilting quatrains, which evoke nothing of Neruda’s free verse.

Perhaps the technology hasn’t reached its full form yet. After all, these models went from producing relatively incoherent babble to scoring a 1410 on the SAT in just a few years. At this rate, they must be a matter of months away from perfection, right? 

Past performance is not necessarily indicative of future results. There is growing skepticism that generative AI models will be able to maintain the rapid pace of improvement we’ve come to expect from them. Without major breakthroughs in energy generation and computing power, we will hit the limits of the transformer architecture. Further, the models are running out of viable training data.

Yet, Murati claimed at Dartmouth that OpenAI’s GPT will reach “PhD-level” intelligence “soon.” When she said this, I badly wished that Blackburn would step in and ask the basic follow-up question: What does reaching “PhD-level intelligence” even mean? It apparently means the model can score higher than it currently does on a test consisting of 448 “Google-proof” multiple-choice questions. But you don’t need to know that. You just need to hear “PhD-level intelligence” and your jaw needs to hang open.

Murati and other AI leaders are featured relentlessly on futurist panels, during which they keep their audiences focused on what AI might someday be able to do — so that we’ll spend less time considering what the technology actually does right now.

When it comes to Sora, Murati has been vague about when the video generation tool will be useful. At the June 8 Dartmouth event, Murati also said that “it’s actually very, very difficult to commercialize AI technology.” She asked, “Why is it so hard for these really amazing, successful companies to actually turn this technology into a helpful product?” If I were Blackburn, who has an MBA from the Stanford Graduate School of Business, the hair on my arms would stand up. OpenAI has built a solution in search of a problem. If these companies are as amazing and successful as she says, why haven’t hundreds of companies developed thousands of use cases for these tools, and why are Google and Amazon quietly cooling GenAI expectations?

Later in the event, Murati remarked that she is often asked of OpenAI’s technology, “What is it good for?” She replied: “Everything. So just try it.” If Murati were hawking miracle elixirs and said these words, we would laugh. But when the wares are AI technologies, which are every bit as opaque to the non-technical consumer as the products of alchemy, we smile and applaud and ask how many shares of Nvidia we can buy immediately.

When we hear AI industry leaders speak, we must remember they are speaking in both an educational and a sales capacity. And if they’re not directly selling, they’re buying time until their product is theoretically good enough to sell. These people are deeply invested in a highly speculative, hype-driven bubble.

Let me return to the quote that kicked off this discussion, because Murati adds a caveat: “Some creative jobs maybe will go away, but maybe they shouldn’t have been there in the first place if the content that comes out of it is not very high quality.” Listen carefully. She is conflating job performance with the jobs themselves. This rhetorical sleight of hand fits a pattern: Murati is still in protective sales mode, setting out cushions so that potential layoffs will be easier to rationalize.

Now, let’s say GenAI progresses at a remarkable rate. What happens if its creative outputs are suddenly a fair bit richer? Creative destruction is a natural economic process, but we have an important aesthetic decision to make here: Do we want our most creative functions to be destroyed and replaced by computers?

Consider a window into Murati’s view of human creativity. In her 2022 essay, she writes, “GPT-3 reaches answers based on the patterns it identifies from the existing usage of human language creating a map, the same way we might piece characters together when writing essays or code.” I hope this is not how you write essays. This is the problem: AI can do sentiment and pattern recognition, but it cannot really capture a writer’s voice, because although your voice is somewhat predictable, the things you write are not necessarily derived from everything you’ve written before.

Murati declares that “a neural network with good enough next character or word prediction capabilities should have developed an understanding of language.” But “understanding” is a deceptive word choice. Large language models don’t actually understand anything, at least not the way humans do. They catalog patterns and frequencies from the material they consume, and, roughly speaking, by running that process in reverse, they can “create” similar material.

The products of GenAI are something like an average of the relevant human creations. But we want extraordinary things out of art; we want art to explore the recesses of our hearts and depict the strangeness of our vision. Consuming human-created work makes us feel a little less alone, because it reassures us that someone else also sees what we see.

Murati claims that GenAI will “expand our creativity and imagination.” But really, using AI replaces, not expands, imagination. Imagination is a muscle, and every time you refuse to exercise it, it atrophies.

When speakers like Murati are brought to campus, there is a great tension between Dartmouth-the-Academic-Institution and Dartmouth-the-Brand. It is, of course, in Dartmouth’s interest to promote its brightest stars, but it should do so in alignment with its academic mission. It is Dartmouth’s duty to engage critically: to ask industry titans what they really mean, to interrogate what their technology will really do and to wonder whether their products are really what society needs. That is not what Blackburn was doing. Dartmouth should teach us not only to become economic mavericks and citizens of the world, but also to be good, complete humans who can express ourselves clearly, think critically and imagine wildly.

If Dartmouth is so proud of its legendary alumni in “creative jobs” — Jake Tapper ’91, Shonda Rhimes ’91, Louise Erdrich ’76 — why does the College so unflinchingly carry water for the technologies that will leave less room for talents like them to emerge in the future?

I believe that Murati is a brilliant Thayer alumna, but we disagree about the nature of creativity. I’m glad she came to speak, but I won’t take her vision for the future as gospel — and you shouldn’t either.

Even in your personal life, instead of asking a machine to squirt out a “roses-are-red” poem for your significant other, write one yourself. It will be better, it will expand your imagination and you’ll fall more deeply in love.

Will Elliott is a member of the Class of 2025. He formerly worked for the multimedia section of The Dartmouth but is no longer involved in the organization. Guest columns represent the views of their author(s), which are not necessarily those of The Dartmouth.