The Dartmouth
April 20, 2026
Behind the Hype: Professors assess new Anthropic model, implications of AI

Claude Mythos Preview carries cybersecurity risks, but efforts to limit access may lead to a “two-tiered” economy.

On April 7, artificial intelligence lab Anthropic announced Claude Mythos Preview, a powerful new large language model that the company claims has found “thousands of high-severity vulnerabilities” across the internet. Anthropic said it would not publicly release the model due to security concerns and is forming a consortium of large tech companies — called Project Glasswing — that will use Mythos to patch vulnerabilities in critical software.

To learn more about Mythos and how rapid advancements in AI may affect our society and economy, The Dartmouth sat down with English professor and critical AI scholar James Dobson, engineering professor and economist Geoffrey Parker and computer science professor Soroush Vosoughi.

Is the AI hype real? Is this the most transformational technology since fire?

JD: It’s capable of doing some really interesting stuff. I don’t think it’s fire.

GP: In terms of moving the needle on the human condition — from lives that were short, nasty and brutish to the rather more comfortable standard of living we enjoy today — I’m going with access to abundant sources of energy, clean water and sewage and mechanized and organized farming. I think we’ve seen more transformational advances in the human condition over the last couple hundred thousand years.

SV: It is a transformational technology from an engineering point of view, for sure. It’s less transformational from a scientific point of view: It is not a scientific breakthrough on the same level as the theory of relativity. It doesn’t really add much to our understanding of intelligence or cognition.

Is Mythos itself a big deal?

JD: Generative AI is quite good at formal, structured objects, so it’s not surprising to me that a new model that is very good at looking at these formal, structured objects could identify bugs. I think it could be a boon for improving software.

SV: I do believe Anthropic that there’s a cybersecurity risk with releasing Mythos. The part I’m more skeptical about — and I’m not saying Anthropic is lying — is that they’re a company, and it was a strategic decision to release that report. I’m sure Mythos is a very powerful model, but it’s not a revolutionary step. It’s not like now we’re in a new generation of AI. OpenAI and Google will almost certainly reach similar capabilities to Mythos in a few months.

So why release it like this?

JD: I’m cynical about Anthropic. Announcing Mythos, forming a partnership with a bunch of companies, gathering together a bunch of business leaders to talk about the fact that they’re vulnerable and exposed, seems like a play to capture more customers for Anthropic. I think the fight over the use of these models by the military directly led to this move by Anthropic. They are keeping themselves relevant by announcing this upfront, getting themselves a seat at the table with major vendors that they wouldn’t have had access to prior.

SV: What Anthropic did is limit access to only a few companies. That is really moving towards this two-tiered system, where certain entities have access to really powerful AI foundation models, and the rest of us have access to less powerful models. I’m not blaming them for limiting access — I obviously don’t want my bank account hacked. But it is pushing towards the two-tiered system, and I really am worried about that.

How should the government respond to AI advancements?

JD: The people who have been pushing for oversight and restrictions are the big companies with closed models. Why? They want to stop other modes of development. And I don’t think that generative AI tools are going to enable people to do a whole lot of stuff that they couldn’t have done otherwise. You don’t need a chatbot to learn how to make meth.

GP: You need to have the ability to conduct oversight within some of these technology companies, because you don’t know what the algorithms are doing. There’s an argument to be made that it makes sense to get inside the companies to verify that the algorithms are doing what the organizations say they are.

SV: Research on these models should not be regulated, but deploying them in real-world settings should be, the same way we regulate pharmaceutical drugs. The Food and Drug Administration doesn’t care about what you’re doing research-wise in your university, but if you’re bringing it out as a product, it has to go through tests to make sure it’s safe for consumption.

How will AI affect work? Should Dartmouth students be worried about their own job prospects?

JD: A lot of the entry-level positions that Dartmouth students go to are being cut, trimmed and restructured. I don’t think that going into consulting or investment banking was necessarily good for the world, or good for Dartmouth students. I think people would be okay with finding other jobs.

GP: If you’re super pessimistic, then these new technologies are incredibly dangerous. If you are reasonably pessimistic, then you worry about the workforce and large scale unemployment. If you’re an optimist, then you say, “Well, what has always happened in the past is that every technological change has had some disruption, and then people have always ended up figuring out how to integrate the tech and it hasn’t really affected employment.”

SV: AI will almost certainly cause some volatility; it already has in the job market, as the market adapts. But I do think that it will normalize. The number of jobs will not go down — it might even go up — but the nature of the jobs will be different. Optimistically, AI will actually make jobs more interesting, in that a lot of the menial work that people are doing now will be done by AI and you can focus on more creative tasks. Pessimistically, it will be the opposite, where AI will do a lot of the creative work and then you’ll just be a prompt engineer.

What do people get wrong about AI?

JD: These are neural networks that are trained to do predictions. Even though we don’t know exactly how they’re functioning or making the decisions that they’re making, we have a pretty good understanding of what they are. There’s no superhuman intelligence baked into these things. There’s no agency baked into them. These are extremely large neural networks that are basically evolved versions of what existed in the 1950s.

GP: What we get wrong is being overly confident that any one scenario is the one that will dominate, and what I hope would be more of the conversation is to think about no-regrets regulatory policies you could put in place to protect against potential futures. For example, China entering the World Trade Organization effectively took out a lot of the manufacturing companies in the Upper Midwest, and 25 years later, we’re still dealing with the fallout. There was no coherent effort to defray the impact on a relatively small set of people. We’ve got lots of potential for disruption with AI, and we should be getting ready for it.

Interviews have been edited for clarity and length.