This article is featured in the 2025 Homecoming Special Issue.
As generative artificial intelligence rapidly expands, digital artists and filmmakers face challenging questions about authorship, originality and whether to embrace — or eradicate — AI in media and film. The Dartmouth sat down with film and media studies department chair Roopika Risam to discuss the consequences of AI for artists both on campus and around the world.
AI challenges traditional notions of authorship. Is the use of AI considered a collaboration, your own work or the work of someone else? Who do you think gets to be called the “artist” when AI is used in the creative process?
RR: I think the very notion of authorship is inappropriate for materials generated with commercial generative AI platforms. One of the things that’s important to think about is that AI is so many things. When people say “AI” now, they’re usually referring to generative AI — things like ChatGPT, Claude, Gemini and other commercial products. But for a long time, there have been many uses of AI, machine learning and natural language processing. If you’re using those generative tools, I think there’s a real question about whether one can claim to be an artist. Yes, you have a vision, you have an idea and then the tool executes it. And history is full of us having ideas and using tools to execute them. But when the tool is drawing on this sort of vast repository of material that you’ve had no input into, and when it is potentially violating the intellectual property rights of other people, I think it really forces us to ask ourselves what it means to be an author or an artist.
How do filmmakers and media artists use AI, from scriptwriting to animation to editing? Where do the lines get blurred between originality and plagiarism, especially when generative AI doesn’t credit the sources of the material it draws on?
RR: I very much care about intellectual property rights. There are humans who make art. They make art in front of a camera, behind a camera, in a darkened room where they’re color correcting. And so much of that work can be automated. And when you automate it, what happens to the human? This is a fear that every technology over much of our history as a species has created: we can automate something, or a calculator can do something faster than we can do it by hand. And then we think: Oh my gosh, we’re not going to hire people to add by hand anymore. So this is not a new worry. But I think the scale that these technologies make possible makes this something we really need to be thinking about, because there’s a long-term consequence.
Given that AI models are trained on data and produce outputs based on statistical models, can that lead to distorted or biased results? What are the implications for media, visual culture and representation?
RR: Let’s look at a specific example of the data that has trained large language models — in this case, for textual output. That would be Wikipedia and Wikidata, which is a huge amount of data to train a model with. At the same time, we already know that Wikipedia’s policies around notability and verifiability skew its content toward white, male subjects and toward the U.S., Canada and Western Europe, and also toward English-language content — although there are Wikipedias in other languages. So there is a lot of bias being introduced just in the training data. Really, what you’re going to get out is based on what goes into it. It’s relying on the sort of textual material that exists, and that textual material is, statistically, a representation of the dominant culture, wherever it is. That’s where we would start to see biases and stereotypes.
Do you see generative AI tools being adopted in Dartmouth’s film and media curriculum? Is there a desire to eradicate AI from student work or a push to embrace it as part of the creative process?
RR: I’m speaking for myself — not the department. Unless you have a substantial knowledge base, you’re not going to be able to even use those technologies. They can be really useful and effective tools, but only if you already have that basis of knowledge. And our focus is on giving students that basis of knowledge, so that if this is something they choose to use, or if the technologies develop in a way that makes sense for them to become part of coursework, it would only be with that base of knowledge.
How can Dartmouth foster critical literacy around AI so that future artists, media makers and filmmakers use it thoughtfully rather than blindly? How do you think we can foster an open dialogue about both the consequences and potential use of AI?
RR: Dartmouth is really unique in that there are quite a number of us here whose work is really about thinking through the consequences, unintended or otherwise, of technology. I think that Dartmouth has a really unique opportunity to be a place where we do have these conversations. Certainly it begins at the department level, thinking about, in the context of this department, our disciplines and the majors we offer: Where is AI useful, if at all? How do we gauge it against the kinds of expectations for the knowledge that our students will have when they enter the workforce? But then, how do we ensure that, first and foremost, they have that knowledge? Ultimately, what would be nice is for every professor to be able to articulate clearly to students what is permissible in the context of a class and, more importantly, why. I feel like one can make a compelling case that you need to understand the foundations of computation to know what happens when you ask ChatGPT something. You need to know the history of a topic, so that when ChatGPT gives you something, your brain says, “wait, that’s not right,” and you never get there if you just start with that technology.
This interview has been edited for clarity and length.



