The Dartmouth | March 28, 2024

Q&A with engineering professor and Technology and Engineering Emmy winner Eric Fossum

Fossum discussed his research career and his path to winning the Emmy award.


In November, the National Academy of Television Arts and Sciences honored Dartmouth engineering professor Eric Fossum at the annual Technology and Engineering Emmy Awards for his pioneering work on a pixel image sensor now widely used in cell phone cameras and webcams. Fossum received his Bachelor of Science in physics and engineering from Trinity College and his master's and Ph.D. in engineering and applied science from Yale University. He has also worked at the NASA Jet Propulsion Laboratory and founded two tech companies, Photobit and GigaJot. In addition to his teaching and research, Fossum is the director of Dartmouth's Ph.D. Innovation Program and vice provost for entrepreneurship and technology transfer. His research has earned him several other prizes, including the Queen Elizabeth Prize for Engineering. The Dartmouth sat down with Fossum to discuss his career path and the significance of his research.

Your research focuses on solid-state image sensors, such as the complementary metal oxide semiconductor active pixel sensors. How did you initially get involved in this?

EF: Right now my work is on a next-generation sensor. But the CMOS image sensor technology that you find in your smartphone today is something that came out of my work at the NASA Jet Propulsion Laboratory in California, back in maybe 1992 or 1996. We were trying to shrink the size of cameras on interplanetary spacecraft, which were really quite huge because of how much technology we were using. We invented this new camera-on-a-chip technology, which allowed us to shrink the size of cameras on spacecraft and rovers on Mars and all those places — so that was the genesis of the technology. But after we did that, we figured, ‘Hey, this is pretty good for planet Earth.’

Can you expand on the Emmy award you received? What is the significance of the sensor? 

EF: The Emmy award is for contributions to television arts and sciences, and obviously, I’m more on the science side. That’s because the CMOS image sensor technology is used not only in smartphones but also in all kinds of television production equipment, including newsgathering by citizens through their smartphones. If you think about social justice, for example, you know how important it is that we all have a camera in our pocket. So the Emmy recognizes the contribution of this technology to making cameras a lot smaller, and of course better, and making citizen journalism possible.

How did you go about the research for this award? How long have you been working on it?

EF: Once we decided the sensor technology was good for planet Earth, we actually started a company to commercialize the technology; it was called Photobit. We further developed the technology, made products, and they went into some very early webcams and some prototype smartphones. But then, our company was acquired by a bigger semiconductor company called Micron. Also, early on at JPL, we worked with Kodak, trying to commercialize that technology as well. So we transferred the technology from JPL, which is part of Caltech, to Kodak. The Emmy winners that were announced were me, as the primary inventor, as well as Kodak. I was also very happy that we were able to get some extra trophies for some of my other team members from Photobit.

Now we have moved on to a new technology in my research lab at Dartmouth, which is an even more sensitive camera chip called a quanta image sensor.

How does the quanta image sensor work? Is this a recent development? 

EF: It counts individual photons of light one at a time. I started it probably about five years ago, when we were doing a lot of research. But then my Ph.D. students and I co-founded a startup company out of Dartmouth called GigaJot, which is now operating in southern California. So now we’re again trying to commercialize that technology, and maybe it’ll make it to your smartphone someday. We’ll see.

What do you love about doing research? 

EF: Well, research is, boy, it’s just fun. It doesn’t always lead to success. There’s that old game, Chutes and Ladders. You roll the dice, and sometimes you go up a ladder and advance toward the goal. Or sometimes you fall into a chute and go backward in the game. Research is kind of that way too — three steps forward, two steps back. So you have to have a thick skin and realize that not everything you try is going to work. But when it does work, it feels really good. When you can actually impact society through the technology that you’ve created through your research, it’s really a fantastic feeling.

What courses have you taught at Dartmouth? And how long have you been at Dartmouth? 

EF: I’ve worked at Dartmouth since 2010. I teach an undergraduate class on solid-state devices, which runs in the winter term. I also teach classes in technology, innovation and entrepreneurship, and I am the director of the Ph.D. Innovation Program, which is mostly for Ph.D. students to learn how to commercialize technology that they create or invent as part of their research.

What are you looking at for future research, specifically in image sensors?

EF: Our photon-counting quanta image sensors still have their own limitations. I’m trying to understand the semiconductor device physics behind the background noise in these devices, because it limits how close to perfect the sensitivity can be. So I want to understand what those limits are all about. It’s really pushing the forefront.

What do you think could be the next major development in the field? 

EF: I think that a lot of camera technology is going toward computational imaging, where you do a lot of image processing through coding, essentially, and AI, to try to guess at what the missing parts of the image might be, or how to see in the dark even better.

This interview has been edited and condensed for clarity and length.