On Saturday, January 25, Stanford Professor Marc Levoy will be presenting a talk titled “What Google Glass Means for the Future of Photography” at LACMA. The event is presented in conjunction with the exhibition See the Light—Photography, Perception, Cognition: The Marjorie and Leonard Vernon Collection. In anticipation of the event, LACMA's Elizabeth Gerber asked him a few questions for Unframed.
For people not familiar with Google Glass, could you provide a short description?
Some people describe Glass as a cell phone you wear on your head. Like a cell phone, Glass has a display, camera, touchpad, motion sensors, radios, and a plug for charging its battery. I prefer to think of Glass as a new kind of digital assistant. In fact, one typically tethers Glass to a cell phone, and the two work together.
Glass can do things a cell phone can't, such as taking a picture just by winking, or sending a text message while driving without taking your hands off the wheel or your eyes off the road. Conversely, there are things a cell phone can do that Glass can't, like sharing a funny video with a friend. (Only one person can look through Glass at a time.)
Google Glass has been getting increased media attention this past year. What do you find most exciting about Google Glass?
I'm giving this talk as a Stanford professor, not a Googler, so don't expect a marketing pitch for Glass. In my view, the project's goals were to produce a device that is lightweight enough, unobtrusive enough, fashionable enough, and useful enough that one would wear it all day. Does it fulfill these goals? Not yet, but it's getting better with each new release. (And I do wear it all day, partly because it's also a comfortable pair of prescription sunglasses.)
Everybody who uses Glass has their own favorite feature. My specialty at Stanford is computational photography, and the feature I find most exciting about Glass is its ability to take high-quality, first-person, point-of-view pictures and videos on the spur of the moment—whenever you see something worth remembering. How Glass changes the game for photography is what I'll be talking about on Saturday.
You teach a digital photography class at Stanford and photograph during your free time. Are there specific photographs or photographers that are inspirational to you?
I teach both the art and science of photography, which I suppose comes from my mixed background in architecture and computer science. So I'm particularly inspired by photographers who deeply understand photographic technology, and by photographs that push the boundaries of what the human eye can see.
Ansel Adams produced beautiful photographs of nature, but he also wrote a three-volume treatise on photographic technique. If he were alive today, I'll bet he would embrace digital photography.
Harold Edgerton—the father of high-speed photography—is another favorite of mine. He pioneered strobe illumination, and he took some of the most striking and beautiful pictures ever captured. (Think of the milk-drop corona, or the bullet passing through an apple.)
The exhibition See the Light explores parallels between the history of photography and the history of vision science. In your view, where are these two fields headed over the next 10 years?
The principles of photography have remained largely unchanged since its invention by Joseph Nicéphore Niépce in the 1820s. A lens focuses light from the scene onto a photosensitive plate, which records this information directly to form a picture. Because this picture is a simple copy of the optical image reaching the plate, improvements in image quality have been achieved primarily by refining the optics and the recording method. These refinements have been dramatic over the past few decades, particularly with the switch from film to digital sensors, but they've been incremental.
Computational photography challenges this view. It instead considers the image the sensor gathers to be intermediate data, and it uses computation to form the picture. Often it captures several images of the same scene and combines them to produce the final picture. Representative techniques include high-dynamic-range (HDR) imaging, flash/no-flash imaging, coded-aperture and coded-exposure imaging, photography under structured illumination, multi-perspective and panoramic stitching, digital photomontage, all-focus imaging, and light-field imaging.
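To make the "combine several images" idea concrete, here is a minimal Python sketch of one such technique, exposure fusion (a close relative of HDR imaging), using OpenCV's implementation of Mertens fusion. The input and output file names are hypothetical placeholders, and a real pipeline would also align the frames before merging.

```python
# Exposure fusion: blend a bracketed sequence of exposures into a
# single well-exposed picture, without ever building a true HDR image.
import cv2

# Load a bracketed sequence: under-, normally, and over-exposed shots
# of the same scene (placeholder file names).
exposures = [cv2.imread(name) for name in ("under.jpg", "mid.jpg", "over.jpg")]

# Mertens fusion weights each pixel by its contrast, saturation, and
# proximity to mid-tone exposure, then blends the whole stack.
fused = cv2.createMergeMertens().process(exposures)

# The result is a float image in roughly [0, 1]; rescale to 8 bits to save.
cv2.imwrite("fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))
```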
As the megapixel wars wind down, camera companies will begin competing more and more on whatever fancy (and useful) computational photography features they can fit into their devices. This revolution has just begun, and it will completely transform photography over the next generation. Except in photojournalism, there will be no such thing as a "straight photograph"; everything will be an amalgam, an interpretation, an enhancement, or a variation—either by the photographer as auteur or by the camera itself—under manual control or fully automatically.
We'll also see increased experimentation at the boundary between still photographs and videos. Think of cinemagraphs, or Vine's six-second video clips, or Harry Potter "talking pictures." Lots of people and companies are experimenting in this space. Some of these experiments will be successful, a few will be beautiful, many will be useful, and some will take advantage of wearable devices like Glass. The fun has just begun.
Marc Levoy is the VMware Founders Professor of Computer Science at Stanford University.
Elizabeth Gerber, Education and Public Programs