This Nokia N95 smartphone has a 5-megapixel camera with Carl Zeiss optics. The phone also contains a second video camera and five radios: cellular, WiFi, Bluetooth, GPS, and FM.
The cell phone at left took this picture, shown here at reduced resolution. Can your point-and-shoot camera do any better? Here is an album of nature pictures taken by the N95, and one of the Stanford campus.
The boat harbor doesn't belong in this picture. It was found on the Internet and inserted into this photograph. For details, see this SIGGRAPH 2007 paper.
By inserting a microlens array into a handheld camera, one can create a plenoptic camera, which can record a light field in a single snapshot.
The photographs produced by this camera can be refocused after they are captured. Click above for an example of digital refocusing.
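The digital refocusing described in the captions above can be sketched in a few lines. This is a simplified shift-and-add illustration, not the actual implementation behind the linked example: each sub-aperture view of the light field is shifted in proportion to its lens position and the views are averaged, which focuses the result at a new virtual depth. The 4D array layout, the `alpha` parameter, and the use of circular shifts are assumptions made for brevity.

```python
import numpy as np

def refocus(light_field, alpha):
    """Synthetic refocusing by shift-and-add (illustrative sketch).

    light_field: array of shape (U, V, H, W) holding one sub-aperture
    image per (u, v) lens position. Shifting each view by an amount
    proportional to its offset from the central view, then averaging,
    focuses the result at a virtual depth controlled by `alpha`.
    Circular shifts (np.roll) stand in for proper boundary handling.
    """
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(light_field[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# With alpha = 0 no shifting occurs, so this is simply the average of
# all sub-aperture views; a constant light field stays constant.
lf = np.ones((3, 3, 4, 4))
flat = refocus(lf, 0.0)
```

Varying `alpha` sweeps the focal plane through the scene, which is why a single plenoptic snapshot supports refocusing after capture.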
Computational photography refers broadly to sensing strategies and algorithmic techniques that enhance or extend the capabilities of digital photography. The output of these techniques is an ordinary photograph, but one that could not have been taken by a traditional camera. Representative techniques include high dynamic range imaging, flash/no-flash imaging, coded aperture and coded exposure imaging, photography under structured illumination, multi-perspective and panoramic stitching, digital photomontage, all-focus imaging, and light field imaging.
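To make one of these techniques concrete, high dynamic range imaging can be sketched as a weighted merge of differently exposed shots. This is a minimal illustration assuming a linear sensor response; the `merge_hdr` helper, the hat-shaped weighting, and the simulated scene are all constructions for this example, not a reference implementation.

```python
import numpy as np

def merge_hdr(exposures, times):
    """Merge differently exposed images into one radiance map.

    Each pixel's radiance is estimated as a weighted average of
    (pixel / exposure_time) across shots, with mid-range pixel values
    weighted most heavily since they are least likely to be clipped
    or dominated by noise. Assumes a linear sensor response.
    """
    num = np.zeros_like(exposures[0], dtype=np.float64)
    den = np.zeros_like(exposures[0], dtype=np.float64)
    for img, t in zip(exposures, times):
        w = 1.0 - np.abs(img - 0.5) * 2.0   # hat weight, peak at 0.5
        num += w * (img / t)
        den += w
    return num / np.maximum(den, 1e-8)

# Simulate three exposures of a scene of constant radiance 0.8,
# with sensor values clipped to the [0, 1] range of each shot.
times = [0.25, 0.5, 1.0]
radiance = np.full((2, 2), 0.8)
shots = [np.clip(radiance * t, 0.0, 1.0) for t in times]
hdr = merge_hdr(shots, times)   # recovers the original radiance
```

In a real pipeline the nonlinear camera response curve would have to be estimated and inverted first, and the merged radiance map tone-mapped back to a displayable image.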
Stanford has offered a course on computational photography since 2004. This year, the course will focus on computational photography using mobile computing platforms, i.e., cell phones. The cameras in these devices have been improving in resolution, optical quality, capabilities, price, and popularity. They are already eating into the bottom of the point-and-shoot camera market; within a few years these two markets may merge. Moreover, camera phones offer features that dedicated cameras do not: wireless connectivity, powerful processors, a high-resolution display, 3D graphics, and high-quality audio. Finally, and perhaps most importantly, these platforms run real operating systems, and some of the manufacturers seem willing to open their platforms to software development by third-party developers and the academic community. In this one-time seminar course, we will survey the rapidly converging technologies of photography, digital imaging, and mobile computing.
The course is targeted at both CS and EE students, reflecting our conviction that successful researchers in this area must understand both the algorithms and the underlying technologies. Most classes will consist of a lecture by one of the instructors. These lectures may be accompanied by readings from textbooks and the research literature, which will be handed out in class or posted on the course web site. Students are expected to:
We are encouraging but not requiring students to implement their course project on a mobile computing platform. For this purpose, as part of a research collaboration between Nokia and Stanford (called Camera 2.0), Nokia has agreed to loan every student in the course an N95 (see picture above), which is not yet widely available in the United States. The student or team who turns in the best project implemented on an N95 will be allowed to keep their phone(s). Students may also choose to implement projects on the Stanford Multi-Camera Array or a "FrankenCamera" we are building in our research laboratory to facilitate research on computational photography.