I’m posting this both to danreetz.com and futurepicture.org — future posts will not be double-posted in this manner. Be sure to subscribe to the RSS feed over at Futurepicture, there’s much more in store. This is, in fact, just a teaser…
FuturePicture is about the future of photography. It is about cameras with capabilities that sound like science fiction, and look like a million bucks.
So you want to influence the future of photography? Well, you gotta build a camera.
And that’s exactly what Matti and I did. Twice.
First Large Light Field Camera Array:
Second Large Light Field Camera Array:
Computational cameras have only come into being over the last two decades. Why just now? Well, cheap computation, plentiful sensors, and a hundred and fifty years of relative design stagnation explain some of it. Computational photography is a young field, still deciding what, exactly, it is and does, but the undeniable common factor is that a powerful camera is involved. This “camera” could look perfectly ordinary or be completely unrecognizable, understandable only by analogy, from a fly’s eye to the photosensitive spots on nematodes. Computational photography seeks inspiration from disparate sources: biology, computer vision, optics, and statistics. The price of admission is math prowess, some computer programming power, and a camera. Or twelve.
Well, together we (Daniel Reetz and Matti Kariluoma) have that covered. We aim to take computational photography out of the lab and into practical use. We want to make the hardware affordable and accessible, because outside the ivory towers of academia, there are creative people of all stripes who could use and abuse this kind of photographic power.
So, what does this thing do? The primary function of this array is to capture the Light Field, a four-dimensional function that is capable of describing all rays in a scene. Surrounding you, now, and always, is a reverberating volume of light. Just as sound echoes around a room in complex ways, bouncing from every surface, so does light, creating a structured volume. Traditional single-lens cameras project this three-dimensional world of reflected light onto a two-dimensional sensor, tossing out the 3D information in the process and capturing only a faint, sheared sliver of the actual light field. By taking many captures at slightly shifted locations, it is possible to capture a crude representation of the light field. The number of slices determines the resolution of capture; our 12 captures at 7cm separation are a bare minimum. What can you do with a light field? The lowest-hanging fruit is computational refocusing. By computational refocusing, we mean focusing the image AFTER it is captured.
The particular method of computational refocusing that we employ creates an enormous virtual aperture. The size of the virtual aperture determines a few things. One, the size of the object you can “see through”. Two, the depth of the focal plane, which is currently extremely shallow, on the order of a few centimeters at most. In this image, we can see right through Poodus as he flies through the air.
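For the curious, the core idea behind this kind of refocusing is surprisingly simple: shift each camera’s image by an amount proportional to that camera’s position in the array, then average. Objects whose parallax matches the chosen shift line up and come into focus; everything else smears out across the giant synthetic aperture. Here is a minimal sketch of that shift-and-add approach in Python with NumPy. This is an illustration, not our actual array software; the function name and parameters (`refocus`, `baseline_px`, `depth_shift`) are made up for this example, and it uses whole-pixel shifts for brevity where a real implementation would interpolate.

```python
# Sketch of shift-and-add ("synthetic aperture") refocusing for a
# linear camera array. Illustrative only -- not the array's real code.
import numpy as np

def refocus(images, baseline_px, depth_shift):
    """Average the array's images after shifting each one sideways in
    proportion to its camera's offset from the array center.

    images      : list of (H, W) grayscale frames, one per camera
    baseline_px : per-camera spacing, expressed in pixels of disparity
    depth_shift : scalar choosing the virtual focal plane; objects
                  whose disparity matches it snap into focus, the rest
                  blur across the synthetic aperture
    """
    n = len(images)
    center = (n - 1) / 2.0
    acc = np.zeros_like(images[0], dtype=np.float64)
    for i, img in enumerate(images):
        # Whole-pixel shift for camera i; sub-pixel accuracy would
        # require interpolation, omitted to keep the sketch short.
        dx = int(round((i - center) * baseline_px * depth_shift))
        acc += np.roll(img, dx, axis=1)
    return acc / n
```

Sweeping `depth_shift` over a range of values is what lets you pick the focal plane after the fact, and averaging a dozen views is also why a foreground occluder, seen from enough different positions, effectively fades away.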
Camera array construction and software will be the topic of another post; this post is just to introduce our work on the array and make public some of its output. A brief summary: we employ modern rapid-prototyping equipment — laser cutters, flatbed scanners, digital micrometers, and open source hardware and software — Arduino and StereoDataMaker. All the technology we develop will be released under open-source licenses to encourage, as much as possible, the development of similar camera arrays and to speed the hobbyist adoption of computational photography techniques.
A brief introduction: Daniel Reetz is an artist, camera hacker, and graduate student in the visual neurosciences. Matti Kariluoma is a CS/Math major with a focus on artificial intelligence. Together, we’re working on computational photography, and we’re going to bring our respective backgrounds to bear on it. Want to get in touch? Leave a comment here.