How Does Lytro Capture Light Fields for Virtual Reality?

Set the human experience of light apart from its objective, unobserved behavior, and try to imagine seeing light in both its particle and wave forms at the same time. If you are struggling, here is a snapshot of that duality: light captured as both a waveform and a stream of particles.

A research team led by Fabrizio Carbone at EPFL used electrons to image light's dual nature in 2015. (Image courtesy of Phys.org.)
The brain's interpretation and computation of "light information" begins when light enters through our corneas, which refract it. The light passes through the watery aqueous humor and through the pupils. The crystalline lenses then refract it further, while the ciliary bodies contract to change the shape of the lenses. Rods and cones on the retina transform the light into electrical signals, which are sent along the optic nerves to the visual cortex in our brains.

Over the past 200 years, cameras have evolved from analog to digital and from bulky to miniaturized. Now the world's virtual reality and augmented reality enthusiasts are attempting to create more immersive experiences by improving the way a physical environment is captured digitally. Capturing a physical environment digitally requires a 3D scanning system, and the specific considerations for creating a digital, as-built model for virtual reality depend on the current technological limits of photorealistic reality capture.

A San Francisco–based company called Lytro has designed and constructed a light-field camera and developed an array system it calls Immerge to capture, compute and create an immersive virtual reality experience of a musical performance at St. Ignatius Church.

A light-field camera is designed to capture light from many angles at once, producing images with depth and color computed from the intersections of rays traveling in different angular directions. In an array of cameras arranged in a predesigned capture matrix, each camera can be programmed to "see" a different perspective, with its exposure, shutter timing, focal length and position all carefully measured and quantified.
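To make the idea concrete, here is a minimal sketch of the standard two-plane light-field parameterization, in which every captured ray is indexed by where it crosses a camera plane (u, v) and an image plane (s, t). This is an illustration of the general technique, not Lytro's actual pipeline; the function name, grid spacing and plane separation are all illustrative assumptions.

```python
import numpy as np

def ray_from_sample(u, v, s, t, plane_separation=1.0):
    """Convert a 4D light-field sample (u, v, s, t) into a ray.

    (u, v): intersection with the camera (aperture) plane at z = 0.
    (s, t): intersection with the image plane at z = plane_separation.
    Returns an origin point and a unit direction vector.
    """
    origin = np.array([u, v, 0.0])
    target = np.array([s, t, plane_separation])
    direction = target - origin
    return origin, direction / np.linalg.norm(direction)

# Example: a 3x3 grid of cameras, each "seeing" the same image-plane
# point from a different angle -- the angular diversity that a
# light-field rig is built to capture.
camera_grid = [(u, v) for u in (-0.1, 0.0, 0.1) for v in (-0.1, 0.0, 0.1)]
for u, v in camera_grid:
    origin, direction = ray_from_sample(u, v, s=0.0, t=0.0)
    print(f"camera ({u:+.1f}, {v:+.1f}) -> ray direction {direction.round(3)}")
```

Because each ray's origin and direction are known, intersecting rays from different cameras is what lets the system recover depth as well as color.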

Creating a multi-camera, array-based system requires considerable time and capital, and apart from Immerge, Lytro has also developed a standalone light-field camera called Illum. The Immerge array of light-field technologies allows Lytro to capture a light field, calculate ray angles and then render a virtual representation for interactive immersion.

This incredible array of 475 cameras, called Immerge, captured and processed a huge amount of visual data using Google's cloud platform and custom rendering techniques designed by Lytro. (Image courtesy of Lytro.)
Light rays pass through the lens and aperture of each Lytro camera, carrying their color, angular direction and intensity. An array of microlenses sorts the captured rays into image discs, each covering its own region of the sensor, so that bundles of light rays are gathered separately. A computational engine within the hardware then processes these bundles, using a geometric model to reproduce accurate light behavior in virtual reality.
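One classic thing such a computational engine can do with those ray bundles is synthetic refocusing: shift each sub-aperture view in proportion to its offset in the aperture grid and average the results, bringing a chosen depth plane into focus after capture. The sketch below shows that shift-and-add idea in general form; it is not Lytro's proprietary engine, and the array shapes and the `alpha` focus parameter are illustrative assumptions.

```python
import numpy as np

def refocus(light_field, alpha):
    """Synthetically refocus a 4D light field by shift-and-add.

    light_field: array of shape (U, V, H, W) -- one HxW sub-aperture
                 image per (u, v) position in the camera/lenslet grid.
    alpha: controls the virtual focal plane; 0 keeps the original
           focus, other values move it nearer or farther.
    """
    U, V, H, W = light_field.shape
    accum = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the
            # center of the aperture grid, then accumulate.
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            accum += np.roll(light_field[u, v], shift=(du, dv), axis=(0, 1))
    return accum / (U * V)

# Usage with random stand-in data (a real capture would come from the
# decoded microlens image discs described above):
lf = np.random.rand(5, 5, 64, 64)
refocused = refocus(lf, alpha=1.5)
print(refocused.shape)  # (64, 64)
```

The same per-ray geometry also supports shifting the virtual viewpoint, which is what makes light-field capture attractive for immersive virtual reality.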

To learn more about this technology and the virtual reality capture at St. Ignatius Church, visit the Lytro blog.