Hyperspectral Camera Sees Beneath the Surface

Hyperspectral cameras are used across diverse industries, including satellite imaging, agriculture, mining, energy monitoring, and infrastructure and food-safety inspection.

The high cost of this technology, typically several thousand to tens of thousands of dollars, has limited its use to industrial and commercial applications.

But what if this technology were available at the consumer level?

HyperCam is a low-cost hyperspectral camera developed by UW and Microsoft Research that reveals details difficult or impossible to see with the naked eye. (Image courtesy University of Washington.)

Enter the HyperCam, a low-cost hyperspectral camera that captures images in both visible and near-infrared light to “see” beneath surfaces and reveal otherwise hidden details.

In a paper by computer scientists and electrical engineers at the University of Washington, the team details hyperspectral camera hardware that costs about $800, and potentially as little as $50 for a version that could be added to a mobile phone camera.

They also developed intelligent software that finds the “hidden” differences between what the camera captures and what is seen by the naked eye.

A typical camera captures a combination of red, green and blue visible light bands. By also capturing multiple wavelengths from across the electromagnetic spectrum, a hyperspectral camera can reveal otherwise invisible details.

For example, near-infrared cameras can reveal whether crops are healthy or a work of art is genuine. Thermal infrared cameras can visualize where heat is escaping from leaky windows or an overloaded electrical circuit.

The HyperCam uses both visible and near-infrared parts of the spectrum, illuminating a scene with 17 different wavelengths and generating an image for each.
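Conceptually, those per-wavelength captures stack into a hyperspectral “cube” with one image plane per band. The sketch below illustrates that idea only; the wavelength values and array shapes are placeholders, not the HyperCam's actual specifications:

```python
import numpy as np

# Illustrative band centers (nm) spanning visible to near-infrared;
# the HyperCam's actual 17 wavelengths are not specified here.
wavelengths = np.linspace(450, 990, 17)

def build_cube(band_images):
    """Stack one grayscale frame per wavelength into an (H, W, bands) cube."""
    return np.stack(band_images, axis=-1).astype(np.float32)

# Example: 17 synthetic 4x4 frames stand in for real captures.
frames = [np.full((4, 4), i, dtype=np.uint8) for i in range(17)]
cube = build_cube(frames)
print(cube.shape)  # (4, 4, 17)
```

Each pixel of the cube then holds a 17-value spectral signature rather than the usual three RGB values.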

Challenges of Data Volume and Bright Light

One challenge of this technique is sorting through the high volume of image frames the camera produces.

The UW software is designed to analyze the images and find the ones most different from what the naked eye sees, essentially zeroing in on images that the user is likely to find most revealing.

“It mines all the different possible images and compares it to what a normal camera or the human eye will see and tries to figure out what scenes look most different,” said lead author Mayank Goel, a UW computer science and engineering doctoral student.
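One simple way to approximate that kind of mining, offered here as a sketch rather than the paper's actual algorithm, is to score each spectral band by how much its structure differs from a grayscale rendering of the RGB view, then sort the bands from most to least different:

```python
import numpy as np

def rank_bands_by_novelty(cube, rgb_reference):
    """Rank bands of an (H, W, B) cube by how much each differs from a
    grayscale rendering of the (H, W, 3) RGB reference image."""
    gray = rgb_reference.mean(axis=-1)  # rough stand-in for a normal camera's view

    def norm(img):
        # normalize so we compare structure, not absolute brightness
        s = img.std()
        return (img - img.mean()) / s if s > 0 else img - img.mean()

    g = norm(gray)
    scores = [np.mean((norm(cube[..., b]) - g) ** 2)
              for b in range(cube.shape[-1])]
    return np.argsort(scores)[::-1]  # most different band first
```

A band that merely re-renders the visible scene scores near zero, while a band showing structure the eye cannot see ranks near the top, which is the kind of image the user is likely to find most revealing.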

Another challenge is that the technology doesn’t work well in bright light.

The team plans to focus future research on addressing that problem, as well as on developing a version of the camera small enough to incorporate into mobile phones and other consumer devices.

“It’s not there yet, but the way this hardware was built you can probably imagine putting it in a mobile phone,” said Shwetak Patel, a professor of computer science and electrical engineering at the UW.

Consumer Applications

The UW engineers, working with a team from Microsoft Research, wanted to determine if they could design a relatively simple and affordable hyperspectral camera for consumer use.

The team sees consumer-level applications in biometrics and food safety, and chose these two areas as directions for its research.

When the HyperCam captures images of a person’s hand, for example, the image shows fine details of skin surface texture, as well as the pattern of veins underneath the skin.

Both patterns are unique to the individual, which led the team to see applications related to gesture recognition, biometrics and identification.

Compared to an image taken with a normal camera (left), HyperCam images (right) reveal detailed vein and skin texture patterns that are unique to each individual. (Image courtesy University of Washington.)

As a preliminary investigation into the HyperCam as a biometric tool, the researchers imaged the hands of 25 different people. The team reported that the system was able to differentiate between hand images of individuals with 99 percent accuracy.
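The article doesn't describe how that accuracy was evaluated. One common way such a figure is computed, sketched here purely as an illustration (the feature vectors and matching rule are assumptions, not the paper's method), is leave-one-out nearest-neighbor matching over per-image feature vectors:

```python
import numpy as np

def nearest_neighbor_accuracy(features, labels):
    """Leave-one-out 1-nearest-neighbor accuracy: each image is matched
    to its closest other image by Euclidean distance, and we count how
    often the match belongs to the same person."""
    X = np.asarray(features, dtype=float)
    y = np.asarray(labels)
    correct = 0
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf  # exclude the image itself
        correct += y[np.argmin(d)] == y[i]
    return correct / len(X)
```

With spectral signatures that cluster tightly per person, as the HyperCam's vein and texture patterns appear to, this kind of matcher can reach very high accuracy.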

The second test involved imaging fruit to determine ripeness or detect the start of rotting hidden beneath the fruit’s skin.

The team took hyperspectral images of 10 different fruits, from strawberries to mangoes to avocados, over the course of a week. 

The HyperCam images predicted the relative ripeness of the fruits with 94 percent accuracy, compared with only 62 percent for a typical camera.

Images taken with HyperCam predicted the relative ripeness of 10 different fruits with 94 percent accuracy, compared with only 62 percent for a typical (RGB) camera. (Image courtesy University of Washington.)

Imagine grocery shopping with technology like the HyperCam in something as ubiquitous as a smartphone, enabling shoppers to check the quality and ripeness of fresh produce before buying.

This could reduce waste by catching over-ripeness that would otherwise go unnoticed until the produce spoils.

Existing Industries Will Benefit, Too

Of course, consumers wouldn’t be the only ones to benefit from the HyperCam technology. Industries that already use the current, expensive technologies will likely be eager to embrace a smaller and more affordable hyperspectral imaging camera.

We may see more satellites equipped with these camera systems thanks to their lower cost and weight.

Small farmers could scan their crops for plant health or disease outbreaks as easily as carrying a cell phone into the fields.

Mining and mineral surveying operations wouldn’t need to risk damage to expensive imaging equipment in excavation sites and harsh environments.

We may even see handheld chemical and environmental emissions scanners available for anyone to use to check their surroundings.

Or will this technology be adapted for surveillance applications, potentially able to identify individuals based on minute details in skin texture or other patterns revealed from beneath the skin?

For details on the HyperCam, the team’s paper is available here.