Overmatch Capability for the Warfighter: Thermal Vision and Infrared Imaging

US Secretary of Defense James Mattis has stated emphatically, “I am committed to improving the combat preparedness, lethality, survivability and resiliency of our nation’s ground close-combat formations.” While the US military still maintains an edge among the world powers in terms of military might, the gap has narrowed compared with previous decades. To achieve “overmatch” capability, today’s military must not only continuously improve its talent, it must also modernize its equipment and technology.

One area of technology that the military is modernizing for overmatch capability is night vision, specifically the ability to fight in low light or no light. The warfighter has traditionally relied on Image Intensified technology, and that technology continues to be key, but soldier systems that use infrared sensors across multiple wavelengths have seen, and continue to see, significant technological improvements.

Image Intensified Devices: How Does Non-Infrared Night Vision Work?

Traditional Image Intensified (I2) devices convert photons from very low levels of visible light into electrons, amplify those electrons, and then convert the electrons back into photons of light. Photons from a low-light source enter an objective lens, which focuses an image onto the photocathode of the intensifier tube. As the incoming photons bombard it, the photocathode releases electrons through the photoelectric effect.

The electrons accelerate through a high-voltage potential into a microchannel plate (MCP). Each high-energy electron that strikes the MCP causes the release of many electrons from the MCP in a process called secondary cascaded emission. The MCP is made up of thousands, or even millions, of microscopic conductive channels, angled to enable more electron collisions and allowing the emission of secondary electrons in a controlled electron avalanche. Essentially, it’s a light amplifier, but it requires at least some light to function.
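To make the amplification chain concrete, the short Python sketch below models the photon-to-photon gain path through an I2 tube. The photocathode quantum efficiency, MCP gain and phosphor yield figures are illustrative assumptions made for the sake of the example, not specifications for any real device.

```python
# Minimal sketch of the photon-to-photon gain chain in an I2 tube.
# All figures are illustrative assumptions, not specifications for a real device.

def i2_output_photons(input_photons: float,
                      photocathode_qe: float = 0.25,  # fraction of photons converted to electrons (assumed)
                      mcp_gain: float = 10_000.0,     # electrons out per electron in (assumed)
                      phosphor_yield: float = 20.0    # photons emitted per electron at the screen (assumed)
                      ) -> float:
    """Estimate photons reaching the eyepiece for a given number of input photons."""
    electrons = input_photons * photocathode_qe   # photoelectric effect at the photocathode
    multiplied = electrons * mcp_gain             # secondary cascaded emission in the MCP
    return multiplied * phosphor_yield            # electrons converted back to light at the phosphor

# Roughly 100 photons of starlight entering the objective lens:
print(f"{i2_output_photons(100):,.0f} output photons")  # 5,000,000 with these assumptions
```

With those assumed numbers, about 100 photons of starlight become several million photons at the eyepiece, which is why a scene that looks black to the naked eye appears visible through the goggles.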

In this diagram of an I2 device, photons from a low-light source enter the objective lens (left) and strike the photocathode (gray). The photocathode releases electrons through the photoelectric effect. The microchannel plate (red) multiplies the number of electrons via secondary cascaded emission. These electrons then strike a phosphor screen (green), which produces photons of light that are viewed through the lenses on the right side. (Diagram source: Wikimedia Commons.)

Image Intensified Devices – Benefits and Drawbacks

I2 devices are workhorse sensors for the US warfighter and are used in nearly every branch of the military. They offer excellent night vision imagery under near-zero-light conditions, have an effective resolution of up to 13 megapixels, are relatively low cost, and consume very little power, which allows systems to operate for very long periods on small battery cells. The drawback is that the tube-based devices are large and bulky, and their analog output makes them challenging to adapt to the digital battlefield.

While significant work is being done on digital low-light sensors, substantial work remains to match tube-based performance at similarly low power and cost, and that may not be attainable in the near future. Currently, I2 devices are used by aviators, special forces and the Army in several configurations, including monocular goggles, systems fused with thermal sensors (e.g. Enhanced Night Vision Goggles), weapon sights, and more.

Infrared Sensors

Sensors in the infrared spectrum continue to evolve, and are contributing significantly to the warfighter’s overmatch capability.

Infrared sensors, and thermal sensors in particular, have been a key technology for the military for decades; however, limitations in the technology have been an obstacle to widespread use.

Sensors have traditionally been large, power-hungry and expensive. As a result, they were used only for the most demanding mission requirements, such as fire control or very long-range surveillance and targeting. However, recent advances have allowed wider adoption of infrared sensors into many more missions, and as the technology evolves further, this could prove to be a critical component of overmatch capability.

Shortwave, Midwave and Longwave Infrared Sensors

Infrared sensors are typically broken into categories based upon the wavelengths they detect. The infrared spectrum sits adjacent to the visible light spectrum, and both visible and infrared radiation travel at the speed of light. However, there are significant differences between infrared energy and visible light energy (what our eyes see).

The electromagnetic spectrum, showing the wavelength ranges of SWIR, MWIR, and LWIR radiation. (Image courtesy of FLIR)

For the most part, the infrared spectrum is broken into near infrared (NIR), shortwave infrared (SWIR), midwave infrared (MWIR) and longwave infrared (LWIR). A ‘thermal camera’ uses either an MWIR or an LWIR sensor and detects the thermal energy emitted by targets. Simply stated, it sees heat instead of light. This is a significant advantage over image intensified systems, because I2 devices need at least some ambient light to work. A thermal imaging camera needs zero visible light to operate, as it detects the thermal energy emitted by the targets in the scene.
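For reference, the small Python snippet below tabulates commonly cited wavelength ranges for these bands in micrometers. The exact boundaries vary by convention, so treat the values as illustrative rather than as an official standard.

```python
# Commonly cited wavelength ranges (in micrometers) for the infrared bands
# named above. Boundaries vary by convention; these are illustrative values,
# not an official standard. Visible light spans roughly 0.4-0.7 um.
IR_BANDS_UM = {
    "NIR":  (0.75, 1.4),   # near infrared
    "SWIR": (1.4, 3.0),    # shortwave infrared
    "MWIR": (3.0, 5.0),    # midwave infrared ("thermal")
    "LWIR": (8.0, 14.0),   # longwave infrared ("thermal")
}

for band, (low, high) in IR_BANDS_UM.items():
    print(f"{band}: {low}-{high} um")
```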

When thermal cameras were first developed and used for military applications, the sensors were primarily LWIR or MWIR sensors that needed to be cryogenically cooled to liquid nitrogen temperatures (77K, or about -196°C). They needed to maintain this temperature to increase thermal sensitivity and reduce noise. Initially, the systems used single-element or multi-element sensors or linear arrays, with mechanical scanning devices to “paint” a complete thermal image. The result was a very complex device that was large and bulky, had frequent maintenance issues and was very costly. The technology at that time made a truly portable system impossible.

These images were taken using thermal imaging devices. (Image courtesy of FLIR)

The Development of Practical, Portable MWIR Sensors

The technology evolved from scanning-based systems to staring systems with the advent of two-dimensional focal plane arrays in the early 1990s. Initially, these focal plane arrays were relatively low resolution (128x128 or 256x256 pixels) and each pixel was quite large, measuring 50μm or more. (For comparison, the pixel size of a typical consumer visible-light CMOS sensor is 1-3μm.) The sensors were initially MWIR and made of materials such as platinum silicide (PtSi) or indium antimonide (InSb). They still had to be cooled to 77K, but more advanced mechanical coolers were developed. It was a step in the right direction that yielded somewhat more portable systems, but these were still far from ideal.
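A quick back-of-the-envelope calculation, using the pixel counts and pitch figures quoted above, shows why those early focal plane arrays were physically large. The arithmetic below is simply pixel count times pixel pitch and ignores readout circuitry and packaging.

```python
# Back-of-the-envelope focal plane array sizing, using the figures quoted above.
# Die size here is simply pixel count x pixel pitch; readout and packaging
# area are ignored.

def array_side_mm(pixels: int, pitch_um: float) -> float:
    return pixels * pitch_um / 1000.0

# Early MWIR focal plane array: 256 x 256 pixels at 50 um pitch
print(f"256 x 256 at 50 um: {array_side_mm(256, 50):.1f} mm per side")  # 12.8 mm

# Same pixel count at a ~2 um consumer CMOS pitch, for comparison
print(f"256 x 256 at 2 um:  {array_side_mm(256, 2):.2f} mm per side")   # 0.51 mm
```

At a 50μm pitch, even a modest 256x256 array is roughly 13mm on a side, about 25 times wider than the same pixel count at a 2μm consumer CMOS pitch.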

The Need for LWIR Technological Breakthrough in the 1990s

MWIR is well suited to many applications, but not ideal for all of them; LWIR is preferred for many terrestrial military applications. Targets at the colder temperatures typical of terrestrial scenes emit more flux in the LWIR band, and LWIR also has a relatively clear transmission path through the atmosphere. Most importantly, LWIR penetrates smoke, dust and battlefield obscurants better than MWIR. In the 1990s, however, there was no practical LWIR solution because there were no practical two-dimensional LWIR sensors. As a result, the military was limited to scanned devices for LWIR.
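The point about flux at colder temperatures can be illustrated with Planck’s law: a scene near ambient terrestrial temperature (roughly 300K) emits its peak radiance in the longwave band. The sketch below is an illustrative calculation, not something drawn from the article, comparing representative MWIR and LWIR wavelengths.

```python
# Illustrative comparison of blackbody radiance from a ~300 K scene at
# representative MWIR and LWIR wavelengths, using Planck's law and Wien's
# displacement law. Not drawn from the article.
import math

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
K = 1.381e-23   # Boltzmann constant, J/K

def planck_radiance(wavelength_um: float, temp_k: float) -> float:
    """Spectral radiance in W / (m^2 * sr * m)."""
    lam = wavelength_um * 1e-6
    return (2.0 * H * C**2 / lam**5) / (math.exp(H * C / (lam * K * temp_k)) - 1.0)

T = 300.0              # roughly ambient terrestrial temperature, in kelvin
peak_um = 2898.0 / T   # Wien's displacement law (peak wavelength in um)
print(f"Emission peak at {T:.0f} K: ~{peak_um:.1f} um (inside the 8-14 um LWIR band)")

print(f"Radiance at  4 um (MWIR): {planck_radiance(4.0, T):.2e} W/(m^2*sr*m)")
print(f"Radiance at 10 um (LWIR): {planck_radiance(10.0, T):.2e} W/(m^2*sr*m)")
# With these numbers, the LWIR sample is roughly an order of magnitude higher,
# consistent with LWIR sensors collecting more signal from cool scenes.
```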

Stay tuned for the next article in this three-part series to learn how microbolometers work and more. (Image courtesy of FLIR)

In the late 1990s, microbolometer technology began to solve the problem of LWIR impracticality in military applications. We’ll explain how advances in microbolometer technology have allowed wider adoption of LWIR sensors in more warfighter applications in our next article, coming soon.

For more information about infrared imaging and sensors, check out FLIR Systems.


FLIR Systems has sponsored this post.