The Present and Future of Machine Vision and Imaging

(Image courtesy of Tech Crunch.)

Historically, the manufacturing industry’s approach to automation has been to look for a few key tasks that are good, high-value candidates for automation. The ideal task is dull, dirty and dangerous, and involves a repetitive, repeatable movement. For example, Unimate transported automotive die castings and welded them onto car bodies—applications that have continued to dominate traditional industrial robotic automation.

However, the progression of industrial robots in manufacturing has followed a similar path to many other technologies, such as machine vision and imaging. As costs come down, the technology finds its way into smaller, more niche applications. For example, collaborative robots have opened up new robotics applications, usually by being more user-friendly to program, less expensive and requiring a smaller footprint on the factory floor. The same effect is true of machine vision and imaging solutions for industrial automation.

At the recent A3 Vision Week virtual conference, engineering.com attended several exciting sessions concerning the trends in the machine vision and imaging market.

How Is Machine Vision Used Today?

At a basic level, vision allows a robot, or a human worker, to interact with the world in real time. For Eric Danziger, founder and CEO of Invisible AI, this simple fact is something that helps his team identify possible candidate tasks for automation in a manufacturing setting.

“The key thing is to look for an area where vision, even by people, is a key part. If you can use the vision system to enable the machine to understand what is really happening in the task, you can add even more value,” Danziger said. “We ask, ‘where can we find overworked people who use vision a lot, and add value?’”

According to Kimberly Matsinger, product marketing manager at Basler AG, a typical vision system in use today with an arm robot has a system flow—from input to perception to decision and finally action.

(Image courtesy of A3’s Vision Week.)

Input: The input of a robot vision system refers to the object to be picked or manipulated. Considerations include how the object is presented to the camera, whether it is stacked, sorted, known or unknown, etc.

Perception: This involves considerations such as camera type, resolution and lighting.

Decision: This involves the processing of the image, including identification of the object, gripping coordinates, measurement, pattern matching, optical character recognition (OCR), or code reading.

Action: The robot executes a move to pick, place, sort, assemble or record the data.
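The four-stage flow above can be sketched as a minimal pipeline. Everything here is a hypothetical placeholder, not any vendor's API: the function names, the fake camera frame and the stubbed decision logic are for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    x: float  # pick coordinate in the camera frame (illustrative units)
    y: float

def perceive(frame):
    """Perception: acquire and preprocess the camera image (stubbed).

    A real system would handle lighting correction, lens distortion, etc.
    """
    return frame

def decide(frame):
    """Decision: identify the object and compute gripping coordinates.

    Stand-in for pattern matching, OCR, or code reading on the image.
    """
    if frame.get("object") == "casting":
        return Detection("casting", frame["cx"], frame["cy"])
    return None

def act(detection):
    """Action: issue a pick command (stubbed as a string)."""
    if detection is None:
        return "no-op"
    return f"pick {detection.label} at ({detection.x}, {detection.y})"

# Input: how the object is presented to the camera (here, a fake frame).
frame = {"object": "casting", "cx": 120.0, "cy": 45.5}
command = act(decide(perceive(frame)))
```

The point of the structure is that each stage only consumes the previous stage's output, which is what lets real systems swap a stage (for example, moving the decision step to the cloud) without touching the rest.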

For mobile robots, which rely heavily on vision systems, this system flow is slightly different. The input for a mobile robot includes both the item to be manipulated and the environment to be navigated. The decision, or processing, may not be completed on board the robot, which instead leverages cloud-based AI. Mobile robots must also add movement and transportation to the list of actions performed by a typical arm robot.

For arm robots, cameras may be mounted on or off the arm. Off-arm mounting is generally preferable unless the application demands otherwise: off-arm systems benefit from more consistent lighting and simpler cabling. On-arm systems must calibrate the robot with the camera as a secondary payload and use cabling rated for the number of bending cycles the application requires.
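The calibration requirement for an on-arm ("eye-in-hand") camera can be sketched numerically: a point detected in the camera frame must be mapped through the arm's current pose into the robot base frame before a pick can be commanded. The poses and offsets below are purely illustrative, not a real calibration result.

```python
import math

def matmul4(a, b):
    """Multiply two 4x4 homogeneous transforms (row-major nested lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(t, p):
    """Apply a 4x4 homogeneous transform t to a 3D point p = (x, y, z)."""
    x, y, z = p
    v = [x, y, z, 1.0]
    return tuple(sum(t[i][k] * v[k] for k in range(4)) for i in range(3))

# Hypothetical pose of the arm's flange in the base frame:
# 90-degree rotation about Z, translated 0.5 m along X.
c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
base_T_flange = [[c, -s, 0, 0.5],
                 [s,  c, 0, 0.0],
                 [0,  0, 1, 0.0],
                 [0,  0, 0, 1.0]]

# Camera mounted 0.1 m ahead of the flange along its Z axis
# (this offset is what hand-eye calibration would determine).
flange_T_cam = [[1, 0, 0, 0.0],
                [0, 1, 0, 0.0],
                [0, 0, 1, 0.1],
                [0, 0, 0, 1.0]]

base_T_cam = matmul4(base_T_flange, flange_T_cam)
# An object seen 0.3 m in front of the camera, expressed in the base frame:
point_in_base = apply(base_T_cam, (0.0, 0.0, 0.3))
```

Because the camera rides on the arm, `base_T_flange` changes with every move, so this chain must be re-evaluated at each capture, which is the extra burden the off-arm mounting avoids.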

2D vision is more widely used than 3D vision. While 3D vision can be more data- and computing-intensive, it also adds depth sensing that can improve performance in some tasks. The most widely used 3D vision methods are time-of-flight sensing, structured light, and active or passive stereoscopic vision.
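As a rough sketch of the physics behind two of those methods: a time-of-flight sensor infers depth from the round-trip travel time of a light pulse, while stereoscopic vision triangulates depth from the disparity between two views separated by a known baseline. The parameter values below are illustrative, not taken from any particular sensor.

```python
C = 299_792_458.0  # speed of light in a vacuum, m/s

def tof_depth(round_trip_seconds):
    """Time-of-flight: depth = c * t / 2, since the pulse travels out and back."""
    return C * round_trip_seconds / 2.0

def stereo_depth(focal_px, baseline_m, disparity_px):
    """Stereo triangulation: depth = f * B / d.

    Closer objects shift more between the two views (larger disparity).
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A round trip of about 6.67 ns corresponds to roughly 1 m of depth.
d_tof = tof_depth(6.67e-9)
# 800 px focal length, 10 cm baseline, 40 px disparity -> 2 m.
d_stereo = stereo_depth(800.0, 0.10, 40.0)
```

The 1/d relationship in the stereo formula is one reason stereo accuracy degrades with distance, while time-of-flight error is roughly constant over range, a trade-off that drives which method suits a given task.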

A typical vision system today includes a robot, lighting, a camera (2D or 3D) and a controller, usually housed locally, which includes the HMI, PLC, robot control software and vision control software.

According to Matsinger, the most likely near-term shift in this technology stack is the offloading of much of the processing to the cloud or edge. As resolution and data gathering improve, more and more computing power is needed to process the resulting data quickly, especially as machine learning is applied to these solutions.

Industry Trends in Machine Vision and Imaging

This trend toward cloud computing was echoed by Danziger, who noted that the most significant bottleneck to performance his company currently faces in designing machine vision systems is computing power.

“‘Compute’ is the biggest bottleneck for us. When you are ‘compute-constrained,’ it doesn’t matter what your optics are,” he explained. “As your computing gets more powerful, you’re able to take advantage of better optics.”

This idea was echoed by Greg Hollows, vice president of Imaging at Edmund Optics. In the camera and imaging industry, the smartphone industry has been a massive consumer and driver of camera technology. Indeed, many industrial imaging solutions use smartphone cameras. According to Hollows, these solutions are a great entry point into working with machine vision and open the door to higher-quality optics and camera systems when users outgrow them.

Darcy Bachert, CEO of Prolucid, agreed. “Optics are fundamental—that’s where it all starts,” he said. “We’re seeing advances in optics driven by the convergence between cellphone technology and other factors. Computing is getting faster, and libraries and machine learning models are also enabling new things. The overall cost of data storage is coming down as well.”

Moving back toward manufacturing, the key trend the experts agreed on was the boom in e-commerce brought on by the pandemic, as well as key e-commerce players, such as Amazon. Manufacturers and logistics providers are more time-constrained than ever, even for high-mix and customized products.

Outside manufacturing, many of the experts, including Hollows, Danziger and Bachert, agreed that medical imaging and computer-aided diagnostics are a growing area for imaging and machine vision. Specifically, point-of-care devices and lab-on-a-chip opportunities require advanced cameras and optics to achieve the resolution required, and machine learning algorithms are already in use in diagnostic imaging, at least in the research lab.

Hollows pointed out that while consumers may have been wary of automation and cameras in the past, e-commerce, the pandemic and solutions like AR for home shopping may lead to greater societal acceptance of automation. Automation solutions that require vision, such as last-mile delivery robots or autonomous vehicles, may have received a public-image boost thanks to the pandemic.

Speaking of autonomous vehicles, the experts predicted big things for this fast-moving technology. As A3 moderator Alex Shikany pointed out, the number of motors was once a selling point for cutting-edge vehicles. One day soon, it may be the number of cameras. Hollows added, “If self-driving cars take off, that will be the third biggest camera application behind smartphones and security.”

Donato Montanari, vice president of Machine Vision at Zebra Technologies, highlighted the trend toward flexibility in hardware and in automation solutions generally. Software and cameras are becoming more flexible, and products now exist that combine the capabilities of a barcode scanner and a 2D machine vision camera. This lets engineers stick with one platform, one dashboard and one camera hardware product to handle multiple tasks, such as inspection, pick and place, barcode reading and OCR.

What’s in Store for the Future of Machine Vision?

In addition to Matsinger’s prediction about the importance of cloud and edge processing for complex computing to enable more powerful vision solutions, the other experts, including Hollows, Danziger and Bachert, had fascinating predictions for the future of this industry.

Danziger pointed to quantum computing and similar leaps in computing itself as future technologies that would open doors for new and exciting machine vision applications.

“We need some sort of phase change over and above the digital components of today,” said Danziger. “Much more powerful technology that uses less power than we use now.”

In addition, the experts are excited about how the growth of autonomous vehicles will drive innovation in machine vision.

Hollows noted a growing interest in wide-spectrum, hyperspectral and even ultraspectral—very narrow spectral band—imaging. For example, the vision systems of the future may capture not only visible light but also IR and UV, processing even more visual information than humans are capable of. There are still processing challenges with these technologies, but they open opportunities for new vision applications.

Bachert echoed this, noting certain non-destructive testing applications in use today that use ultrasound imaging, for example.

“We need new expertise in this industry to develop these new, exciting technologies,” said Bachert.

Along with spectra beyond visible light, Danziger noted other properties of light that can create opportunities for vision systems, such as detecting its polarization.

“Most vision systems are looking for A or B,” Danziger said. “For example, if all automotive windshields are polarized, detecting polarity could serve as a good backup system for an autonomous vehicle.”

It’s an exciting time in the field of machine vision and imaging, driven largely by the demands on manufacturing as e-commerce continues its march toward becoming the dominant way consumers buy products. In addition, industrial automation continues to become more flexible and ready for deployment in increasingly niche or redeployable applications, which demand flexible and easy-to-use vision solutions. Lastly, the autonomous vehicle industry is an exciting source of innovation for machine vision, as that entire endeavor largely hinges on the efficacy of cameras and the computing power, hardware, software and AI systems that enable them not just to see but also to understand the world around them.