Advances in Machine Vision Enable Automation of Quality Inspections

Top left: Anant Patel, senior product manager-technical, Amazon Web Services. Top right: Dan Pipe-Mazo, CTO, Elementary Robotics. Bottom: Dat Do, head of machine learning, Elementary Robotics. (Images courtesy of Association for Advancing Automation.)

During the Association for Advancing Automation (A3) Vision Week in June, experts from Amazon Web Services and Elementary Robotics weighed in on the traditional challenges organizations face when using machine vision. They discussed how to incorporate the latest advances, including the cloud, to make the process easier and faster, and to solve seemingly unsolvable quality inspections.

Difficult quality inspections across industries have traditionally been performed manually. While it may be easy to put a person at the end of a production line, humans are inherently subjective and prone to error. Machine vision has proven itself a valuable tool to address those issues while also lowering the cost of inspection.

“The industry has shifted to machine learning for some challenging problems,” said Dan Pipe-Mazo, CTO at Elementary Robotics. “We might have an inconsistent product where it’s hard to quantify or qualify rules, or only limited defect examples with which to train a system. A challenging product means a challenging configuration. With machine learning, it is no longer a rules-based configuration.”

Unfortunately, machine vision systems are usually purpose-built and work well only for certain use cases. If a different defect type or production line is introduced, the entire system requires reprogramming or recalibration, increasing up-front costs and limiting scalability. For example, an automotive customer has inspection points and processes spanning stamping, welding, painting and overall end inspection. Each station often has its own QA process, with different defects and approaches.

For businesses seeking to incorporate machine vision, these varied inspection points raise the question of how to build a system that can scale across the different inspection types while keeping costs in check. This often boils down to three key challenges: configuring, running and maintaining models.

Challenges with Machine Vision

Traditionally, machine learning requires thousands of images to find defects reliably. Gathering them often means engineers traveling on site to capture images and upload them for training. Because the images are hard to produce, requiring equipment set up at the right angles and with the proper lighting, this process takes significant time.

Monitoring and maintenance bring their own complexities, often requiring reconfiguration when nonstandard or difficult variations appear. If the product looks like the initial images, the system will typically perform fine. Yet any slight variance, such as changed lighting, a color tint or a bumped camera, can keep the model from performing as it did in its original training. Making those quick, corrective actions again requires someone on site.

Recent advances have helped solve the challenges of configuring, maintaining and monitoring models for optimal performance.

“We can have an IoT-connected, cloud-based machine learning platform,” said Pipe-Mazo. “We are leveraging all those technologies to mitigate these challenges. Also, on configuring, by leveraging the IoT cloud, we no longer need to make a trip to the camera to configure and set up. We can do that all remotely. With a cloud-based system, you can be constantly ingesting and monitoring data to take quick action.”
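To illustrate what that remote setup can look like, here is a hedged sketch that pushes new camera settings to a hypothetical cloud endpoint; the URL, field names and lack of authentication are placeholders for illustration, not Elementary Robotics’ actual API.

```python
import requests

# Hypothetical camera settings; field names are illustrative only.
config = {
    "exposure_ms": 12.5,    # shutter time per frame
    "gain_db": 4.0,         # sensor gain
    "trigger": "hardware",  # fire on an external line trigger
}

# Placeholder endpoint standing in for an IoT/cloud device API.
resp = requests.put(
    "https://api.example.com/cameras/cam-01/config",
    json=config,
    timeout=10,
)
resp.raise_for_status()  # fail loudly if the update was rejected
print("Camera reconfigured without an on-site visit.")
```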

Advanced Solutions

Amazon Web Services (AWS) launched Amazon Lookout for Vision in February with the goal of making scalability easier.

“We know that historically, machine learning takes hundreds if not thousands of images to identify defects at the right scale,” said Anant Patel, senior product manager-technical at AWS. “Our lower bar is only 30 images. It’s a great way for customers to get started, see how the models are working and decide if they need more images from there. Running: if using a third party, you have to buy purpose-built cameras up front that you have to calibrate. Maintaining: environmental conditions are different and change. Being able to maintain and improve models over time is critical to long-term success and reducing operational costs.”

Amazon Lookout for Vision is an easy-to-use, cohesive service that analyzes images using computer vision and machine learning to detect defects and anomalies in manufactured products. With as few as 30 images, customers can quickly spot manufacturing and production defects and prevent costly errors from moving down the line.
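As a concrete illustration, the hedged sketch below sends an image to an already-trained Lookout for Vision model through the AWS SDK for Python (boto3); the project name, model version and image file are placeholders, and a model must be trained and started before detect_anomalies will succeed.

```python
import boto3

# Placeholder project/model names; a trained model must already be
# running (via start_model) for this call to work.
client = boto3.client("lookoutvision", region_name="us-east-1")

with open("part_0001.jpg", "rb") as image:
    response = client.detect_anomalies(
        ProjectName="widget-inspection",
        ModelVersion="1",
        Body=image.read(),
        ContentType="image/jpeg",
    )

result = response["DetectAnomalyResult"]
print("Anomalous:", result["IsAnomalous"])
print("Confidence:", result["Confidence"])
```

The boolean verdict and confidence score can then drive downstream logic, such as diverting the flagged part for operator review.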

Amazon Lookout for Vision enables customers to create, run and maintain a machine vision inspection platform with ease and minimal up-front costs. (Image courtesy of Amazon Web Services.)

According to Patel, key benefits of Amazon Lookout for Vision include speed of deployment, along with the ability to handle diverse conditions and incorporate different use cases.

“By allowing the same foundational science to be used across different use cases, you are training a custom model based on the set of images you bring in. You can then configure product type and defect for each specific use case,” Patel said. “It isn’t going to solve everything, but once that anomaly is flagged: How do we improve decisions for an operator? How do we improve for kicking off that defective product so it never gets to the end user? Do I need to rework? Do I need to scrap?”

Leveraging these technological advances is enabling Elementary Robotics to solve problems that were once unsolvable, such as detecting a slight color variance that traditional tools would miss.

“Using a color filter, we can set the boundaries just right, so [for example] it’s not picking up any granola but picking up a slight piece of debris,” said Dat Do, head of machine learning at Elementary Robotics. “If we apply those settings to brown [debris], it’s not able to pick it up because there is not sufficient contrast. When we look at a learning-based detection method, it finds it quite effectively even though there is not much contrast. The reason is that we can key in on shape and texture in addition to color.”
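The color-filter approach Do contrasts against can be sketched in a few lines of OpenCV; the synthetic image and HSV thresholds below are illustrative stand-ins, not Elementary Robotics’ production settings.

```python
import cv2
import numpy as np

# Synthetic "granola" background (tan, in BGR) with one dark speck of debris.
img = np.full((200, 200, 3), (60, 120, 180), dtype=np.uint8)
cv2.circle(img, (100, 100), 4, (30, 30, 30), -1)

hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Keep only very dark pixels: the tan background (value ~180) falls
# outside this range, so the mask isolates the debris.
mask = cv2.inRange(hsv, np.array([0, 0, 0]), np.array([180, 255, 60]))
print("Debris pixels found:", cv2.countNonZero(mask))
```

A dark speck separates cleanly on brightness alone, but brown debris on brown granola would land inside the same HSV range as the product itself, which is exactly where a learned model that also weighs shape and texture takes over.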

Along with requiring fewer images, the power and computational capacity of the cloud allow for training on only good images. That dataset of good images is mapped into a feature space that the neural network learns. When a bad image arrives, the network maps it far away from that space, making it easier to determine whether a part is good or bad.
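A minimal sketch of that good-only idea, assuming a generic pretrained backbone and hypothetical image file names (an illustration of the technique, not either vendor’s actual model):

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Generic pretrained backbone as the feature space; replacing the final
# layer with Identity exposes the 512-d pooled embedding.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

prep = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(path: str) -> torch.Tensor:
    with torch.no_grad():
        return backbone(prep(Image.open(path).convert("RGB")).unsqueeze(0))[0]

# Hypothetical file names: 30 images of known-good parts, nothing else.
good = torch.stack([embed(f"good_{i:02d}.jpg") for i in range(30)])

def is_anomalous(path: str, threshold: float = 10.0) -> bool:
    # Distance to the nearest known-good embedding; far away means defect.
    distance = torch.cdist(embed(path).unsqueeze(0), good).min().item()
    return distance > threshold
```

The threshold would be tuned on held-out good images; anything that lands far from the learned “good” region gets flagged.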

Use Cases

While surface issues such as scratches and holes can be seen by the human eye, shape deviations, missing components or process issues can easily go undetected.

One of Patel’s use-case scenarios included helping GE Healthcare with scalability in process control. Although the company builds CT and MRI machines on a small scale, the inspections must be of the highest quality. Different objects are placed on a machine, scans are run, and analysis is then carried out on up to 3,000 images per screen. Traditionally, an individual would sit and review all 3,000 images to verify that there were no defects. By automating the process, the operator, a subject-matter expert, can focus on the specific defects that are flagged. If a flagged item turns out not to be a defect, the system can be retrained, boosting confidence that it will catch future defects.

“New advancements like self-supervised training allow [us] to initialize our neural network weights to a good place,” Do said. “We have a neural network with millions of parameters, so it needs lots of data to see where to set network weights. We can create a pretext task: remove color from images, feed in black and white, and train to reproduce the color images. We can take those weights and train on the actual task. In this case, we used only one image per class and five images in total. We were able to achieve an accuracy of 99.87 percent on an entire dataset of 700 images.”
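A compact sketch of that colorization pretext, with a toy architecture and random stand-in data (placeholders for illustration, not Elementary Robotics’ network):

```python
import torch
import torch.nn as nn

# Encoder shared between the pretext task and the downstream classifier.
encoder = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
)
color_head = nn.Conv2d(64, 3, 3, padding=1)  # predicts the RGB image

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(color_head.parameters()), lr=1e-3
)
loss_fn = nn.MSELoss()

for step in range(100):
    rgb = torch.rand(8, 3, 64, 64)            # random stand-in "images"
    gray = rgb.mean(dim=1, keepdim=True)      # strip the color channel
    pred = color_head(encoder(gray))          # try to restore the color
    loss = loss_fn(pred, rgb)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Downstream: keep the pretrained encoder, swap the head for a classifier
# and fine-tune on the handful of labeled inspection images (e.g., 5 classes).
classifier = nn.Sequential(
    encoder, nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 5)
)
```

Because the pretext task needs no labels, the encoder can learn useful structure from plentiful unlabeled images before seeing the tiny labeled set.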

Conclusion

From finding a small piece of debris in granola to detecting a bottleneck in a production line, machine vision has come a long way. For the innovators behind the scenes, the ultimate goal is to make incorporating these advances easier and more seamless.

“Our intention is to make it simple to use for anyone, from nontechnical users to machine learning experts,” Patel said. “It is a simple process, and it is a fully managed service, so everything can be done in the console. If you have existing images, you can use them. Once you bring them in, we offer the ability to label [an item] as normal or anomaly directly in the console. You can train models and get evaluation results, which allow you to determine if you have enough data.”
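For teams that prefer scripting over the console, the same project-dataset-train workflow can be sketched with boto3; the project, bucket and manifest names below are placeholders.

```python
import boto3

lfv = boto3.client("lookoutvision", region_name="us-east-1")

# 1. Create a project to hold datasets and model versions.
lfv.create_project(ProjectName="widget-inspection")

# 2. Attach a training dataset from a labeled manifest in S3
#    (labels can also be applied in the console, as Patel notes).
lfv.create_dataset(
    ProjectName="widget-inspection",
    DatasetType="train",
    DatasetSource={
        "GroundTruthManifest": {
            "S3Object": {"Bucket": "my-bucket", "Key": "manifests/train.manifest"}
        }
    },
)

# 3. Train a model version; evaluation metrics (precision, recall, F1)
#    are available via describe_model once training finishes.
lfv.create_model(
    ProjectName="widget-inspection",
    OutputConfig={"S3Location": {"Bucket": "my-bucket", "Prefix": "model-output/"}},
)
```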

Incorporating the cloud puts a dashboard in anyone’s hands without the added time of going on site, and it gives subject-matter experts back the time to annotate anomalies and lend their expertise, further increasing the flexibility and scalability of quality inspections. Ease of use has also greatly improved, making the next era of machine vision a reality for businesses of any size or scope.

To learn more about machine vision and its uses, check out The Present and Future of Machine Vision and Imaging and How Machine Vision Applications Are Advancing AI in Medical Imaging.