Computers Use Machine Learning to Detect Radiation Damage Better Than Humans Do

Developing safe nuclear reactor materials depends on a critical, though tedious and time-consuming, task: sifting through electron microscopy images of materials exposed to radiation to identify radiation damage. This monotonous task has traditionally fallen to image-processing algorithms programmed to identify patterns in images that look like Jackson Pollock paintings.

Researchers at the University of Wisconsin-Madison and Oak Ridge National Laboratory may have found a faster and more accurate alternative: letting computers learn how to identify the damage by themselves.

“Human detection and identification is error-prone, inconsistent and inefficient,” said Dane Morgan, materials science and engineering professor. “Newer imaging technologies are outstripping human capabilities to analyze the data we can produce.”

Conventional image-processing algorithms rely on their human programmers to define an object. For example, defining a cat might involve coding that describes a long tail, four paws and whiskers. The task gets more complex when the programmer must also show the computer how to distinguish a dog’s tail or a raccoon’s whiskers from a cat’s.

Machine learning takes a different approach. It allows the computer to learn by itself what a cat looks like by using a program called a neural network, which mimics the human brain’s pattern recognition abilities. To teach a neural network to recognize a cat, scientists provide the computer with a collection of accurately labeled cat pictures. The neural network uses those pictures to build and refine its own model of what is, or isn’t, a cat.
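To make the idea concrete, here is a minimal, hypothetical sketch of that labeled-examples loop: a single-layer network (logistic regression, the simplest neural network) learns to separate toy 8×8 “images” of two classes using only labeled examples, with no hand-coded features. The synthetic data, network size, and training settings here are invented for illustration and are not the researchers’ actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    """Toy labeled dataset: class-1 'images' are brighter on average."""
    labels = rng.integers(0, 2, n)
    images = rng.normal(loc=labels[:, None] * 0.5, scale=1.0, size=(n, 64))
    return images, labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(images, labels, epochs=200, lr=0.1):
    """Refine the model by nudging weights to better fit the labeled examples."""
    w, b = np.zeros(64), 0.0
    for _ in range(epochs):
        p = sigmoid(images @ w + b)                 # predicted probability of class 1
        grad = p - labels                           # cross-entropy gradient
        w -= lr * images.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

X, y = make_data(1000)
w, b = train(X, y)
preds = (sigmoid(X @ w + b) > 0.5).astype(int)
accuracy = (preds == y).mean()                      # high accuracy, learned from labels alone
```

The key point is that the programmer never describes what either class looks like; the model infers the distinguishing pattern from the labeled examples.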


The researchers taught a neural network to recognize dislocation loops, a specific type of radiation damage that is difficult for humans to identify and quantify. They trained the neural network on 270 images, then paired it with a second machine learning algorithm, called a cascade object detector, to analyze new pictures of radiation damage.
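The cascade idea can be sketched as follows: a cheap first stage scans every window of the image and rejects most of them, and only the survivors reach a stricter second stage. This toy example (invented image, window size, and thresholds; not the researchers’ actual detector) plants one bright “loop” in a noisy micrograph-like array and finds it with a two-stage sliding-window scan.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic noisy "micrograph" with one bright 8x8 blob standing in for a loop.
image = rng.normal(0.0, 0.2, (64, 64))
image[20:28, 40:48] += 1.0

def detect(image, win=8, stride=4):
    """Two-stage cascade: cheap mean test first, stricter check on survivors."""
    hits = []
    for r in range(0, image.shape[0] - win + 1, stride):
        for c in range(0, image.shape[1] - win + 1, stride):
            patch = image[r:r + win, c:c + win]
            if patch.mean() < 0.8:       # stage 1: reject dim windows cheaply
                continue
            if patch.std() > 1.0:        # stage 2: confirm a uniform bright blob
                continue
            hits.append((r, c))
    return hits

hits = detect(image)                      # finds the planted blob at (20, 40)
```

Because the first stage discards almost every window, the expensive confirmation step runs only a handful of times, which is what makes cascade detectors fast enough for large image sets.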

The computer identified 86 percent of the dislocation loops; human experts found only 80 percent.

Not only was the computer more accurate, but it also did the job much faster.

“We can now detect these loops like humans while doing it in a fraction of the time on a standard home computer,” said Kevin Field, a scientist at Oak Ridge National Laboratory.

Morgan and Field are working to expand on their success by teaching a new neural network to recognize different kinds of radiation defects. Eventually, they envision creating a massive cloud-based resource for materials scientists around the world to upload images for near-instantaneous analysis.

“This is just the beginning,” Morgan said. “Machine learning tools will help create a cyber infrastructure that scientists can utilize in ways we are just beginning to understand.”
