Game Over – Humans No Match for AI, But What About Engineering?

Can artificial intelligence help us with simulation?

As humans concede games to AI, the more substantial challenge of engineering awaits, and it is no game. Can AI be used to solve serious real-world problems like simulation? With AI a buzzword, engineering software vendors are raring to get it into their products, but is the technology up to the task?

We’re not playing games here. AI has bested humans in strategy games, even the very complicated game of Go. But simulation, such as this large-scale deflection of an automobile, is so much more complicated. Will AI be of any help? (Pictures courtesy of Wikipedia.)

Artificial intelligence, or AI, was the subject of a freewheeling discussion at the annual Congress on the Future of Engineering Software (COFES). A room crowded with software intelligentsia roiled with thoughts about AI, debating its applications to simulation and the state of the art in AI and machine learning.

[No AI was used to write this article. Following are nuggets from the session and riffs off them. Apologies for lack of proper attribution to those quoted or paraphrased. I couldn’t get all the names of people in the room so I’ve used none. I’m only human. -Ed.]

Low-Hanging Fruit Picked

The first question we should ask before we do an analysis is whether an analysis is needed at all. Then, and only then, can we see if AI can be applied to it.

In its current state, AI may be good at some things but not others. It has been successful at determining who is a low-risk borrower when banks make loans. But can it do something as complicated as analysis? The consensus was that it could not. What would the architecture of such software even look like?

“AI is a function that takes one set of data and makes another set of data.” If you want to take a credit card transaction and determine whether it is fraudulent, that is a clean-cut problem. Facial recognition is another.
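To make that "data in, data out" notion concrete, here is a minimal sketch of a fraud classifier, assuming a scikit-learn-style workflow; the transaction features, numbers and threshold behavior are invented for illustration and were not part of the session.

```python
# A toy illustration of "AI is a function that takes one set of data
# and makes another set of data": transaction features in, fraud flags out.
# The feature names and values are made up for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [amount_usd, hour_of_day, distance_from_home_km]
X_train = np.array([
    [12.50,   9,   2.0],
    [8.99,   14,   1.0],
    [950.00,  3, 800.0],
    [1200.0,  2, 650.0],
])
y_train = np.array([0, 0, 1, 1])   # 0 = legitimate, 1 = fraudulent

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)        # learn the mapping from data to data

X_new = np.array([[875.0, 4, 720.0]])
print(model.predict(X_new))        # likely prints [1]: flagged as fraud
```

The point is not the particular model but the shape of the problem: a small, fixed set of inputs mapped to a single, well-defined output.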

And so it went… a sentiment that the problems AI has solved so far were the easy ones, leaving the seemingly unsolvable behind. Like simulation.

The easy problems involve simple pattern recognition, with the number of variables on the order of 10. But in a stress analysis, the number of variables can equal the degrees of freedom in the mesh. That's millions of variables!
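To put rough numbers on that claim, here is a back-of-the-envelope count, assuming a structural mesh in which each node carries three translational degrees of freedom; the node count is hypothetical, not a figure from the discussion.

```python
# Rough degrees-of-freedom count for a structural finite element mesh.
# The node count is illustrative; full-vehicle crash models commonly
# run to millions of nodes.
nodes = 2_000_000        # hypothetical node count for a full-vehicle mesh
dof_per_node = 3         # x, y, z translations (more if rotations are carried)

total_dof = nodes * dof_per_node
print(f"{total_dof:,} unknowns")   # -> 6,000,000 unknowns
```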

Even if the analysis is done, now what? We still need to make changes to a design based on the result. This is also where AI should step in. But again, the question remains… can it?

Design Space Exploration

With shape optimization, adversarial forces clash–the load side is trying to break the system while the structure is trying to save it.

Figure 1. Is this AI? General Motors used Autodesk generative design for this lightweight alternative assembly. (Picture courtesy of General Motors.)

Though it's been argued that algorithms that create all manner of shapes to satisfy given criteria (generative design), or that whittle away at a block until an "optimum" shape is reached (topology optimization), are examples of AI, they work with rules supplied by humans rather than rules they derive themselves, so they don't exactly fit the definition supplied previously.

What is machine learning (ML)? In traditional programming, you have inputs and rules, and the program produces outputs. With ML, you give the software the inputs and the outputs, and the software comes up with the rules.
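The contrast can be sketched in a few lines of Python. The first function encodes a rule written by a human; the second hands a model the inputs and outputs and lets it infer the rule. The yield-stress rule and the data points are invented for illustration, not taken from the session.

```python
# Classical programming: a human supplies the rule.
def is_overloaded(stress_mpa: float, yield_mpa: float = 250.0) -> bool:
    return stress_mpa > yield_mpa          # the rule, written by a person

# Machine learning: supply inputs and outputs, let the software find the rule.
from sklearn.tree import DecisionTreeClassifier

stresses = [[100.0], [200.0], [260.0], [400.0]]   # inputs
labels   = [0, 0, 1, 1]                           # outputs (0 = ok, 1 = overloaded)

learned_rule = DecisionTreeClassifier().fit(stresses, labels)
print(learned_rule.predict([[300.0]]))    # -> [1]; the threshold was inferred, not written
```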

This can become a “garbage in, garbage out” situation. A researcher told of a year-long effort with AI that produced little result: the software could generate positive results in only the most trivial cases. The problem may have been feeding the software bad data, the researcher surmised. The project ended with the team manually manipulating the data to get good results. Kind of like spoon-feeding a baby.

Letting Go

When real data proves to be too much, we return to games. It may be that people now have to surrender every game they have invented to computers. From tic-tac-toe, AI hopped over checkers and checkmated chess, all to reach its current height with Go, the 2,500-year-old Chinese game said to be the most complicated moving-piece game in the world.

AlphaGo was not a computer but a software program from DeepMind, Google's AI subsidiary. A hundred AI developers and data scientists banded together in 2016 to defeat the world's best Go player, a 33-year-old Korean, Lee Sedol, whose legendary prowess at Go and superiority over other players made him the LeBron, Tiger or Lance of his field. AlphaGo had already practiced on the European champion, destroying him in five straight games.

Lee Sedol may have thought it would be an easy million bucks, the prize Google dangled to draw out victims, but he thought wrong. AlphaGo won the first three games, clinching the series. Sedol won the fourth, saving face for Team Humans, before AlphaGo took the fifth to finish 4–1.

Go, played with black and white stones, is simple enough to learn, but the number of legal board positions (about 2.08 × 10^170) was thought to put the game beyond the reach of computing. Yet AlphaGo, trained on millions of positions from recorded human games and on games it played against itself, was able to trounce human Go masters. But as complicated as moving-piece games may be, they are still based on a small set of rules. Randomness plays no part. Chess is modeled after real warfare, but its rules can portray only a limited version of all that can happen on a battlefield. While the total number of possibilities can quickly overwhelm mortals, a computer with enough speed and memory is more than a match.

A real battlefield, in comparison to these games, offers so much variation that even a supercomputer could only hope to model a single foot soldier, if that.

Still, the public, and even the tech intelligentsia, considers mastery of games credible proof of the superiority of AI over HI (human intelligence), and takes it to suggest AI's inevitable place in every aspect of our working and non-working lives. It's easy to forget that it took years before AI was able to tell a cat from a cactus, never mind one cat from another, or to recognize what constitutes a human face (something a baby can do at a few months old).

AI researchers busy themselves with cats because they have lots to work with: vast image banks. Cats are among the most photographed things on Facebook. But the data is 2D. If that is all AI can manage, then humans afraid of losing their jobs, or of a Skynet apocalypse, can sleep well. Engineering data is far more complex. Not only is it 3D, it changes with time. AI would have to deal with a dynamic data set, a live performance, a streaming video feed, not a series of photos. There may be a fairly large dataset for gripping shapes, too, as robot manipulation is a much-studied problem. But there is no large dataset of FEA models solved for stress, for example. That leaves little for machine learning systems to feed on, to grow and learn from.

Will a digital twin fix that? One might interpret the data streaming off real-life parts and sensors as fulfilling the need for simulation data. However, while working designs (digital twins) could fill data lakes, those lakes are walled off, private. No company wants to share that data. What is needed is a data sea, where all can ply the waters, taking what they need.

Now, who is going to set that up?