When Disruption Comes to Simulation, Will GPUs Take Victory?

I don’t often have to pick my jaw up off the floor after a software demonstration. Engineering.com sees a lot of demos, and many impress, but most show incremental improvements on common themes or established technology—nothing revolutionary. The time I spent with John Thomas, president of M-Star Simulations, however, made me think hard about the long-term impact his company’s GPU-based CFD tool could have on the CAE industry.

The last time I sensed a revolution brewing was years ago, when Ansys demonstrated Discovery. For the first time, I saw lightning-fast, GPU-based 3D simulations. Though there were some bugs to work out, mainly around accuracy, it was clear to me that the future of simulation would be GPUs.

[Full disclosure: I am a previous employee of Ansys and retain a small number of shares.]


A porous media simulation produced using Ansys Discovery.


However, not much revolution has come to the simulation space since then. Sure, there have been considerable improvements and the occasional big news—our previous coverage has tracked several such developments.

Additionally, new features and physics have been added to Discovery. Mark Hindsbo, vice president and general manager of the Ansys Design Business Unit, said, “We continue to innovate and expand the functionality of Ansys Discovery, which has advanced significantly compared to the initial version launched in 2018. However, the focus has been, and will continue to be, on ease of use and solving general engineering use cases for a broad population of product designers, not on adding features that might only be usable by a small set of specific experts.”

He added, “Ansys Discovery embeds both the native GPU Ansys Live solver technology as well as the more classic CPU-based solvers that power our high-end tools, such as Ansys Mechanical and Ansys Fluent. At the same time, the GPU-based Live solver is getting more and more accurate, and more and more feature-rich. So many engineers both can and will take their workflows all the way from early-stage guidance to later-stage validation—and given our tools share the same solver technologies, the generalist can work easily with the specialist for more complex, higher-end analysis.”

Despite this effort to reach the general engineering use case, there haven’t been any tectonic shifts, and GPU computing isn’t an industry standard. It seems that without the accuracy—or, as Hindsbo suggested above, the perceived accuracy—of traditional CPU-based software, GPU-based CAE wasn’t taking off with the speed I expected. But when I saw M-Star, I started to believe, once again, in a future dominated by GPU-based CAE.

What Are the Benefits of GPU-Based Simulation?

Speed and ease of use tend to be the two biggest benefits of GPU-based CAE tools. The idea is that by getting near-instant results for each run, an engineer is quickly able to:

  1. Explore the design space with innovative or even out-of-left-field ideas.
  2. Narrow down the development options to optimize a design.
  3. Play around with the software and master it.

Hindsbo said, “Ansys Discovery is designed with ease of use as a key tenet, and you can become very proficient in its use in days if not hours. [It] is designed to be highly robust, automated and usable by engineers who are not necessarily simulation specialists.”

Thomas went a step further: “Legacy CFD tools are very important; they cut the path through the ’80s, ’90s and early aughts. But they are tools you need to be extensively trained on and focus your career on operating. M-Star is a tool that you can pick up in a few hours, and it can generate physics with a fidelity that exceeds what is typically solved using steady-state solvers like COMSOL, OpenFOAM or whatnot. It’s a perfect fit for people who want to augment their engineering practice with simulation, but not define their career as just a simulator.”

A simulation made using M-Star, depicting an HVAC system’s response to a circulation flow rate managed by a PID controller.

As for how the industry is attempting to democratize traditional CAE tools such as Ansys Fluent, Jeremy McCaslin, manager of Fluids Product Management at Ansys, offered some insight. He said, “Ansys Fluent earned its status as the industry-leading fluid simulation software for its advanced physics modeling capabilities and accuracy. Over the past few years, Ansys Fluent has focused on improving ease of use by creating streamlined workflows embedded with best practices for both meshing and physics setups. These improvements, along with the training courses available on the Ansys Learning Hub, have made Fluent more intuitive than ever.”

At this point you might be wondering: what about the cloud? Isn’t the cloud supposed to be the solution to the computational resource problem in CAE? Well, the truth is that we’re really talking about the same thing. As Thomas said, “Typically, a multiple GPU resource is a cloud-based resource.” A CAE tool that operates optimally on GPUs will also operate optimally on many cloud resources. The difference is that the cloud requires an internet connection, whereas GPU-based CAE software can run locally—even on a gaming computer from a big-box electronics store.

To summarize, GPU-based simulation is easy to pick up and fast at iterating on designs, and all it needs to operate is a local workstation with a good GPU. Traditional CAE tools provide higher accuracy, but at a much slower computational rate.

As McCaslin put it, “Engineering simulation is very much about finding the sweet spot in the speed versus accuracy trade-off to meet a given need. For a designer, speed is imperative—and that’s where Discovery sits. But as the hardware and software landscape evolves, Ansys’ focus is on delivering both speed and accuracy. Where we expose a given solution has and will depend on who our users are and what their needs are.”

What’s interesting here is that M-Star is claiming to offer speed and accuracy at the same time. So, what’s the catch? Where is the trade-off McCaslin speaks of?

What Sets M-Star Apart from Other Simulation Tools

For a deeper dive into what M-Star is, click here. But suffice it to say, M-Star is a Lattice Boltzmann, GPU-based CAE tool that offers internal CFD simulation at accuracies on par with Ansys Fluent (according to these studies) and at the speeds of Ansys Discovery. So, why wouldn’t M-Star be the best choice for all CFD situations? The answer is that it’s tailored to specific niche audiences.

“The trajectory of M-Star is informed entirely by clients’ needs and demands, not our competitors’ products or actions,” Thomas said. He further explained that many of his clients are in the pharmaceutical space, which he said is “an industry underserved by current vendors.” As a result, M-Star focuses on internal CFD models that serve that industry, such as dissolution. The tool has also caught the eye of other industries interested in internal fluid flow, such as oil and gas.

But at the end of the day, Thomas said, the goal isn’t to make M-Star a general simulation tool that is everything to everyone; the goal is to make strong inroads into these markets.

On the other hand, “Ansys Discovery and Ansys Fluent,” Hindsbo said, “use the finite volume method for CFD as this is more generalizable. Lattice Boltzmann is good for certain specialized or scientific simulation purposes but is too limiting for the broad set of applications to which our customers apply Discovery and Fluent.”

For Thomas, a focus on Lattice Boltzmann was part of the appeal behind M-Star. “Finite volume code, which has these big tridiagonal banded matrices, is inherently branchy. While some of it can be bent to fit on a GPU, and some aspects of it may port over, the entire code will never live natively and entirely on the GPU,” he said. “M-Star, which uses Lattice Boltzmann as its core, is trivially parallelizable. It doesn’t even need to be massaged to run on the GPU; it is native to the GPU. It’s not that they don’t recognize the speed; it’s that the nature of the algorithms used in that code limits its portability to more modern architectures.”
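
To make Thomas’ point concrete, here is a minimal sketch of why a Lattice Boltzmann update maps so naturally onto GPUs: a generic, textbook D2Q9 collide-and-stream step written in NumPy. This is illustrative only—not M-Star’s code—and the relaxation time, grid size and initial condition are arbitrary placeholders. The key property is that the collision at each lattice node uses only that node’s own data, so nothing prevents the same arithmetic from running as one independent GPU thread per node.

    import numpy as np

    # D2Q9 lattice: nine discrete velocities per node, with standard weights.
    c = np.array([(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
                  (1, 1), (-1, 1), (-1, -1), (1, -1)])
    w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

    def collide_and_stream(f, tau=0.6):
        """One BGK Lattice Boltzmann step on an (NX, NY, 9) array f.

        The collision at each node uses only that node's own populations --
        no neighbor coupling, no global matrix solve -- so on a GPU the same
        arithmetic can run as one independent thread per lattice node.
        """
        rho = f.sum(axis=2)                      # local density
        ux = (f * c[:, 0]).sum(axis=2) / rho     # local x-velocity
        uy = (f * c[:, 1]).sum(axis=2) / rho     # local y-velocity
        for i in range(9):
            # Relax each population toward its local equilibrium (BGK).
            cu = c[i, 0] * ux + c[i, 1] * uy
            feq = w[i] * rho * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * (ux**2 + uy**2))
            f[:, :, i] += (feq - f[:, :, i]) / tau
        for i in range(9):
            # Streaming: each population simply shifts to a neighboring node.
            f[:, :, i] = np.roll(f[:, :, i], shift=(c[i, 0], c[i, 1]), axis=(0, 1))
        return f

    # A 256 x 256 grid initialized to uniform density at rest.
    f = np.tile(w, (256, 256, 1))
    f = collide_and_stream(f)

By contrast, an implicit finite volume step must assemble and solve a large sparse linear system at every iteration—exactly the branchy, communication-heavy work Thomas argues resists a fully native GPU port.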

“Because we’re using a more powerful computer architecture, we can use better foundational transport algorithms,” Thomas added. “Out of the gate, we’re using LES and particle/bubble tracking, so this isn’t about how quickly I can minimize residuals. We’re using an approach that captures a spectrum of physics that doesn’t exist in RANS-based tools.”
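
For readers unfamiliar with the distinction Thomas is drawing: RANS time-averages the flow and models the entire turbulence spectrum, while LES resolves the large, energy-carrying eddies directly and models only the unresolved subgrid scales. The classic subgrid closure is a Smagorinsky eddy viscosity; here is a minimal 2D sketch using textbook defaults (the constant and the finite-difference details are generic choices on my part, not anything specific to M-Star’s solver):

    import numpy as np

    def smagorinsky_nu_t(u, v, dx, cs=0.17):
        """Smagorinsky subgrid eddy viscosity: nu_t = (cs * dx)**2 * |S|.

        u and v are the resolved 2D velocity fields on a uniform grid of
        spacing dx; |S| = sqrt(2 * S_ij * S_ij) is the magnitude of the
        resolved strain-rate tensor. cs ~ 0.1-0.2 is the textbook
        Smagorinsky constant (an assumed value, not an M-Star one).
        """
        dudx, dudy = np.gradient(u, dx)
        dvdx, dvdy = np.gradient(v, dx)
        s11, s22 = dudx, dvdy
        s12 = 0.5 * (dudy + dvdx)
        s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
        return (cs * dx)**2 * s_mag

Note that this closure depends only on local velocity gradients, so it, too, is node-local and data-parallel—consistent with Thomas’ argument about which algorithms thrive on GPUs.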

In summary, it appears that the trade-off that lets M-Star offer both accuracy and speed is its tight focus on niche applications.

Is M-Star Going to Disrupt the Simulation Space?

During Thomas’ demo, he went from simulating a mixing vat to simulating the same vat with 500,000 particles in the blink of an eye. Those results, appearing in near real time, made it easy to imagine an industry disruptor in the simulation space that is based on GPU technology.

A fully coupled, transient CFD-DEM simulation showing particle dissolution within an agitated tank, made using M-Star CFD.

We use “disruptor” sparingly at engineering.com. Though many marketers like to use that description, or something similar, it’s rarely true. Thomas himself did not use the term (which, if anything, is a point in his favor). Instead, he let the demo speak for itself. Though I think GPU-based simulation has a chance at disrupting the market (more on that later), is M-Star the software to do it? Let’s take a step back. I’ve learned to assess potentially disruptive technology with logic and without emotion, so let’s take that approach here.

On one hand, GPUs are a big part of the future of computation. As evidence, see the TOP500 list (as of this writing) of the most powerful commercially available computer systems. As Thomas puts it, “That’s the real sizzle: GPU-based scientific computing. And this isn’t a flash in the pan; the prevailing trends, whether it be the top 100 HPC or the local desktop computer resources, fully exploit GPU computing, and we’re riding that trend pretty hard with a good amount of success.”

Some more interesting evidence is that M-Star is growing at a fast clip, according to Thomas. “It’s taken off,” he said. “In the last few years, we’ve expanded offices across the U.S. and Europe. We’re getting ready to set up a support center in India and are contracting in the Far East. We license code on an annual term which means every year users choose to renew or not. And in five years of licensing the software, we’ve never had anyone not renew. In fact, most come back with more users to train. That’s a very encouraging sign.”

On the other hand, M-Star is a niche, stand-alone simulation tool for internal CFD simulations. I can see it dominating that niche market in the years to come. But niche simulation solutions cannot cover the full gamut of what engineers use CAE software for—the full spectrum of physics and interoperability with other engineering software and PLM systems, to name a few examples.

M-Star is independent. The company is working on linking its software to others in the engineering toolbox, but these are still early days; currently, users are limited to APIs and home-grown options. Until its abilities span the physics, multiphysics and other capabilities of an all-purpose simulation package, M-Star isn’t going to disrupt the enterprise players.

“As we mature and grow from our initial silo into more of an enterprise tool, there’s been increased interest in M-Star integrating with other software,” said Thomas. “So yes, we’re interested in doing this and are integrating some of those things now. But engineers are becoming increasingly specialized and tasked with modeling increasingly complex processes. This trend is driving end users away from one-size-fits-all modeling packages and towards software tailored to their application space. For most users, a streamlined modeling approach that solves a specific set of problems phenomenally well provides more utility than a cluttered framework that ‘seamlessly’ connects mediocre predictions in one area with another.”

Hindsbo strongly disagreed with Thomas’ thoughts on a trend away from multipurpose simulation software. He said, “The simulation market is a solidly growing and innovating market. There have been cycles of both fragmentation with new startups entering, as well as consolidations where Ansys has played a significant role as an acquirer. As a result, we continue to see the general-purpose software get richer and broader, solving many more use cases out of the box than just a decade ago. At the same time, the constant innovation both by our customers and by the simulation industry continues to create new opportunities both large and niche. So, we would expect the industry, as a whole, to continue producing new interesting startups and Ansys will continue to both organically innovate and acquire.”

He added, “We don’t see the industry fracturing into niche solutions any more or less than it is today. New innovative solutions will continue coming to market as we and our customers innovate. At the same time, the general-purpose software continues to be more and more capable, and our customers want to understand multiple and more complex aspects of their customers, not just one niche aspect.”

I happen to agree, in some respects, with both specialists. It is safe to say that there will always be a place for general-purpose simulation software, and there will always be those aiming to dig deep into complex multiphysics problems. But there is also plenty of evidence of multiple markets for niche simulation tools and apps. I would even argue that these apps might be a better path toward democratized simulation.

As for M-Star, I don’t believe it is a disruptor; I also don’t believe it wants to be. But should that be a salve for the CAE industry? No, it should be a warning.

What Does All This Evidence Point Towards?

“If a tool like Ansys could fully exploit GPUs, they would be running entirely on GPUs at this point,” Thomas said. “The top HPC are all GPU-based; this is the megatrend: massively parallel algorithms running on GPUs. Some aspects of Simcenter STAR-CCM+, COMSOL or similar may run on GPUs. But the majority of the code never will. As a consequence, there haven’t been any complete ports of those tools to GPU computing, despite the ship going towards GPU computing from the top down.”

Hindsbo disagrees. “It is both possible and already done. The Live solver utilizes the finite volume method for its CFD simulations, scales massively, and has proven to be both as fast and as scalable as Lattice Boltzmann on GPUs, without the significant limitations of Lattice Boltzmann in terms of applications.”

Regardless of who is correct at the end of the day, Thomas has a point. Much of the industry sees Discovery as a divining rod pointing toward optimal designs, which is one reason it hasn’t disrupted the market. And currently, other than M-Star (which isn’t broad enough), there isn’t much else in the way of GPU simulation technology.

However, the existence of these two software options suggests that a highly accurate, multiphysics, general-purpose simulation package running lightning-fast on GPUs is conceivable. Perhaps Discovery and M-Star are just the first steps toward it.

Again, Hindsbo offers a different view. “There is no doubt that GPU compute is an interesting new compute model option that in some instances brings order of magnitude or more advances for simulations. However, there are certain algorithms that still run better on classic CPU architectures. Longer term we see them more as complementary than exclusive, and we will probably have a hybrid world in simulation.”

McCaslin agrees with Hindsbo, but digs a bit deeper into the potential limitations of GPU technology. “The massively parallel nature of GPUs is more readily exploited by problems where the operations-to-data ratio is high, meaning that they are more computationally intensive rather than data intensive. For example, particle-based methods like Lattice Boltzmann and smoothed particle hydrodynamics fit that description well, yielding data-parallel computations that are well-suited for GPUs. This explains (at least in part) why the shift toward GPUs within scientific computing has been very gradual and non-uniform. For other physical descriptions that don’t naturally fit that description (e.g. the continuum fluids equations or Maxwell’s equations), we are only just beginning to see computations that fully utilize GPUs.”
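
McCaslin’s “operations-to-data ratio” is what HPC practitioners call arithmetic intensity: floating-point operations performed per byte moved through memory, the quantity at the heart of the roofline performance model. A rough back-of-the-envelope comparison—my illustrative numbers, not measurements of any solver—shows the gap between a classic memory-bound kernel and an LBM-style node update:

    # Arithmetic intensity = FLOPs performed per byte moved through memory.
    # The operation counts below are rough, illustrative estimates only --
    # not measurements of Fluent, Discovery, M-Star or any other solver.

    def arithmetic_intensity(flops_per_node, bytes_per_node):
        return flops_per_node / bytes_per_node

    # AXPY (y = a*x + y): 2 FLOPs per element while moving 3 doubles (24 bytes).
    axpy = arithmetic_intensity(2, 3 * 8)

    # A D2Q9 LBM collide step: on the order of 200 FLOPs per node while
    # reading and writing 9 double-precision populations (2 * 9 * 8 bytes).
    lbm = arithmetic_intensity(200, 2 * 9 * 8)

    print(f"AXPY:        {axpy:.3f} FLOPs/byte")  # ~0.083 -- starved for data
    print(f"LBM collide: {lbm:.3f} FLOPs/byte")   # ~1.389 -- far more work per byte

Just as important as the ratio is the access pattern: an LBM node update touches a fixed, predictable set of values, so thousands of GPU threads can stay coalesced and busy—the data-parallel property McCaslin describes.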

I’m no computer engineer, so I don’t profess to know who is right about the abilities and limitations of GPU technologies. I can’t say whether it’s possible for GPUs to become the backbone of general-purpose, multiphysics simulation computations throughout the industry.

What I do know is that if someone cracks the code and comes into the market with the first general-purpose, highly accurate GPU-based CAE tool—be it one of the big players, a small company or anyone in stealth-mode—it will have a clear advantage. I suspect that someone is working on this problem, and they might be just a ‘eureka moment’ away from figuring it all out. In other words, simulation market leaders are free to dismiss M-Star or Discovery as ‘not all there yet,’ or even ‘impossible to expand further’—but at their own peril.

Don’t believe me? Based on his interactions with clients, I think that deep down Thomas agrees with me. He said, “When we go on site and people start using M-Star, I’ve heard many tell me, ‘Wow, I’m never using RANS CFD again.’”

Though it’s true that in this case Thomas is talking about users of a niche CFD tool, why would the story be any different for a hypothetical, general purpose, GPU-based simulation tool?

In short, even if neither M-Star nor Discovery is the GPU tool that will disrupt the leaders in the simulation space, their existence should at least be a call to arms. As Thomas said, “At the end of the day, I think most businesses don’t get beat; they commit suicide by failing to innovate.”