Dollars and Sense: Your Next Simulation Should Be On the Cloud

Your CAD/CAE software is not doing you any favors by providing a million-element model that your workstation cannot handle. Engineers have to spend time “defeaturing” the finite element model so that this doesn’t happen. Or do they? HPC may provide an alternative to defeaturing. (Image courtesy of ANSYS.)

[ANSYS has sponsored this post]

You might value the perfect simulation model, striving for perfection—an engineering ideal if ever there was one.

“Perfection is the enemy of the good,” grumbles your manager, misquoting Voltaire as he studies his group’s time sheets. Was all the time spent simplifying the model worth it? The manager knows that time is money, but do his engineers? They’re busy removing design details so they don’t tie up the computer. Defeaturing takes them hours, days…. Or they obsess over tiny cells in a fluid model where the flow is not critical, manually adjusting the mesh.

But all that time is spent making the model fit, solve in time, and not blow up their workstations. All the while, the clock is ticking. And time is money, after all.

Was It Worth It?

The manager is continually reminded of the amount of compute power available—especially on the cloud. It’s downright staggering. Expensive high performance computing (HPC) doesn’t have to be bought—there’s no budget hit. You merely rent the computers as needed. Maintaining them is someone else’s headache.

Why not let the cloud computers grind away at a problem—rather than your engineers? Use the CAD geometry as is, rather than simplify it? So what if every detail is meshed? It’s pennies per hour. So cheap you can let your engineers do more simulation. Go ahead and take time into account with transient and turbulent solutions, explore fatigue, go nonlinear, explore a dozen alternate designs.

Is It Comfortable Outside?

Engineers have grown comfortable with their workstations. Going outside, even if it means HPC, may not be as comfortable. It’s going to take some reassurance before engineers will let go of their trusted workstations.

But a fascination for top gear may do the trick. An HPC configuration may be the next best thing to a Cray supercomputer, the Ferrari compared to the sensible and practical Toyota. We can’t afford to buy you a Ferrari, but what if you could drive one whenever you want?

What Is HPC?

Basically, HPC is a network of computers that is optimized for calculations using industrial-strength microprocessors, CPUs and/or GPUs, parallel architecture, high-speed memory, and special network software. More details about what constitutes HPC can be found in our previous article High-Performance Computing 101.

This is what you imagine HPC looks like—and in the big leagues of simulation, it does. High Performance Computing Lab, George Washington University. (Image courtesy of Rackspace.)

HPC can be located inside your company (“on premise,” as the business calls it) or outside it, most commonly now in the cloud. In a recent survey of over 600 engineers and their managers sponsored by ANSYS, about a hundred of them (17 percent) were using the cloud for engineering simulation. An additional 20 percent were planning to do so over the next 12 months.

HPC, of Cores

HPC is good for simulation because of the number of cores that can be put to work on a calculation. A core, or CPU core, is an independent processing unit within a CPU, and CPUs can have multiple cores. The ThinkPad X1 used to create this article has an Intel CPU with 4 cores. The Lenovo ThinkPad P1 mobile workstation CPU has 8 cores. An HPC configuration can have hundreds of cores.

Running multiple multi-core jobs. The cumulative time savings increases linearly with the number of cores used. (Image courtesy of ANSYS.)

Above, we see a scenario where an ANSYS customer is running multiple multi-core jobs. Here, the cumulative time savings increases linearly with the number of cores used as multiple jobs are executed simultaneously using HPC.
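The arithmetic behind that linear scaling is simple: independent jobs don’t contend with each other, so doubling the cores doubles the number of jobs in flight. Here is a minimal sketch of that “capacity computing” math, with illustrative numbers that are assumptions rather than ANSYS benchmarks:

```python
# Sketch of "capacity computing": N independent jobs run concurrently.
# All job sizes and core counts below are illustrative assumptions.
def cumulative_savings(job_hours, n_jobs, cores_per_job, total_cores):
    concurrent = total_cores // cores_per_job   # jobs that fit at once
    serial_time = job_hours * n_jobs            # running them one after another
    batches = -(-n_jobs // concurrent)          # ceiling division: waves of jobs
    parallel_time = job_hours * batches
    return serial_time - parallel_time

# Eight 4-hour jobs, 4 cores each, on a 32-core system: all run at once.
print(cumulative_savings(4, 8, 4, 32))  # 4*8 - 4*1 = 28 hours saved
```

Because the jobs are independent, this mode sidesteps the inter-core communication overhead that limits scaling within a single job, which is why throughput grows linearly with cores.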

“This is what I call ‘HPC capacity computing,’” says Wim Slagter, director, HPC and cloud alliances at ANSYS. “Faster turnaround can be achieved which linearly increases simulation throughput and productivity. For this we have HPC Workgroup licenses through which we are rewarding the volume buyer with scaled pricing. Our HPC Workgroup licenses provide volume access to parallel processing for multiple jobs and/or multiple users.”

“Even with software that has ideal, linear scalability, like our CFD solution, the time to solution asymptotes,” says Slagter. “So, there is clearly a diminishing value of added cores after some point. You may start saving minutes instead of hours, or hours instead of days. Our value-based pricing model clearly accommodates this idea of decreasing incremental value of parallel as the number of cores increases! Our customers pay less as they add more HPC capability.”
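The diminishing returns Slagter describes are usually explained with Amdahl’s law: if a fraction p of a job parallelizes, the speedup on n cores is 1/((1 - p) + p/n), which saturates at 1/(1 - p) no matter how many cores are added. A quick sketch (the 95 percent parallel fraction is an assumption for illustration, not a measured ANSYS figure):

```python
# Amdahl's law: speedup saturates as cores are added.
def amdahl_speedup(p, n):
    """p: parallel fraction of the work, n: number of cores."""
    return 1.0 / ((1.0 - p) + p / n)

# With p = 0.95, the speedup can never exceed 1/(1 - p) = 20,
# so each doubling of cores buys less than the one before it.
for n in (8, 32, 128, 512):
    print(n, round(amdahl_speedup(0.95, n), 1))
```

Running this shows the speedup climbing quickly at first, then crawling toward its ceiling of 20x, which is exactly the “saving minutes instead of hours” regime Slagter mentions.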

ANSYS users who need to take advantage of HPC will receive an estimate of the cost of cloud-deployed HPC up front from inside ANSYS, according to Slagter. “We are quite transparent that way.” Users will also get an estimate of the turnaround time before they submit the job.

Looked at Simulation from Both Sides Now

Simulation being pushed forward in the design cycle makes for more what-ifs. Taken to the extreme, generative design will calculate thousands of design possibilities, each of which satisfies the design criteria and is proven to work by built-in simulation.

As the leading independent simulation software vendor, ANSYS also sees an uptick in simulation in the traditional after-design part of the product cycle.

“We are continually expanding simulation applications across all industries and in more instances,” said Slagter. He gives examples of simulation being used to predict the behavior of more materials (like composite materials), predicting the warping and failure of PC boards, optimizing the airflow around race cars to reduce drag, and understanding connected electronic micro-components.

“For electronics, we can simulate not only electric behavior, current flows, antenna fields,” added Slagter. “But we can also place electronic components and systems digitally in their operating environment. So, simulation has been spreading due to the availability of solutions, such as ours, but also because of HPC, which makes it possible to analyze the performance of larger and more complex systems.”

Another example of analysis on the front end of design is ANSYS Discovery Live.

“With Discovery Live, real-time simulation is provided during conceptual design, letting you quickly evaluate changes in design.”

“Downstream simulation is ever more in demand, as data flies in from the real-time operations of connected machines on the Industrial Internet of Things (IIoT),” said Slagter.

“We call that expansion in the use of our tools pervasive engineering simulation. So, it’s on both ends. We see significant growth, not only for verification or validation, but also moving up front in the development process to be able to quickly evaluate changes in design, as well as downstream in the product life cycle.”

Infinite Element Analysis

If your simulation is not limited by hardware, you are not trying hard enough.

The very essence of finite element analysis (FEA) and computational fluid dynamics (CFD), the two most widely used simulation methods, depends on a large number of small elements, or cells, in order to be accurate. And as every engineer knows, you cannot be too accurate. This leads to models with seemingly infinite, rather than finite, elements.

Very soon the engineer attempting to simulate learns to compromise his or her model. Let’s not model this detail, or this one. Each detail not considered takes a load off the hardware. Each reduces the fidelity of the model. The very act of defeaturing, as the removal of details is called, sucks up valuable engineering time. Engineers aren’t cheap and defeaturing costs—often ignored or considered unavoidable—can add up to thousands of dollars.

While defeaturing may never be completely eliminated, applying less of it across the engineering team—in effect, allowing the mesher to mesh every little thing—would save a tremendous amount of engineering time. That time, in and of itself, could justify much of the cost of a new HPC cluster.

HPC Appliance

Unlike the world of commodity PCs and workstations, where products compete on prices and prices are readily available, the lofty world of high-end computing is far more secretive. It’s not about selling you a product as much as it is about providing a service, with a product (the HPC system itself) that is custom made to your needs.

For the modern engineer or manager, who is used to shopping online, this can easily lead to the belief that HPC is for big companies that can absorb a cost-is-no-object system—no doubt a multimillion-dollar room full of servers. Who wants to hear, “If you have to ask, you can’t afford it” from a haughty salesperson?

But we found one company that bundled its HPC offerings into $100,000, $250,000 and $500,000 solutions. While that may give you sticker shock if you are accustomed to buying workstations at a hundredth of that cost, consider $500,000 to be the rough cost of four engineers in the U.S. doing defeaturing for one year.

An entry-level HPC configuration with 560 cores for $150,000 from Advanced Clustering Technologies (ACT). (Image courtesy of ACT.)

For $150,000, an entry-level rack-mounted HPC configuration from Advanced Clustering Technologies of Kansas City, Mo., provides:

  • 14 server “blades,” each with 2 Intel Xeon Gold 6230 processors, each with 20 cores, for a total of 560 cores
  • Memory, storage, and additional hardware
  • HPC network software

The $150,000 system is a 1,000 lb. behemoth that draws 10 kW of power, requires three 220 V, 30 A circuits, and generates 36,000 BTU of heat per hour.

For more details, contact ACT sales at https://www.advancedclustering.com/.

An HPC network can vary in size from a room full of computers shown earlier to the form factor of a single server, as shown in this HPE cluster “appliance” configuration. (Image courtesy of ANSYS.)

A cluster HPC configuration, or an HPC appliance, may be just the solution for a midsize engineering firm that does a lot of simulation.

“We have partnered with hardware and service providers, including Dell EMC and HPE to release what we call ‘plug and simulate’ clusters,” explained Slagter. “These are preconfigured with ANSYS simulation and job management software. Think of them as ‘cluster appliances’ because they are basically turnkey appliances that reduce the time and cost of putting an HPC system together—finding and buying the hardware, configuring, testing, etc. A cluster appliance is very easy to acquire.”

“And easy to maintain,” he added. “It can be maintained by internal IT, if a customer has one, or it can be externally managed by one of our partners or system integrators that are part of our HPC system.

“HPC is not easy to get, or easy to use,” admitted Slagter. “With these partnerships, we are trying to take away the barriers to HPC adoption.”

Rolls-Royce

Rolls-Royce is best known for its legendary automobiles, but you are more likely to be transported by the company’s jet engines than by its internal combustion engines. The company is the third-biggest manufacturer of turbofans (behind GE and Pratt & Whitney). The aviation industry, among the first to use FEA, continues to make big demands on simulation software.

Rolls-Royce turbofan showing the interstage cavity, where the heat produced by the fluid flow is used as a boundary condition for the structural members. (Image copyright Rolls-Royce, The Jet Engine.)

Rolls-Royce, no stranger to HPC, found itself maxed out on internal HPC resources when performing multiple iterations of fluid flow coupled with structural simulation (multiphysics) on its latest turbofan design. The company had to find HPC resources outside its walls, settling on HPC service provider CPU 24/7 GmbH. The simulation utilized 32 cores for the CFD while needing only one core for the structural simulation. The move led to a fivefold reduction in calculation time, according to ANSYS.

Rent or Buy, Getting on the HPC Train

While big firms in the aerospace and automotive industries can justify buying an HPC configuration, building facilities for it, and maintaining it all with an IT staff, small and medium-sized companies will do well to rent, rather than buy, HPC. Renting may also make sense for big companies when they run out of computing resources, as Rolls-Royce did.

Amazon Web Services (AWS), Google Cloud Platform, Microsoft Azure, Nimbix, Rescale and Penguin Computing On Demand are some of the leading vendors for using HPC on the cloud on an as-needed basis.

Amazon, the HPC leader according to Cloud Lightning, will create a virtual cluster of HPC hardware for you to run your next job. AWS lists CFD as one of its main HPC applications. You can spend $0.75 an hour trying your application in the AWS sandbox.

It may be the uncertainty about the final cost of analysis that keeps users from taking the plunge into HPC. It would be nice to know how much an analysis will cost. First-time users may be tentative, questioning whether HPC providers will run up the bill.

F1 race car simulation on HPC (Picture courtesy of ANSYS.)

HPC service providers do provide online calculators, but the final cost in dollars proved elusive. Therefore, we have no choice but to make a wild guess. Assuming that wall time is equal to CPU time, running the 140 million-cell CFD simulation above on a 128-core setup rented from Microsoft Azure for 1,850 seconds would cost under $4. The price on AWS would be similar.
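The back-of-envelope math behind such an estimate is easy to reproduce. The per-core-hour rate below is an assumption chosen for illustration, not a quoted Azure or AWS price:

```python
# Back-of-envelope cloud cost estimate for a rented cluster.
# The $0.06 per core-hour rate is an illustrative assumption,
# not a published cloud-provider price.
def job_cost(wall_seconds, rate_per_core_hour, cores):
    hours = wall_seconds / 3600
    return hours * rate_per_core_hour * cores

cost = job_cost(1850, 0.06, 128)  # the 1,850-second, 128-core scenario above
print(f"${cost:.2f}")  # prints $3.95
```

At rates anywhere in this neighborhood, a 30-minute, 128-core run stays in the single-digit dollars, which is the point of the exercise.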

In-App Cloud Simulation

ANSYS offers two ways to use HPC on the cloud if you don’t want to buy your own hardware. From within several core ANSYS applications, you pick ANSYS’s own HPC service, the ANSYS Cloud, on which to run the solution. Or, you can pick one of over 10 ANSYS cloud hosting partners.

Switching to a cloud solver from within the application seems to be the easier approach.

When the workstation is not enough. Dialog box for doing the calculations on the cloud. (Image courtesy of ANSYS.)

Let’s say somebody is using ANSYS Electronics Desktop and they realize the problem is too large or will take too long.

“Once they reach the limits of their desktop computer, they can basically select a menu and start selecting from a few standard configurations on the Azure cloud,” explained Slagter.

The ANSYS user can select small, medium, large and extra-large cloud-based server configurations that correspond to 8, 16, 32 and 128 cores, respectively.

“Users don’t have to go to Azure directly, or set up an agreement with Azure,” said Slagter. “It’s all handled by ANSYS. They pay through a single license key for software and also for cloud hardware cycles.

“It is extremely easy on ANSYS Cloud because it is our own solution,” continued Slagter. “We have developed that user interface, whether it is with ANSYS Mechanical, ANSYS Fluent or ANSYS Electronics Desktop. If users reach the capacity limits with their on-premise computers with a computationally demanding job, they can easily switch to the cloud and run on it. That is really easy because we developed a seamless interface to the Microsoft Azure cloud.

“They can post-process it on the cloud, too, so there’s no need to immediately download the results. They can do some lightweight reviewing of the simulation results in the cloud and if they want to do in-depth post-processing steps, they can transfer the results back to their desktop machine for further investigation.

“If customers want to use HPC on a partner-managed cloud solutions provider, there may be a few more steps required,” explained Slagter.

Partners Provide

ANSYS has qualified several partners that will provide a solution on the cloud for ANSYS applications. “They have proven their solutions, they have developed their own simulation environment for ANSYS, which is compliant with our platform requirements, and, of course, they also make sure that it’s as easy as possible,” said Slagter. “Of course, it can’t be as easy as it is with the ANSYS Cloud, but for customers who need to have HPC on premise—a significant portion of our userbase—we have an option for them.”

The TotalCAE portal will let you select a model file and submit it to their HPC hardware. (Image courtesy of TotalCAE.)

While access to the HPC partners is not from within the ANSYS software, some partners do tailor their applications to make things easy for ANSYS users. For example, TotalCAE has a CAE portal that lets you select your model file and, at the push of a button, submits the job to TotalCAE’s HPC hardware. TotalCAE checks your license file and manages your CAE licenses. No coding is required. The company makes no mention of pricing, however.

Another CAE-oriented HPC partner is Cray. The legendary name in supercomputers offers the Cray CS (for cluster supercomputers) series for “extreme” HPC. Cray seems more intent on selling HPC hardware than renting it, but makes no mention of cost.

HPCBOX by Drizti HPC menu. (Image courtesy of Drizti.)

One very ANSYS-centered HPC partner is Toronto-based Drizti with its HPCBOX. Like TotalCAE, Drizti manages the software licenses, and users are not required to write even one line of code. HPCBOX, while not push-button easy, uses a “Connector” interface that appears to provide more control. Drizti also makes no mention of pricing.

Rescale, an 8-year-old San Francisco startup, claims to have the largest HPC infrastructure network in the world, with over 8 million servers. That is remarkable growth, and it may be a testament to the need for a total service—one that lets you rent engineering solutions as well as the HPC hardware they run on. Rescale does let you use your own software license, if you have one. You can drag in an input model or select one. A vast menu of simulation software includes ANSYS, Siemens, Dassault Systèmes, MSC, OpenFOAM, and many more.

ANSYS Fluent T-junction mixing example run on Rescale, a leading HPC cloud service provider for engineering simulation, showing hot-water and cold-water inlets, with mixing and calculation of the temperature at the outlet. (Image courtesy of Rescale.)

We wondered what engineering software was not supported. Rescale interactively lets the user pick the processors, number of cores, etc., and displays the price of the simulation, making us wonder why it was so hard for everyone else to do the same. A quarter hour with 144 cores would cost less than $5.
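Taking those quoted numbers at face value, the implied per-core-hour rate is easy to back out, and it lands in the same ballpark as the Azure estimate earlier:

```python
# Implied per-core-hour rate from the quarter-hour, 144-core figure above.
# Treats the quoted $5 price as given; this is arithmetic, not a Rescale quote.
price, hours, cores = 5.00, 0.25, 144
rate = price / (hours * cores)   # dollars per core-hour
print(f"${rate:.3f} per core-hour")  # prints $0.139 per core-hour
```

At roughly 14 cents per core-hour, an hour of heavy simulation still costs far less than an hour of the engineering time it replaces.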

Try It, You’ll Like It

So eager is ANSYS to get you to try HPC that it will run your ANSYS CFX, Fluent, HFSS, Maxwell3D or Mechanical simulation on an HPC configuration so you can compare its times to what you got on your laptop or desktop workstation. Check out the details here. And if engineers want to try HPC on the cloud themselves, they can go to www.ansys.com/cloud-trial.