How to Fix Simulation by Going Lean

Mark Zebrowski, a former automotive simulation analyst, suggests that simulation can allow zero hardware prototypes if used strategically. (Image courtesy of Mark Zebrowski.)

What’s Wrong with Simulation, What Happens if It’s NOT Fixed, and How to Fix It

This is an attention-grabbing title, especially for a presentation at a conference full of simulation analysts. The conference in question was the Conference on Advanced Analysis and Simulation in Engineering (CAASE) 2018, held earlier this year in Cleveland, Ohio. The presenter was Mark Zebrowski, a retired Ford analyst with some strong opinions on the state of simulation.

In a conference full of presentations with names like Submodeling of Thick-Walled Structures with Plasticity and CFD-Based Optimization of Micro Vortex Diodes, it’s no surprise that Zebrowski’s presentation drew a substantial crowd. The title is almost a real-world version of clickbait (audience bait?), and it was enough to convince me and about 100 other attendees to see what Zebrowski had to say.

As it turned out, he had a lot to say. Much of it went against the grain of other CAASE presentations, specifically on the topic of simulation democratization. It was a refreshingly unique viewpoint in a somewhat homogenous setting. It also was clearly appreciated: Zebrowski won the attendee-voted Best Workshop award at the end of the conference.

The crux of Zebrowski’s viewpoint is that simulation should go lean. Lean refers to a specialized approach used successfully in industries like manufacturing (lean manufacturing) and currently being explored in industries like health care (lean health care). What lean means in the context of simulation, and what it has to offer, was the driving force of Zebrowski’s CAASE presentation.

Tactics and Strategies

Zebrowski claims that simulation, as a business, is not in a good state. He has a slew of war stories from his days at Ford, each identifying poor simulation practice as the cause of millions of dollars in costs, months of delays and hundreds of wasted engineer hours. It’s not because the analysts at Ford were bad at their jobs or using outdated tools. According to Zebrowski, the system itself, the way in which simulation was used, was short-sighted and ineffective. He believes this problem affects the entire simulation industry.

One statistic Zebrowski is fond of bringing up is that 87 percent of people think in terms of tactics, whereas only 13 percent think in terms of strategy. Basically, almost nine out of 10 people take the short-term view. Only a small minority looks ahead at the long term. For simulation to thrive as an effective component of an engineering business, Zebrowski argues that strategic thinking is a necessity.

But what should the strategy for simulation be? For a business case to be made for the use of simulation in any engineering endeavor, simulation must answer the question being asked:

  • at the required level of detail
  • in a reasonable time
  • at a cost less than alternative methods
  • while allowing design decisions to be made based on simulation predictions alone

If simulation can’t meet these criteria, then why bother with it at all? We can use these criteria as a guide for our simulation strategy. The goal is to develop a system of simulation that can justify its business use case.

In Zebrowski’s view, a strategy already exists that can help meet this goal. A strategy called lean.

Lean Simulation

The idea of lean production has roots ranging from Benjamin Franklin to Henry Ford to Kiichiro Toyoda, founder of the Toyota Motor Corporation. However, the term lean was only coined relatively recently by engineer John Krafcik in 1988. In general, lean refers to the goal of reducing waste in a system without sacrificing production.

The idea of lean simulation is to strategically eliminate waste from simulation. According to Zebrowski, there’s a lot of waste that can be cut. He gives the example of a particular vehicle that was in development at Ford. The simulation team had worked up some noise, vibration and harshness (NVH) models and built a prototype. However, when they took it for a test drive, it shook and vibrated terribly. Not to be deterred, the engineers ran some new simulations and built a new prototype. Upon driving the new version, they found the vibration and shaking were almost as bad as the original. In the end, the team abandoned the simulations altogether and finished the vehicle the old-fashioned way.

This is an example of simulation failing not once but twice, resulting in a massive waste of time and capital, not to mention a lack of confidence in simulation at large. Had the original computer models been correct, the waste would have been avoided. But what’s the strategy to ensure models are correct the first time around?

The Rework Cycle

Everybody makes mistakes. Even experts make mistakes, as Zebrowski’s example proves. Once these mistakes are discovered, they must be corrected. According to him, the average time between making and finding an error is nine months. Often, the inevitability of mistakes, and the extra work it takes to fix them, is not given due consideration.

Enter the rework cycle. This is a simple map that relates “work to be done” to “work that is done” by taking into account the rework needed to correct errors. Nobody makes a mistake on purpose. Until mistakes are discovered, they’re considered “undiscovered rework.” Once they’re discovered, the “known rework” is added back to the pool of “work to be done,” along with any work dependent on those mistakes. In this way, a cycle is created: the rework cycle.

The rework cycle. (Image courtesy of Mark Zebrowski.)

We can quantify the rework cycle to provide a guide for when the work is really finished. We’ll need one number first: the average amount of quality work we can complete in any given cycle. This “quality rate” can be found through testing and experiment. Zebrowski found his team’s rate was similar to that of the defense industry, 40 percent. That means that 60 percent of work done on any cycle needs to be redone in the next cycle.

Even with such a low quality rate, it doesn’t take too many cycles before we can properly finish all tasks. Let’s break this down with a simple example. You have some project with some number of tasks to be completed and a 40 percent quality rate. After one cycle, you’ve successfully completed 40 percent of tasks. The 60 percent that need rework are brought to the next cycle. After the second cycle, 40 percent of those are successfully completed, bringing your total amount of finished tasks to 64 percent, and so on.
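The arithmetic above can be sketched in a few lines of code. This is a minimal illustration of the rework-cycle math described in the article, not anything from Zebrowski’s presentation itself; the function name and structure are my own.

```python
# A sketch of the rework cycle arithmetic: each cycle, a fixed
# "quality rate" fraction of the remaining tasks is finished correctly;
# the rest carries over as rework into the next cycle.

def rework_cycle(quality_rate: float, cycles: int) -> list[float]:
    """Return the cumulative fraction of tasks completed with high
    quality after each cycle."""
    remaining = 1.0   # fraction of tasks still to be done
    completed = 0.0   # fraction finished with high quality so far
    totals = []
    for _ in range(cycles):
        done_this_cycle = remaining * quality_rate
        completed += done_this_cycle
        remaining -= done_this_cycle
        totals.append(completed)
    return totals

# At a 40 percent quality rate, ten cycles reach just over 99 percent.
for cycle, total in enumerate(rework_cycle(0.40, 10), start=1):
    print(f"Cycle {cycle}: {total:.2%} complete")
```

Running this reproduces the 40 percent table below: 40.00% after cycle 1, 64.00% after cycle 2, and 99.40% after cycle 10.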

Quality rate: 40%

| Cycle | Tasks to be done | Tasks successfully done | Tasks requiring rework | Total tasks completed with high quality |
|-------|------------------|-------------------------|------------------------|-----------------------------------------|
| 1     | 100.00%          | 40.00%                  | 60.00%                 | 40.00%                                  |
| 2     | 60.00%           | 24.00%                  | 36.00%                 | 64.00%                                  |
| 3     | 36.00%           | 14.40%                  | 21.60%                 | 78.40%                                  |
| 4     | 21.60%           | 8.64%                   | 12.96%                 | 87.04%                                  |
| 5     | 12.96%           | 5.18%                   | 7.78%                  | 92.22%                                  |
| 6     | 7.78%            | 3.11%                   | 4.67%                  | 95.33%                                  |
| 7     | 4.67%            | 1.87%                   | 2.80%                  | 97.20%                                  |
| 8     | 2.80%            | 1.12%                   | 1.68%                  | 98.32%                                  |
| 9     | 1.68%            | 0.67%                   | 1.01%                  | 98.99%                                  |
| 10    | 1.01%            | 0.40%                   | 0.60%                  | 99.40%                                  |


From the above table, we see that only 10 cycles are required to bring us to over 99 percent of tasks finished with a high degree of quality. By planning for rework, it’s possible to ensure high quality in the final product.

However, no one in a production simulation environment plans for 10 iterations of their work. Thus, one of the goals of lean simulation is to improve your quality rate. If you can bring the quality rate from 40 percent to 85 percent, you can save a huge amount of time:

Quality rate: 85%

| Cycle | Tasks to be done | Tasks successfully done | Tasks requiring rework | Total tasks completed with high quality |
|-------|------------------|-------------------------|------------------------|-----------------------------------------|
| 1     | 100.00%          | 85.00%                  | 15.00%                 | 85.00%                                  |
| 2     | 15.00%           | 12.75%                  | 2.25%                  | 97.75%                                  |
| 3     | 2.25%            | 1.91%                   | 0.34%                  | 99.66%                                  |

Now, we only have to complete three cycles before arriving at over 99 percent quality work. Even the first iteration, with 85 percent of tasks completed with high quality, can be used to guide design decisions—though perhaps not sign off on them. Thus, increasing the quality rate by any means possible is a critical component of lean simulation.
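The savings can also be expressed in closed form. After n cycles, the unfinished fraction of work is (1 − rate)^n, so the number of cycles needed to hit a quality target falls out of a logarithm. The helper below is my own illustration of that relationship, not part of Zebrowski’s material.

```python
import math

# How many rework cycles does it take to finish a target fraction of
# tasks with high quality, at a given quality rate? After n cycles the
# unfinished fraction is (1 - rate)**n, so we solve for the first n
# where that fraction drops below (1 - target).

def cycles_to_target(quality_rate: float, target: float = 0.99) -> int:
    return math.ceil(math.log(1 - target) / math.log(1 - quality_rate))

for rate in (0.40, 0.60, 0.85):
    print(f"Quality rate {rate:.0%}: {cycles_to_target(rate)} cycles to 99%")
```

This confirms the two tables: 10 cycles at a 40 percent quality rate, but only 3 at 85 percent.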

Democratization as an Outcome

Zebrowski found lean simulation to be an extremely effective strategy. His team was nearing the point of no longer needing physical prototypes. Unfortunately, the 2007 recession dealt Ford a blow that resulted in Zebrowski’s strategy being shut down.

That’s one of the reasons Zebrowski is so keen to spread his ideas. He has firsthand experience of their success. These days, he’s one of the only people advocating this type of simulation strategy. Many others are focused less on simulation strategy and more on the specific simulation tools. Zebrowski warns against getting caught up in this mentality, comparing simulation software to LeBron James’ shoes. James isn’t an expert at basketball because of his shoes; his shoes simply allow him to exploit his full expertise. In Zebrowski’s view, focusing on tools at the expense of strategy will do nothing to help your bottom line.

In a similar vein, Zebrowski criticizes the current focus on simulation democratization. He believes that if you do simulation properly—i.e., strategically—democratization is a natural consequence. It should not, in his opinion, be a goal in and of itself. If even the experts reliably get simulation wrong, what sense is there in putting simulation in the hands of non-experts? Only once a simulation model is mature will it naturally lend itself to broader use.

Zebrowski has a lot more to say on the topic of lean simulation. If you’re interested in learning more about the concept, you can send him an email or leave a comment below. Zebrowski also has a website, currently under construction, with more information: leansimulation.net.