Making the Digital Twin Work in Your Application

This episode of Designing the Future is brought to you by Altair.

Engineering is more than applied science; it’s the application of creativity to solve real-world problems. Ideation is the cornerstone of design engineering, but a major difference between good engineering and great engineering is the ability to transfer ideas into renderings and workflows that generate products and processes that are true to the original concept. That concept itself is usually iterated, both in the mind of the designer and in the development process itself. For most of the history of engineering, the interface between ideation and rendering has been an informal process, buried in the mind of the designer. 

Today, that’s changing with a new generation of engineering tools that simultaneously impose rigor on the design process while freeing the designer to explore novel solutions, many of which would be impossible to execute with simple computer-aided design. It’s the digital twin that makes this possible. Jim Anderton discusses the implications of the digital twin in real-world applications with Keshav Sundaresh, Global Director of Product Management - Digital Twin and Digital Thread at Altair. 

Learn more about how digital twins help companies optimize product performance.


The transcript below has been edited for clarity:

Jim Anderton: Hello, everyone, and welcome to Designing the Future. Engineering is more than applied science. It's the application of creativity to solve real-world problems. Ideation is the cornerstone of design engineering, but a major difference between good engineering and great engineering is the ability to transfer ideas into renderings and workflows that generate products and processes that are true to the original concept. Now, that concept itself is usually iterated both in the mind of the designer and in the development process itself. 

For most of the history of engineering, the interface between ideation and rendering has been an informal process buried in the mind of the designers. Today, that's changing with the new generation of engineering tools that simultaneously impose rigor on the design process while freeing the designer to explore novel solutions, many of which would be impossible to execute with simple computer-aided design. It's the digital twin which makes this possible. 

Discussing the implications of digital twin in real world applications is Keshav Sundaresh, Global Director of Product Management, Digital Twin, and Digital Thread at Altair. Keshav brings more than 17 years of customer success and engineering experience to his role for digital twin and model-based systems engineering at Altair, and is responsible for technical thought leadership, strategy, and driving the development of integrated software solution offerings that enable an open, traceable, collaborative, and holistic digital twin and digital thread on Altair One. 

Keshav has worked with customers across multiple industries globally on smart systems, mechatronics, robotics, and multi-body dynamics applications. Keshav, welcome to the show. 

Keshav Sundaresh: Thank you for having me, Jim. It's a pleasure. 

Jim Anderton: Keshav, that's quite a list, quite a resume: multiple applications, multiple industries. One of the interesting things about talking about the digital twin in engineering is that it feels like that legendary, mythical, universal solution. When computer-aided design was developed, it was really an aerospace application. It was driven by the aerospace industry, developed originally by one of the major aerospace companies; it was accepted there, and then it moved laterally to automotive and other consumer goods and other areas. But when we're talking about the digital twin, we don't usually append it to a specific birthing industry, do we? 


Keshav Sundaresh: No. No, we don't. In fact, based on my experience working with a lot of different customers across different industries, I've come to observe that digital twins mean many things to many people. They take several different forms, and I think there are really three different contexts with which you can create a framework around digital twins. 

I think the first context is what I call scope or scale. Depending on who you work with or what products you develop, digital twins can be made of a physical process, and by physical process I really mean a part, a subsystem, the interconnection of subsystems into a product, or how the product would interact with an environment in terms of a process. But digital twins aren't necessarily restricted to mocking up something digitally for a physical process in terms of capturing the elements and the dynamics. 

They also span into biological processes, in terms of modeling human anatomy or physiology, or having a library of virtual patients and a virtual test bench to optimize for health. But then digital twins expand further still: you can have a digital twin of a customer from a business process standpoint. To give you an example, you and I use credit cards every day, and there is actually a digital twin of us residing in our own banks, where our activities and transactions are being monitored, and based on our transaction history, anomalies get detected, fraudulent activities get detected, and so on and so forth. So the first context with which we've seen customers use, apply, and benefit from digital twins is the pure scope or scale of it. The second context, in our experience, has been the purpose and the system life cycle. 

So we have seen a lot of our manufacturing customers, for instance, start from a product definition digital twin, which is more like a specified version of what the final product should actually be doing. You would start with, let's say, a voice-of-customer document that summarizes the key functional and non-functional requirements a product should possess, and you would have an abstract, first-principles understanding of how the total system should function. Now, as the product matures, you can move on from an as-specified version of the twin to an as-designed version of the twin, which is where you try to model the various elements and dynamics of your mechatronic system. 

So it's not just about simulating individual domains, but understanding how these different domains interact with each other and perform as a whole. Right? But then once you mitigate all your technical risks in your as-specified and your as-designed phases of the digital twin, you can then have an as-built configuration. 

So you might have a physical prototype and you might have some test data, in an emulated form for instance, that tracks a certain set of KPIs or behaviors. You would want to come back into your virtual systems model and tune the model to mock up the real-world behavior. Then, once you refine your prototype, you would want to mass-produce it. So there is an as-manufactured variant of the twin, which is where you would have applications around augmented reality or virtual reality, where you would want to come up with training simulators to train operators and to help optimize for maintenance, for instance. 

Last but not least, you have the as-sustained version of the twin. As you release these products and customers start using them, there are a lot of physical sensors with data being captured. So now you could do a round trip with all the physical sensor data from the customer usage and behavior, and essentially have a digital representation, perhaps a machine learning or AI model, that can predict future states, for instance. 

So really, in our experience, the second context for a digital twin is the system life cycle itself. There are what I call purpose-driven models that customers build depending on where they are in this map. But then there is the third context, which is arguably more important. One of the key differences, in our experience working with customers, between a digital twin and a virtual prototype is that digital twins add value to our customers' own customers. 

There is perhaps an abstract term for this, the as-a-service component, but really the key purpose of developing digital twins is to optimize for health, if you're looking at a biomedical or healthcare system; or to optimize for service, if you're looking at a product development type of system; or to optimize for production, if you're looking at a manufacturing facility and trying to maximize throughput or quality or minimize downtime; or to optimize for engineering, because at the end of the day it's important to have that feedback loop of how your customers use your product, so that you can increase overall product quality and performance. 

Jim Anderton: Keshav, it's interesting, you brought up several things which resonate pretty heavily with me. I come from manufacturing originally, and there's always an issue when developing a new product with things like tolerance stack. We have individual components which go into a sub-assembly. The sub-assembly goes into an assembly, the assembly goes into a finished product, and it is possible, if the tolerances fall the wrong way, that all four components or sub-assemblies are perhaps at the high end of their tolerances, or all at the low end, and in the end you have a non-functional product, or a product that doesn't fit well. 

Then the design engineering process, of course, regresses to establish which part or component or sub-assembly we have to pull into tighter control to make the entire system work. And historically, that is a very difficult thing to do with a complex product. So the risk-minimizing strategy, of course, was to pull multiple things into tighter control to make sure that we hit the target. That, of course, adds additional cost. In that example in particular, is a digital twin a way to optimize the process before we actually hit the green button and start production? I mean, could we minimize risk at that level? 
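To make the stack-up problem Jim describes concrete, here is a minimal Monte Carlo sketch of the kind of pre-production check a simulation-driven workflow automates. All dimensions, tolerances, and fit limits below are invented for illustration, and treating each tolerance as a ±3σ band of a normal distribution is a common but simplifying assumption.

```python
import random

# Hypothetical one-dimensional tolerance stack: four parts assembled end to end.
# Nominal lengths in mm with symmetric tolerances; every value is invented.
parts = [
    {"nominal": 25.0, "tol": 0.10},
    {"nominal": 40.0, "tol": 0.15},
    {"nominal": 12.5, "tol": 0.05},
    {"nominal": 30.0, "tol": 0.10},
]
FIT_LIMITS = (107.35, 107.65)  # assembly must land in this window to fit

def out_of_spec_rate(n=100_000):
    """Monte Carlo stack-up: sample each part, sum the stack, count misfits."""
    failures = 0
    for _ in range(n):
        # Simplifying assumption: normally distributed process, tol = +/-3 sigma
        total = sum(random.gauss(p["nominal"], p["tol"] / 3) for p in parts)
        if not FIT_LIMITS[0] <= total <= FIT_LIMITS[1]:
            failures += 1
    return failures / n

print(f"Predicted out-of-spec rate: {out_of_spec_rate():.2%}")
```

Tightening any single part's tolerance and re-running the simulation shows immediately whether that part is the one worth pulling into tighter control, before production starts.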

Keshav Sundaresh: Absolutely. I mean, that's one of the key benefits of applying digital twins, even before, I would say, the concept level, right? Having a holistic understanding of how the complete product functions or behaves is crucial in terms of minimizing the number of design errors that you would otherwise discover at the tail end of the process, right? I read this book a few years back where the author said, "You don't have to be right all the time. You just have to be less and less and less and less wrong." Right? 

So it's all about minimizing the number of risks that you might foresee, or that you might see at the tail end of your product development process. Or even worse, once your customers start seeing these product failures, you usually have much more risk to mitigate. 

But I can expand on the scope a little bit more, Jim, and talk about organizational behavior, if you will, for a second. Because as much as digital twins and the practice of digital twins is about having a process of integrating different types of data streams, from a physical asset all the way down to different types of virtual assets, for me it's also about cross-functional collaboration and using a common model as the primary means to collaborate, as opposed to using informal documents. 

To give you an example, we worked with a customer, a wind turbine manufacturer, whose spend on research and development kept going up year over year. But at the same time, they discovered that the warranty costs for their products were also going up. 

Jim Anderton: Not uncommon in many industries. 

Keshav Sundaresh: Right. So they were puzzled. They were like, "Well, on the one hand we are investing more in research, but on the other hand, we also see a lot of warranty complaints." So they decided to collaborate with us to understand what was going on. What we discovered in partnering with them were really two major challenges. One is what I call horizontal silos, and the other is what I call vertical silos. 

What we noticed with this team, or rather these teams, is that going from the product definition, which is usually done in a requirements management environment or an enterprise data store, to different concept design models, and then further down into verification and validation, manufacturing, and in-service, through all these different functional areas, if you will, the primary mode of collaboration and communication between these different user groups was through informal documents. 

We even have a tagline for it: we call it the Microsoft Office Engineering Suite. I mean, people use the tools that they know, but at the end of the day they toss out a report and say, "Hey, look, this is what you have to really look into." 

That can lead to multiple sources of truth, and it won't really give you the traceability to have a clear status of where the program is headed, a clear stock of the assumptions we have made, and of the errors we have yet to capture through digitization or virtual prototyping, if you will. So that's what I call horizontal silos: instead of an inconsistent or informal way of collaborating, there is a need for a common model, or a model-based systems engineering type of practice. 

But then on the flip side, just in terms of the models themselves, a mechanical engineer really has his head down, focused on "I want to create the best mechanical system possible." The electronics engineer, same story. The thermal engineer, same story. So we realized that a lot of these groups had a very strong understanding of modeling, analyzing, visualizing, and optimizing their respective domains. 

But when it came to understanding how these different domains interconnect, and what the total system dynamics, the living and breathing evolution of the models, actually was, they didn't really have such a framework. So breaking these vertical silos is an activity for digital twins, in our experience, and breaking the horizontal silos is the practice of model-based systems engineering and, by extension, digital thread. 

Jim Anderton: Interesting you put it that way. As you were describing the horizontal and vertical silos, what immediately came to mind were matrices. I imagine systems of differential equations and that desire to compress them down as small as possible so you can resolve the damn thing. Now, from an engineer's perspective, you brought up a couple of interesting points. 

One is that the constraints at the design end are frequently time and money. In a perfect world, we would like to iterate our way a thousand times to achieve perfection, but the reality is that we may be able to iterate four or five times within the six weeks allotted or the $2 million of budget allotted, and so that injects some conservatism at the same time. 

Now you're talking about a possible world where it may not be necessary to design for perfection before you start production, because you can pull information back from the end user in real time and integrate that feedback into the redesign process. So the redesign process, which historically was complete by the time you started making something, now becomes a redesign process that may extend over perhaps the entire life expectancy of the product or service. 

Keshav Sundaresh: Absolutely. I mean, there is this whole world of connecting the engineering world with the world of data analytics, right? The angle with model-based systems engineering, and by extension digital thread, is basically the synergy between engineering and IT, if you will. We want engineers to do more of the IT/project management type of thing without really feeling like they're doing IT work. 

But in terms of developing more accurate and more reliable digital twin models of whichever system you're trying to develop, it is important to have an open architecture where, regardless of which cloud vendor you choose, regardless of which IoT environment you use, regardless of what type of sensors you want to track, you have an open enough system to stream that information as inputs to a virtual representation, a virtual model that in a way has the same core but is contextualized to the environment or to the customer's usage. 

So when you close the loop between the world of, let's say, physical sensor data and the world of real-time machine learning or AI models, or real-time physics-based digital twin models, you are in a way bound to have what is known as an intelligent digital twin, because you're no longer relying on the previous assumptions that you made for your product, so to speak. You're actually using real data, if you will, from the customer or from the physical sensors, streamed during usage, to monitor the performance, the health, and the status of the equipment or machine or asset. 
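As a concrete illustration of that closed loop, here is a minimal sketch in which streamed sensor readings continuously correct a simple physics-based twin. The first-order thermal model, the blending gain, and every name are assumptions made for the example, not any particular vendor's API.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    timestamp: float       # seconds
    temperature_c: float   # measured temperature

class ThermalTwin:
    """Toy first-order thermal model: dT/dt = -k * (T - T_ambient)."""

    def __init__(self, k=0.05, ambient_c=22.0, initial_c=22.0):
        self.k = k
        self.ambient_c = ambient_c
        self.state_c = initial_c
        self.last_t = None

    def predict(self, timestamp):
        """Advance the model state to the reading's timestamp (Euler step)."""
        if self.last_t is not None:
            dt = timestamp - self.last_t
            self.state_c += -self.k * (self.state_c - self.ambient_c) * dt
        self.last_t = timestamp
        return self.state_c

    def ingest(self, reading):
        """Blend the measurement into the model state (a crude correction)."""
        predicted = self.predict(reading.timestamp)
        residual = reading.temperature_c - predicted
        self.state_c += 0.2 * residual  # 0.2 is an arbitrary blending gain
        return residual  # a growing residual hints the asset is drifting off-model

twin = ThermalTwin()
for i, temp in enumerate([22.0, 24.1, 26.3, 35.0]):
    r = twin.ingest(SensorReading(timestamp=float(i), temperature_c=temp))
    print(f"t={i}s  residual={r:+.2f} C")
```

The point of the sketch is the loop structure: the twin predicts, the sensor corrects, and the residual between the two is exactly the signal an intelligent digital twin watches.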

Jim Anderton: Yeah. That's an interesting approach. We know that with the Internet of Things and that enormous collapse in the cost of sensors, we can embed them in large numbers in products large and small. But historically, sensor feedback was really a matter of analog devices that sensed levels, things that were converted into an analog voltage signal or run through a simple analog-to-digital converter, and there was a bitstream that was fed into some central processor somewhere, where it could be manipulated. 

Now, we're looking at a world in which these sensors are not only microscopic in size, but relatively intelligent, so they actually do some of the signal processing at the sensor level. Yet those sensors might be sold by one of dozens of different possible vendors and used in dozens of different applications within the same product. And I hear frequently from engineering firms that they have difficulty in collating and assembling that data, or even sifting out which is actually relevant information and which is not. How do digital twin and digital thread play into that? Can they really help sift out the rubbish from the gold? 

Keshav Sundaresh: It can. So I think there are a couple of different ways to look at this, at least from my point of view. 

Number one, if you have, let's call it, an idealistic definition or an idealistic reference point in terms of how your product should behave, then you'd have a corresponding physics-based representation, if you will, of the model or the asset. You can use that as the baseline for anomaly detection, to figure out whether the product is really wandering off toward failure, whether the product is about to fail, and so on and so forth. 

You can then start using failure data to train a bunch of neural networks, and embed these neural networks either on the edge, where the computations can happen in real time, or in the fog, where they can happen inside a real-time visualization and dashboarding environment like an IoT platform. Or the training process can happen offline, where you have all the data collected from these sensors and you use that information to train a machine learning model, but then embed that logic into a living and breathing digital twin system. 

So that's really the first angle: you start with a holistic understanding of how your product should ideally behave, you capture that behavior in a digital representation, or I would say a dynamic digital representation, and you just keep appending to it over time and checking whether or not the data that you're receiving is off track or anomalous. 

But then the second track is, if you really don't have any past history of the product, if you are really starting from just the physical asset itself and you want to do some data-driven discovery, there are also methods, solutions, and practices out there for you to use low-code/no-code platforms to quickly do your data prep, to quickly figure out what signal processing or statistical analysis checks you need to make, and also to send different types of alerts back to the user, in terms of over-the-air updates or updates through your phone or a specific device, and so on and so forth. 
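A minimal sketch of that second, data-driven track might look like the following: no prior physics model, just statistics learned from the live signal itself. The window size and z-score threshold are illustrative assumptions, not recommended values.

```python
import statistics
from collections import deque

class RollingAnomalyDetector:
    """Flag readings that sit far outside the recent signal statistics."""

    def __init__(self, window=200, z_threshold=4.0):
        self.history = deque(maxlen=window)
        self.z_threshold = z_threshold

    def check(self, value):
        """Return True if the value looks anomalous against recent history."""
        anomalous = False
        if len(self.history) >= 30:  # wait for enough samples for stable stats
            mean = statistics.fmean(self.history)
            stdev = statistics.stdev(self.history) or 1e-9
            anomalous = abs(value - mean) / stdev > self.z_threshold
        if not anomalous:
            self.history.append(value)  # only learn from normal-looking data
        return anomalous  # True would trigger, e.g., an over-the-air alert
```

In a deployed system the same check could run on the edge device or in the fog layer Keshav describes; the logic is identical, only the host changes.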

Jim Anderton: Keshav, we've talked about the process, but there are other aspects of engineering as well. One is project management, the management of the engineering process. I read a stat once that said that 10 years into most professional engineers' careers, only about a third of them are actually still doing engineering. The rest of them are managing engineering processes. 

And there's an incredible irony, it drips with irony to me, that the most experienced and best engineers are not actually doing engineering in many circumstances. They're actually just attempting to herd the cats and get a team to move forward in the correct direction. 

Are we talking about technologies that semi-automate that management process? Or is it still going to require one individual who stands over a monitor and says, "No, stop. That's good enough. Move on to this aspect"? 

Keshav Sundaresh: That's a great question. The best way to answer that, Jim, is to go back to some of the classes I took from Stanford on behavioral psychology. There is this model, or a framework if you will, that Dr. BJ Fogg spoke about, where he said, "Motivation alone is not enough to get things done," because at any given point in time, your motivation can go up and down depending on your mood, depending on how you feel. 

So, while motivation is one axis of behavior, the other very important consideration is ability. See, it comes down to how hard something is to do: when motivation is high, you might have the energy, the time, and the effort to do challenging things. But when you don't have high motivation and the thing to do is extremely hard, it's just a matter of time before you give up or go back to your old habits. 

So for me, habit formation is actually at the core of project management and, by extension, the practice of model-based systems engineering and/or digital twin. Okay? And I've come to realize that the easier, the more frictionless, you make the process of generating reports or importing data or automatically creating architectural models, the more people will at least try it and see the value in it, as opposed to always being skeptical about it or running away from even experimenting. 

But to be more specific, Jim: there are, for instance, requirements management tools that track tens of thousands of product requirements, and there are systems engineers and project managers within enterprises who are, in a way, bookkeepers, tracking all these requirements and their evolution in terms of performance, cost, mass, what have you. 

But then there are sub-teams that are only responsible for a handful of these requirements. No one person would be responsible for all 10,000-plus requirements of an automotive system or an electronic system, et cetera. 

So what we've seen our customers want to do is to quickly extract a subset of the requirements into a format of their choice, Microsoft Word, Excel, XML, or ReqIF, which is another open standard, quickly bring in that set of what I call document-centric requirements, and quickly render them into a structural model which captures the overall static structure: what the different product subsystems are, and how the parts connect to the subsystems. 

It can also very quickly render the leaf-level requirements needed by a specific sub-team, and it can render different types of behavior models: creating a use case diagram, developing an activity diagram, creating a schematic diagram, and so on and so forth. 

But as you move down the ladder, you are making it as easy as possible, because you have the ability to inherit various types of documents, various bills of material, a logical decomposition if you will, into actual models. You can start increasing their fidelity and making them more mature over time. 
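To illustrate that extraction step, here is a minimal sketch that pulls a sub-team's slice of document-centric requirements into a simple structural model. The CSV columns, IDs, and requirement text are invented for the example; a real exchange would more likely flow through ReqIF or an Excel export.

```python
import csv
import io
from collections import defaultdict

# Invented stand-in for a document-centric requirements export.
REQUIREMENTS_CSV = """id,subsystem,text
REQ-001,battery,Pack shall deliver 400 V nominal
REQ-002,battery,Pack mass shall not exceed 450 kg
REQ-003,thermal,Coolant inlet shall stay below 45 C
REQ-004,chassis,Torsional stiffness shall exceed 24 kNm/deg
"""

def build_structure(csv_text):
    """Group leaf-level requirements under their owning subsystem."""
    structure = defaultdict(list)
    for row in csv.DictReader(io.StringIO(csv_text)):
        structure[row["subsystem"]].append((row["id"], row["text"]))
    return structure

# A sub-team pulls only its slice of the enterprise requirement set:
for req_id, text in build_structure(REQUIREMENTS_CSV)["battery"]:
    print(req_id, "-", text)
```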

Jim Anderton: It's funny you mention it. My first engineering job, like many of my generation's, was configuration control: a very, very boring, frustrating task of physically making sure that everyone was using the correct version of a rendering, and, if we were on iteration F, making sure that every blueprint actually was version F. 

And that required a separate infrastructure of documentation to track the distribution of physical blueprints, because there were legacy parts from automotive manufacturers that were still done on paper, even though we had CAD. That literally meant someone had to make sure we physically removed prior iterations from the hands of individuals in multiple locations, so that we were all on the same page, that single source of truth. 

And to do that required a process which itself ended up as an engineering project that generated its own part numbers. So the document to control configuration itself had a part number. It became a separate engineering product in its own right. You can see how the process begins to spin out of control until soon you're no longer designing pumps; you're designing processes to control processes that design pumps. Are we talking about a way to get away from that sort of bureaucracy-heavy, stultifying effect? 

Keshav Sundaresh: Absolutely. I think with the ability to leverage new and modern ways of applying convergence, if you will, across simulation, high-performance computing, and AI, but also in terms of just being able to extract metadata wherever that metadata resides, be it on your desktop, on a server, and so on and so forth, there are newer, simpler, and more straightforward processes now available for different groups and enterprises to move away from a document-centric collaboration process to a common, model-centric systems engineering process. Documentation should always be a side effect of the process; it should never be the first and only thing that engineers do. With some of the solutions we've seen our customers use, for instance, documentation is just created automatically as you start using a model as the common form of collaboration and communication. 
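In that spirit, here is a minimal sketch of documentation falling out of the model rather than being written separately. The model shape and names reuse the hypothetical requirements structure from the earlier example.

```python
def render_report(model_name, subsystems):
    """Emit a Markdown status report straight from the systems model."""
    lines = [f"# {model_name} design report", ""]
    for name, requirements in sorted(subsystems.items()):
        lines.append(f"## {name} ({len(requirements)} requirements)")
        lines.extend(f"- {rid}: {text}" for rid, text in requirements)
        lines.append("")
    return "\n".join(lines)

# Reusing the hypothetical structure from the requirements example above:
subsystems = {"battery": [("REQ-001", "Pack shall deliver 400 V nominal"),
                          ("REQ-002", "Pack mass shall not exceed 450 kg")]}
print(render_report("EV powertrain", subsystems))
```

Because the report is generated on demand, it can never drift out of date the way a hand-maintained document can; the model remains the single source of truth.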



Jim Anderton: Keshav, we've talked at the 60,000-foot level about a subject that is so fascinating, so diverse, that I think we could have drilled down into any one of these dozen or so topics and talked for hours about just that one. But I've got to ask, as a concluding question, perhaps a fundamental question: is this technology going to change the way engineers ideate, think, design, and develop new products and services? There are those in the popular press who claim that this is another form of automation, that we're going to push a button and AI and generative design will engineer the future, and there will be no such thing as an engineer anymore. Do you believe that? 

Keshav Sundaresh: Well, I wish I could predict the future, Jim. I mean, I'm just a simple engineer who loves to solve problems. For me, it's about solving user-level problems, understanding the delta between where our customers are today and where they want to go, and taking small risks, smaller bets, in life. But I do see that the more we continue to integrate technology with psychology, the more people will at least start experimenting with, if not standardizing on, these practices. And I'm also really certain, Jim, that with the amount of customer usage data we have, it's going to fundamentally change, if it isn't already changing, how people look at developing new products. You have so much information just lying around that can be used to capture higher-order elements of understanding and create knowledge-based models, rather than relying only on expert knowledge, to make better decisions or take new risks in developing new products. 

Jim Anderton: Incredible future. Keshav Sundaresh, thanks for joining me on the program. 

Keshav Sundaresh: Thank you so much for having me, Jim. 

Jim Anderton: And thank you for watching this episode of Designing the Future. See you next time. 
