Can we have some AI in design software? Part 1: Trimble

Trimble is serious about getting AI into its software. To prove it, we are introduced to Karoliina Torttila, the company’s Director of AI. It is a new position at Trimble, but Torttila has been with the company for seven years; she was previously its AI Portfolio Manager. She is a native of Finland and an alumna of Cambridge University, with a master’s degree in construction engineering.

An offer to discuss AI with someone who could implement it in a way that makes it useful for the design community was not to be turned down.

Torttila may be the busiest manager on Earth. The list of demands for adding AI into design software comes from her management, product managers … and pesky media who want to hear about little else.

We’re glad to hear that SketchUp features prominently in Torttila’s AI roadmap. At the conference, Trimble was showing Diffusion, which generates a stylized image based on a concept. Trimble had, prior to the conference, introduced the ability to find 3D models in its 3D Warehouse with AI-powered image search.

What follows is our conversation with Torttila.

What has Trimble done with AI on design?

On the design side, we see a large difference between conceptual design and constructible design. On the conceptual side, which is our SketchUp business, we have put out quite a bit of AI for the creative exploration of design concepts—SketchUp Diffusion, for example.

SketchUp Diffusion, which lets you generate a stylized image somewhat automatically, is Trimble's first big implementation of generative AI. Image: Trimble.

Why is it called Diffusion?

The generative AI explosion was really kicked off by Stable Diffusion a year ago. That was for the text-to-image work. It let you create beautiful images from text prompts a little bit before ChatGPT.

Okay, that’s just art images. When are we going to get AI to help us with useful modeling? You know that “Hey, ChatGPT, finish my building” ad? That is a stretch, of course. But what about “Hey, SketchUp, finish my roof?”

We are at a point where we can create conceptual SketchUp models with natural language. I can say, “Hey, large language model”—whether it’s GPT or something else doesn’t really matter—“create a script that generates a simple house in SketchUp.” I can edit that concept with natural language like “Make the roof flat,” “Make the house a little bit wider,” or “Add a couple of windows.” It’s nascent, but that technology is there, and a lot of our APIs allow for large language models to be plugged in.
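To make that workflow concrete, here is a minimal sketch of the pattern Torttila describes, not Trimble’s implementation. It assumes an OpenAI-compatible Python client; the model name and prompts are placeholders, and the returned Ruby would be pasted into SketchUp’s Ruby console by hand.

```python
# Hypothetical sketch: ask an LLM for a SketchUp Ruby snippet, then refine it
# with natural-language edits. Assumes the OpenAI Python client and an API key
# in the environment; this is not Trimble's actual workflow.
from openai import OpenAI

client = OpenAI()

SYSTEM = ("You write Ruby for the SketchUp API. "
          "Return only code that can be pasted into the Ruby console.")

def generate_script(request: str, history: list[dict] | None = None) -> str:
    messages = [{"role": "system", "content": SYSTEM}] + (history or [])
    messages.append({"role": "user", "content": request})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return reply.choices[0].message.content

# First pass creates a simple house; follow-ups edit the concept in plain language.
script = generate_script("Create a script that generates a simple house.")
script = generate_script("Make the roof flat and add a couple of windows.",
                         history=[{"role": "assistant", "content": script}])
print(script)  # paste the result into SketchUp's Ruby console
```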

Why do I need to use CAD commands? CAD is so hard to learn. It needs to be more natural. I’m trying to do a house remodel using SketchUp, but I get bogged down right away with the commands.

That’s an interesting area that is still nascent, but it is emerging. But let me tell you about some of the AI work that we put out in the world.

Tell me more about image search in 3D Warehouse.

Drag an object from a model or a photograph into 3D Warehouse and it will find a similar part. Thanks, AI. Image: Trimble.

"Let’s say you’re doing interior changes and want to experiment with a specific chair or a loveseat in the house. It can be hard to find the exact furniture via just text search. Image search in 3D Warehouse allows you to go from images to the most similar 3D models. Of course, there's potential for a similar workflow not just in SketchUp and 3D Warehouse, but also in our MEP and Tekla Warehouses. While we often start with SketchUp; visually connecting pictures and 3D components is relevant across the Trimble portfolio.

I can imagine that to be a directive from your CEO: let’s get AI in everything, everywhere. But let me be selfish about design for a minute. Here is my dream. I want to make a house, for example. I have an idea of the space, the layout, what it should look like from the outside. I may want to start with a photo of another house: hold up my iPhone and say to SketchUp, “Make a model of a house like this.” I want SketchUp to create that house from the photo and fill in all the details it can, like the framing, the rafters, beams, and joists. It should know that 2 x 4s will be used in the framing and how far apart they should be, because it knows where I am, what the conventions are for residential construction, and what sizes are used for joists. Is that too much to ask?

There's a difference between technology readiness and commercialization readiness, and the former is the only part that I’m speaking about. We’re not that far off, honestly.

So, by the next Trimble Dimensions? You know that Trimble is going annual, right?

We need to speed up our development. Technology is starting to get there, but it’s a combination of many technologies. As you know, we’re already able to take a picture and go from a picture to a simple 3D model….

Let me ask you about creating models with standard parts or shapes, like 2 x 4s in SketchUp, or I-beams in Tekla. That’s what the software has been lacking. People are jumping ahead to “Make me the whole house or building,” but I don’t want to do that. I just want it to fill in the details I don’t want to bother with, like the rivets or screws in an assembly. I don’t need to search the whole 3D Warehouse for them. They might not be there. I sat in a session yesterday on finding shapes in 3D Warehouse, and someone in the audience, after seeing all the furniture available, asked, “How’s it with pipes?” He wanted pipe fittings, and they were not available.

I think there is a broader content play we could make at Trimble. Yeah, we still have a little bit of siloism in terms of the content in the MEP Warehouse.

It seems that recognition of basic shapes would be a much easier problem to solve. Everybody has a standard library of shapes. Engineers have one. Construction guys have one. In there are elbows, pipes, screws, AISC shapes…. Shouldn’t be too hard to use them.

There’s infrastructure that needs to be built out for you to bring your own model library to match ours. From the AI perspective, we are there. From the system perspective, where you can add your library … it’s still a little ways away. But we are committed to that direction. Think about some of the generative work in SketchUp Diffusion, where we kind of paint on top of things. You saw in the keynote that we can add AI-generated materials to your surfaces. So, you can create your geometry and then apply marble to this surface or teak wood from Indonesia to that one. Generative AI allows for an endless variety of material renderings.

So important, right? A lot of engineers will say it’s just pretty pictures, though.

"It is largely. It is part of the spectrum of the work in the SketchUp space. On the constructible side, with Tekla and Quadri, the focus is more on automating repetitive actions. Things like rebar detailing. This is less generative and more predictive. We're able to suggest specific rebar layouts based on what you've done previously, for example."

That’s great. Adding internal detail is the type of repetitive job no engineer or architect wants to do. We don’t want to spend hours, days, or more putting rivets in holes or rebar in concrete the same way in one project after another.

It’s on the constructible side that we see a lot of the pain being taken away. Then, maybe more fundamentally, parametric modeling. That’s nothing new, but adding AI in the mix of parametric modeling can bring about optimizations, whether it is for reducing your carbon or whether it’s increasing the amount of offsite-manufactured components or whatnot. Parametric modeling provides the foundation and AI is able to vary the parameters to hit on an optimum configuration. There’s quite a bit of difference in where AI is applied in conceptual design versus constructible. Both are exciting, but they’re very different worlds.
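The parametric-plus-AI idea she mentions is essentially an optimization loop: the parametric model exposes a handful of parameters, a cost function scores each candidate (carbon, cost, share of offsite-manufactured components), and a search routine varies the parameters. Here is a minimal sketch, with an entirely invented carbon function standing in for the real model and analysis.

```python
# Minimal sketch of letting an optimizer vary the parameters of a parametric
# model. The carbon function is invented for illustration; a real workflow
# would query the actual model and analysis results instead.
from scipy.optimize import minimize

def embodied_carbon(params):
    span, depth, precast_fraction = params
    concrete_volume = span * depth * 0.3             # crude stand-in for quantity takeoff
    site_penalty = (1.0 - precast_fraction) * 50.0   # offsite manufacture assumed to cut waste
    return concrete_volume * 300.0 + site_penalty    # "kgCO2e", entirely illustrative

result = minimize(
    embodied_carbon,
    x0=[10.0, 0.8, 0.2],                             # initial span (m), depth (m), precast share
    bounds=[(8.0, 12.0), (0.5, 1.2), (0.0, 1.0)],
)
print("best parameters:", result.x, "estimated carbon:", result.fun)
```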

So, Trimble’s implementations of AI have mostly been in SketchUp?

We’re seeing AI more on the SketchUp side, just because on the conceptual side we can move a little bit faster; with a concept, you have a greater tolerance for variation. You can tweak a conceptual design [and] one way or another, you will tolerate more than one concept. This happens with SketchUp. On the structural engineering side, the Tekla users….

They’re not that tolerant. They’ll dismiss something if it doesn’t give the right answer.

Right. That doesn’t mean we won’t see AI being injected into other workflows as well. But the speed that AI goes down that pipeline is slower. And some would say it’s a good thing because, you know, when we’re building our bridges, we want to be certain of the structural integrity.

What’s the Trimble view on large language models for design? Are there any? I’m thinking of customer data stored on the cloud, 3D Warehouse, etc.

We view large language models as a user interface to any consumable data. There are some domains we are looking at where we will provide offerings to our customers. We talked about conversational modeling. If you have that, you don’t have to learn all of the modeling software specifics. In the same way, what if you are facing submittal extraction from contracts? You have a 4,000-page document to go through to create all the submittals.
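As a rough picture of “a large language model as a user interface to a 4,000-page contract,” the sketch below chunks the document text and asks a model to list the submittal requirements it finds. It is not a Trimble feature; the chunk size, model name, and prompt are all assumptions.

```python
# Hypothetical sketch of LLM-driven submittal extraction: scan contract text in
# chunks and collect the requirements the model reports. Assumes the OpenAI
# Python client; chunking by characters is a simplification.
from openai import OpenAI

client = OpenAI()

def extract_submittals(contract_text: str, chunk_chars: int = 8000) -> list[str]:
    findings = []
    for start in range(0, len(contract_text), chunk_chars):
        chunk = contract_text[start:start + chunk_chars]
        reply = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "List every submittal requirement in the text, one per line. "
                            "If there are none, reply with NONE."},
                {"role": "user", "content": chunk},
            ],
        )
        text = reply.choices[0].message.content.strip()
        if text != "NONE":
            findings.extend(line for line in text.splitlines() if line.strip())
    return findings
```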

I suppose the reverse would also be possible? Create a report from a lot of data, such as you get from an analysis. After I check my stresses and the structure is sound, I’ve lost interest, but I still have to write the damn report. That could be automatically generated?

That’s the exciting part about generative AI. We can look for the best opportunities to plug AI into, because we’re not optimizing for a single task anymore; we can optimize broader tasks, processes, and skill sets. A little bit of it here, a little bit of it there. That’s why I think of large language models as a user interface to almost any of your data.

See part 2 of this interview here.