Design Needs AI and Human Intuition, Part 2: Jumping to Conclusions

In a marathon, a runner taking a shortcut is considered a cheat. In a network, current jumping to another conductor is a short circuit. In court, an argument delivered without evidence is jumping to a conclusion and will very likely lose. But in topology optimization and generative design, jumping to a conclusion would be the most desirable approach.

In mathematics, and by extension computer programming, a conclusion that is obvious to a human can be maddeningly hard to arrive at. It is the curve that keeps trying to reach its asymptote.

Image: Martin Grandjean from Henri Bergson and the paradoxes of Zeno: Achilles Beaten by the Tortoise?, March 19, 2014.

For example, consider Zeno’s paradox, which holds that Achilles will never catch a tortoise because he would first need to halve the distance to it, an infinite number of times. The mathematics of Zeno’s day, and of the next two millennia, could not resolve the paradox. It took a whole new mathematics from Isaac Newton, who gave us calculus, with its limits, infinitesimally small increments and functions evaluated from zero to infinity, to do the trick.
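In modern notation, the resolution fits on one line: the infinitely many halvings form a geometric series that sums to a finite total, reached in finite time.

```latex
\[
\sum_{n=1}^{\infty} \left(\frac{1}{2}\right)^{n}
  = \lim_{N \to \infty} \left(1 - \frac{1}{2^{N}}\right)
  = 1
\]
```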

What’s to stop computer programs from grinding on forever (or to their binary limits), thrashing about in endless loops with diminishing returns, unable to jump to a conclusion? Rather than using calculus, programmers developed inference engines: systems of if-then statements. Inference engines have since given way to neural networks. Still, the ability to infer, which humans can do to a fault, remains a much sought-after superpower for computers.
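For the flavor of it, here is a toy forward-chaining inference engine of the if-then kind. The rules and facts are invented for illustration; real expert systems of that era worked the same way, only with thousands of rules.

```python
# A toy forward-chaining inference engine: rules fire until
# no rule can add a new fact. Rules and facts are illustrative.

rules = [
    ({"load is internal pressure", "geometry is closed"}, "part is a pressure vessel"),
    ({"part is a pressure vessel"}, "prefer a spherical shape"),
]
facts = {"load is internal pressure", "geometry is closed"}

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        # Fire the rule if all its conditions are known and it adds something new.
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # now includes "prefer a spherical shape"
```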

Jumping to conclusions may be considered bad behavior in a social sense, but call the same behavior by another name, inferring, and we can be proud of it. We infer quite well every day, when we read misspelled words and understand them, interpret a rough sketch as an object, solve puzzles… By contrast, most software programs have zero powers of inference, crashing at the first syntax error.

Can we somehow give this power to our design software?

AI Conspicuous in Its Absence

What would the Golden Gate Bridge look like if it were designed with shape optimization? Not this good.

AI has made good progress in natural language input and output. For example, Google understands what we meant rather than what we typed into the search bar, and finishes our queries for us. However, AI applications in design and engineering software have been limited. There is Nvidia RTX, which completes a partially ray-traced rendering by guessing the effect that infinite bounces of light will have. RTX has enabled near-real-time rendering, a job that was once done in batch mode.

But where completing the picture is most needed, in shape optimization, AI is conspicuously absent. If AI can complete a sentence and a rendering, why can’t it suggest a spherical shape for a pressure vessel? Engineers know the perfect pressure vessel is spherical, but a shape optimization program will run out of time and cloud credits trying to make one. Other shapes known to be optimal for certain conditions, such as round tubes for torsion, round cables for tension and flat sheets for shear, are simple yet geometrically perfect solutions. More complicated shapes, such as I-beams, and lattice structures, such as those found in honeycomb panels, are derived and somewhat optimized. Design software vendors keep pushing what they have: generative design capable of producing strange shapes that are of use only in the tiny intersection of lightweighting, 3D printing and specialized industries. Meanwhile, a challenge to improve upon a simple composite shape, that of a bike frame, has gone unanswered for years.
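The sphere is no mystery to engineers; it falls straight out of the standard thin-wall membrane equations. For internal pressure p, radius r and wall thickness t:

```latex
% Standard thin-wall membrane stresses (textbook results, shown for reference)
\[
\sigma_{\text{sphere}} = \frac{p\,r}{2\,t}
\qquad
\sigma_{\text{cylinder, hoop}} = \frac{p\,r}{t} = 2\,\sigma_{\text{sphere}}
\]
```

Half the stress for the same wall means the sphere holds pressure with the least material, which is exactly the conclusion a shape optimizer burns cloud credits trying to reach.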

Miracle Grow

A shape starting to form after a fracture. Image: Bill Rhodes from Asheville—mid-shaft humeral compound comminuted fx lat.

Shape optimization starts with a blank volume, loads and constraints. From nothing, it grows a shape. It should be trivial for a shape optimizer to create a straight member with a round section between a single load and a single fixed restraint. But for reasons that have to do with shape optimization’s roots in natural phenomena, that will not happen.

Shape optimization routines are based on, strangely enough, how your bones heal, rivers flow and roots grow … and nature does not do straight, round or flat within any reasonable tolerance.

Mimicking natural phenomena may have seemed like a good idea to a developer faced with a blank screen and no knowledge of predetermined optimal shapes. Tasked with finding the optimal shape between two points, the original shape optimizers asked themselves, “How would nature do it?” The answer appeared in a vision, a miracle of nature exposed in an X-ray, the ability of our bones to heal themselves.

Osteoblasts go out in random directions from a bone’s fractured surface. Like ships in the night, they have no sense of the shortest path but go out blindly and meet at an angle. They fuse together in a ball that may be twice the diameter of the original bone. The healing is not complete yet. Over time (measured in months or years), the parts of the ball that carry no stress are resorbed by the body, leaving only the fused area at the same diameter as the original bone. A patient will remember the broken bone but will be left with no bump from it.

Without the luxury of months or years, shape optimizers abandon their optimization with bumps only partially smoothed over.
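The bone’s strategy maps neatly onto an optimization loop: start overgrown, find what carries load, resorb the rest and repeat. Here is a minimal sketch of that idea, sizing a hanging bar so that every segment ends up exactly fully stressed; the loads, material numbers and convergence test are illustrative assumptions, not taken from any particular shape optimizer.

```python
# Minimal "grow, then resorb what is unstressed" loop:
# fully-stressed sizing of a bar hanging from a ceiling,
# carrying a tip load plus its own weight. Numbers are illustrative.

n = 20                 # segments, numbered from the free (bottom) end up
L = 2.0 / n            # segment length, m
rho_g = 77_000.0       # weight density of steel, N/m^3
P = 10_000.0           # tip load, N
sigma_allow = 100e6    # allowable stress, Pa

A = [1e-3] * n         # start "overgrown," like the healing callus

for it in range(100):
    # Force in each segment = tip load + weight of everything below it.
    F, force = P, []
    for i in range(n):
        force.append(F)
        F += rho_g * A[i] * L
    # Resorb (or add) material so every segment is exactly fully stressed.
    new_A = [f / sigma_allow for f in force]
    if max(abs(a - b) for a, b in zip(A, new_A)) < 1e-12:
        break
    A = new_A

print(f"converged after {it} iterations")
print(f"area at tip: {A[0]*1e6:.2f} mm^2, at support: {A[-1]*1e6:.2f} mm^2")
```

Note that the loop has no notion of what the final shape should be. It only knows, pass after pass, what to take away, which is exactly the whittling described above.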

Monkeys at Typewriters

Without the time to whittle away at a shape endlessly, one might think that, given enough chances, an optimization program would stumble on a perfect shape in the time allotted. But that is the infinite monkey theorem, and it is hopeless. Assuming a monkey can be trained to hit one of 50 keys at a time, it has a 1 in 16 billion chance of typing “banana.”
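The arithmetic behind that figure: “banana” is six keystrokes, each a 1-in-50 event.

```latex
\[
50^{6} = 15{,}625{,}000{,}000 \approx 1.6 \times 10^{10}
\quad \text{(about 16 billion)}
\]
```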

But as hope springs eternal in the human breast, we wonder if a smart monkey could finish a Shakespeare play. Enter ChatGPT, to a thunderous ovation, in a world that has long awaited an AI savior, one that would fulfill the promise of computers: to be as smart as, or smarter than, their creators.

Like the other chatbots, or large language models, that preceded it, ChatGPT is filled with vast pools of knowledge. Although OpenAI, ChatGPT’s creator, is tight-lipped about the size and number of libraries the chatbot has been trained on, it is widely speculated to have feasted on all the printed words that could be found, including the entire works of Shakespeare, as well as libraries of programming code and the whole of Wikipedia.

ChatGPT learned from all that. It can not only regurgitate its knowledge, but it can also do it eloquently and convincingly. Was it just a few years ago that computers learned to pick out cats from pictures? And now, only one cat-year later, large language models (like ChatGPT) are able to answer college-level essay questions in literature, history and more.

Unlike their counterparts in the liberal arts, engineering professors have little to fear from students using ChatGPT to cheat. An attempt to use ChatGPT for even simple engineering information failed miserably, and no wonder: chatbots are basically for chatting, and their only language is the written word. Students may use one to look up the last winning solar racer, for example, but it will not be able to design the next one.