Why We Need Reality Capture

Town square modeled with reality capture. (Picture courtesy of Leica Geosystems.)

Reality capture is a term for making three-dimensional (3D) digital models of objects that exist in the physical world by capturing their shape—an alternative to modeling it.

Reality capture starts with images (photographs and video from cameras) and digital input, such as point clouds from laser scanners (LiDAR).

Who Uses Reality Capture?

Reality capture has applications in several industries and workflows, such as reverse engineering, medical imaging, metrology and AEC (architecture, engineering [structural, civil and building] and construction).

The applicability and benefit of reality capture increase with the scale and complexity of the 3D model to be created. Compare the ease of modeling the town square using reality capture, as shown above, with the effort of creating a 3D model of it from scratch using CAD and BIM.

A reality capture system, either with camera-equipped drones or ground-based laser scanners, merely “shoots” the scene, collecting every architectural detail in view with billions of pixels and points. It doesn’t matter if the building facades have intricate features or if they are plain; it takes the same amount of time and generates the same number of pixels and points. This is why reality capture providers will say the complexity is free. By contrast, the detail in a CAD and building information modeling (BIM) model requires considerable effort, a longer time frame and great skill.

While “shooting a scene” may be simple in concept, in practice reality capture can involve expensive equipment and, if not CAD skills, then skill in managing large data files and interpreting visually ambiguous point clouds and meshes.

Why Capture Reality?

Decades of creating buildings and structures with 3D CAD and BIM have resulted in a significant body of work but have still only managed to make digital islands in the sea of the human-built world. It was only a matter of time before CAD and BIM vendors sought to fight back the sea and fill in, digitalizing more of the built environment. And why stop there? Why not show the built objects in the context of their settings, the landscape, trees and more?

Enter Google, Exit Google

A Google Earth car with omnidirectional camera. (Picture from YouTube video.)

The idea of adding the built world to the natural world—all of it digitally—was a Google moonshot that landed. Google managed to digitalize the whole Earth by stitching together satellite and aerial photographs. Then the company had vehicles with 360-degree cameras cruise 10 million miles of streets to capture a large part of the built environment.

This monumental task, unrewarded by financial gain and arguably the biggest mapping and data-gathering project in history, yielded insufficient detail for building purposes. Accuracy in Google Earth was plus or minus 15 meters in some places. How can you build with that?

Google hoped to crowdsource the solution by having everyone submit high-resolution models to Google Earth. For this purpose, it purchased SketchUp, then (2006) and arguably still the easiest-to-use 3D AEC CAD program.

But modeling buildings and structures is not for everyone, and Google lost interest in populating its Earth with detailed buildings and 3D cityscapes. It sold SketchUp in 2012 to Trimble, a company with an immediate need for modeling land with precision.

Worlds Apart: CAD Models and the Real World

The evolution of digitalizing the real world and digitalizing objects made by humans took decidedly different paths. The shape of man-made objects became a function of the tools used to design and create them. Designs were made with straightedges and compasses on drafting boards by men (as they almost always were) who prided themselves on their perfect lines and lettering. CAD, with its precision, delivered perfection—its lines perfectly horizontal and vertical, its circles exact—and so extended the aesthetic of geometrical perfection, as long as the geometry was composed of a few primitives: lines, arcs and rectangles in 2D; boxes, cylinders and other more complex shapes in 3D. Solid modeling could stretch to extrude and rotate an arbitrary section but would break if anything irregular was attempted.

Our methods of production did their best to create the straight and circular. Milling machines, lathes and laser cutters cut perfectly produced bars, pipes and sheets into similarly square parts and buildings. Building material, rather than bend to the designer’s will, came to bend the designer as homes were laid in multiples of 4 ft, the short side of a 4 ft x 8 ft sheet of plywood. Uniformity and repeatability were the gold standard of the industrial age. A screw is guaranteed to fit a nut and every door its frame.

From within our perfectly plumb homes, we view nature—or whatever is left of it—and all her imperfect creations: trees with trunks that are not cylinders, leaves of all different sizes, a crooked stream.

Nature, it seems, is quite the bumbler. No two of anything are the same—not leaves, not snowflakes, not the clouds, not the eyes from which we see it all.

And now we have a problem. With nature using no CAD and letting the shapes of things take their own course, it has become impossible for us to document its creations with the primitive objects we have used to design our products and buildings.

What You Can’t Model You Can Scan

Reality capture in marble. The Pietà by Michelangelo Buonarroti. Photo by Stanislav Trayko.

I stared at the great Renaissance sculptor’s carving of David in Florence (Italy) long enough to make my wife fidgety. Same for the Pietà in St. Peter’s Basilica in Vatican City.

Here the human form draped in cloth, the hands so lifelike, is carved out of Carrara marble. How?

Talent, of course. It took the young Michelangelo (only in his twenties) less than two years to extract the Pietà from a single block of stone.

Capturing David. Hexagon Italia used the StereoScan Neo scanner and a Leica Absolute Tracker AT960 laser tracker (not shown) to make a digital model of Michelangelo’s David for display in Expo 2020 Dubai. (Picture courtesy of Hexagon.)

Why couldn’t something like David be created with CAD? A CAD user certainly has more tools to work with than a chisel.

But the prismatic shapes and Boolean operations, the foundation of today’s state-of-the-art solid modeling applications, do not lend themselves to describing any of David’s anatomy or the folds of the Virgin Mary’s robe. Nor has anyone volunteered two years of their time to attempt it. Scanning David to create a 3D model took just 10 days.

A fine mesh. Triangulated mesh model of Michelangelo’s David created for Ridley Scott’s sci-fi horror movie Alien: Covenant. (Picture from Plowman Craven 3D scan from post in the Victoria and Albert Museum blog.)
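A triangulated mesh like this one boils down to two arrays: a list of vertex positions and a list of triangles, each triangle given as three indices into the vertex list. As a minimal sketch (a toy two-triangle mesh, not the David scan data), the surface area of such a mesh can be summed triangle by triangle with a cross product:

```python
import math

def triangle_area(a, b, c):
    # Area = half the magnitude of the cross product of two edge vectors.
    u = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    v = (c[0] - a[0], c[1] - a[1], c[2] - a[2])
    cx = u[1] * v[2] - u[2] * v[1]
    cy = u[2] * v[0] - u[0] * v[2]
    cz = u[0] * v[1] - u[1] * v[0]
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

def mesh_area(vertices, faces):
    """Total surface area of a triangulated mesh.
    vertices: list of (x, y, z); faces: list of (i, j, k) vertex indices."""
    return sum(triangle_area(vertices[i], vertices[j], vertices[k])
               for i, j, k in faces)

# A unit square in the XY plane, split into two triangles:
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
faces = [(0, 1, 2), (0, 2, 3)]
print(mesh_area(verts, faces))  # 1.0
```

Production meshes from scans use the same vertex-and-face structure, just with millions of triangles, which is what makes them heavy to store and render.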

To be continued...