AR Is Better Suited for Engineering Use Than VR, Claims Report

LiDAR point cloud used for autonomous vehicle navigation.

3D digital images projected into thin air are no longer a futuristic gadget seen only in spy movies. Augmented reality (AR) is now commonplace across industries, from engineering and health care to marketing and everything in between. AR is an interactive experience that overlays the real environment with digital content such as text, video, audio and other media. Using glasses, smartphones, projectors or other devices, AR users can interact with virtual “objects” in real time, as if those objects existed in the physical world. AR differs from virtual reality (VR) in that the user experiences a blend of the real and virtual worlds and retains control of their own presence; in VR, the system generates the entire experience and places the user in a completely digital world. AR can thus be described as a crossover between the analog nature of the real world and digitized computer images and objects.

AR was first used commercially in 1998 to superimpose the yellow “first down” marker on the field during televised football games. It was also used in televised swimming and track and field events to display a virtual line tracking the current world-record pace, giving viewers a sense of how the competing athletes measured up against the record holders.

Since then, AR use has grown significantly across sectors, and these technologies are now readily available to consumers around the globe. AR is especially useful in engineering, where it helps guide visualization and design, manufacturing and assembly processes, factory planning and quality control. It can assist with troubleshooting, maintaining and repairing damaged equipment or machinery, and it can even be used for virtual training and simulations.

How Does AR Work?

But how does this seemingly complex and futuristic technology work? The answer is quite simple. AR systems use a technique called 3D mapping, which merges the physical or “real” world with the virtual one. The system builds a digital representation of the surroundings and anchors virtual content to real-world coordinates, so that overlays stay fixed in place as the user moves through the environment.
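To make the anchoring idea concrete, here is a minimal sketch in Python, assuming a simple 4x4 homogeneous-transform convention. The frame names and numbers are illustrative, not taken from any particular AR toolkit:

```python
import numpy as np

def make_pose(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    pose = np.eye(4)
    pose[:3, :3] = rotation
    pose[:3, 3] = translation
    return pose

# Pose of a detected real-world anchor (e.g., a tabletop) in world coordinates.
world_from_anchor = make_pose(np.eye(3), np.array([2.0, 0.0, 0.75]))

# Virtual object placed 10 cm above the anchor, expressed in anchor coordinates.
anchor_from_object = make_pose(np.eye(3), np.array([0.0, 0.0, 0.10]))

# Composing the transforms gives a fixed world pose for the virtual object,
# so the overlay stays "pinned" to the tabletop as the camera moves around.
world_from_object = world_from_anchor @ anchor_from_object
print(world_from_object[:3, 3])  # -> [2.   0.   0.85]
```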

A common component of many AR systems is a light detection and ranging (LiDAR) sensor. LiDAR uses electromagnetic radiation (EMR), typically at wavelengths spanning the ultraviolet, visible and near-infrared range, to build a 3D representation of the surrounding environment. Pulses are emitted from a sensor, dispersed into the environment, reflected off objects and returned to the sensor. This process is repeated millions of times, with the round-trip travel time of each pulse measured against a precise clock. Because light travels at a known, constant speed, each travel time converts directly into a distance.
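The arithmetic behind each measurement fits in a few lines. The sketch below assumes an idealized single-pulse, time-of-flight measurement; the numbers are purely illustrative:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance(round_trip_seconds: float) -> float:
    """Distance to a surface from a LiDAR pulse's round-trip time of flight.

    The pulse travels to the object and back, so the one-way
    distance is half the total path length.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse that returns after ~66.7 nanoseconds hit a surface ~10 m away.
print(tof_distance(66.7e-9))  # ~10.0 meters
```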

The same measurement scheme can even use Doppler shifting techniques to determine whether an object is in motion and in which direction it is traveling, based on whether the reflected laser light is red- or blue-shifted. Millions of samples later, a 3D map of the environment can be generated as a point cloud: a digitization of the analog scene in which each individual beam contributes a discrete point on the surface of an object. The more points per cubic meter in the cloud, the more faithful the digitization of the object. The whole process takes only a few seconds, allowing for real-time rendering of the digital overlay, and this real-time component is a key feature of AR systems.
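For the Doppler case, a simplified sketch of the round-trip relation used by coherent LiDAR systems follows; the laser wavelength and shift values are assumptions chosen for illustration:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def radial_velocity(emitted_hz: float, shift_hz: float) -> float:
    """Radial speed of a target from the Doppler shift of a reflected beam.

    For a round trip (out and back), the observed shift is
    delta_f = 2 * v * f0 / c, so v = c * delta_f / (2 * f0).
    A positive shift (blue shift) means the target is approaching;
    a negative shift (red shift) means it is receding.
    """
    return SPEED_OF_LIGHT * shift_hz / (2.0 * emitted_hz)

# A 1550 nm laser has a base frequency f0 = c / wavelength (~193 THz).
f0 = SPEED_OF_LIGHT / 1550e-9

# A +2.58 MHz shift corresponds to a target approaching at ~2 m/s.
print(radial_velocity(f0, 2.58e6))  # ~2.0 m/s
```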

Uses of AR

A real-world example of LiDAR at work, and one that demands fast computation, is the autonomous vehicle. LiDAR sensors let the onboard computer render a highly accurate map of the immediate environment, covering both stationary and moving objects, as the car travels through it, enabling safe and seamless navigation.

Similarly, by combining artificial intelligence (AI) with AR, the platform Proximie allows surgeons to remotely “scrub in” on surgeries being performed in real time anywhere in the world. Surgeons can even use AR holograms to visualize patient anatomy in 3D without performing invasive procedures. The implications of such technologies are far-reaching across all sorts of fields.

Enter AREA

The Augmented Reality for Enterprise Alliance (AREA) explains that “3D models or point clouds can lower the cost, time and developer training to view an object or environment with AR information such as instructions, warnings, or routes overlaid on the physical world. Despite its relatively young presence in the enterprise sector, AR technology has rapidly evolved into a powerful tool with broad versatility and a thriving community of experts.”

As previously mentioned, the 3D map generated using LiDAR is represented as a point cloud: an assemblage of tiny points plotted in 3D space, each corresponding to a single discrete spot on the real-world object the laser reflected off. Much as more pixels sharpen a 2D image, a denser point cloud yields a more detailed digital representation of the object.

Point cloud registration is the process of aligning the raw scan data within a common X, Y, Z coordinate system. The point cloud is then prepared using processing software to build a 3D map of the object or environment in question. Linear algebra techniques, namely Euler rotations and quaternions, are used to compute the motion of AR objects for real-time manipulation and on-screen display. The mathematics involves translating coordinates from a body-embedded reference frame attached to each geometry into the global coordinate system used for display.
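As a rough illustration of the quaternion step, the following sketch rotates a body-frame point into world coordinates. The rotation identity is standard; the frames, poses and values are hypothetical:

```python
import numpy as np

def quat_rotate(q: np.ndarray, p: np.ndarray) -> np.ndarray:
    """Rotate point p by unit quaternion q = [w, x, y, z] via p' = q p q*."""
    w = q[0]
    u = q[1:]  # vector part of the quaternion
    # Standard expansion of q p q* for a unit quaternion.
    return 2.0 * np.dot(u, p) * u + (w * w - np.dot(u, u)) * p + 2.0 * w * np.cross(u, p)

def body_to_world(q: np.ndarray, translation: np.ndarray, p_body: np.ndarray) -> np.ndarray:
    """Map a point from a body-embedded frame into global (world) coordinates."""
    return quat_rotate(q, p_body) + translation

# 90-degree rotation about the z-axis: q = [cos(45 deg), 0, 0, sin(45 deg)].
half = np.deg2rad(90.0) / 2.0
q = np.array([np.cos(half), 0.0, 0.0, np.sin(half)])

# A corner of a virtual object, 1 m along the body x-axis, with the
# object positioned 3 m in front of the user along the world y-axis.
print(body_to_world(q, np.array([0.0, 3.0, 0.0]), np.array([1.0, 0.0, 0.0])))
# -> approximately [0., 4., 0.]
```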

A consumer-facing example of this pipeline is the IKEA Place smartphone application, which lets users place AR furniture in an empty room of their house or apartment using the smartphone camera. Users can translate and rotate the furniture to see how it fits and looks in the room before purchasing it. The application knows the furniture's dimensions in 3D space and maps out the room's dimensions using the camera, so users need not worry about dimensioning surprises after buying the furniture.

IKEA Place smartphone application using AR for virtual furniture placement simulation.
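IKEA has not published the internals of Place, but the core dimensioning test reduces to comparing bounding boxes. A toy version, with made-up furniture and room measurements:

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned dimensions in meters: width x depth x height."""
    width: float
    depth: float
    height: float

def fits(item: Box, space: Box, clearance: float = 0.0) -> bool:
    """True if the item (plus optional clearance on each side) fits the space."""
    return (item.width + 2 * clearance <= space.width
            and item.depth + 2 * clearance <= space.depth
            and item.height <= space.height)

# A sofa against an empty alcove whose dimensions the phone camera measured.
sofa = Box(width=2.28, depth=0.95, height=0.83)
alcove = Box(width=2.40, depth=1.10, height=2.40)
print(fits(sofa, alcove, clearance=0.05))  # True, with 5 cm to spare per side
```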

On March 7, AREA published a “3D Mapping Solutions for Enterprise AR” report prepared by a research team from the National Institute for Aviation Research (NIAR) at Wichita State University. The research had two main arms: the primary arm involved surveys and interviews assessing the utility of 3D mapping with AR within enterprises, while the engineering arm investigated 3D mapping with AR software and hardware. AREA explains how this technology can solve business problems across many different industries. The cross-industry use cases identified include navigation, remote assistance, situational awareness, simulation, virtual user interfaces, visualization, maintenance, guidance, collaboration, inspection, training and assembly, with significant implications for engineering and well beyond.

Perhaps the most interesting use for AR, though, is 3D models for simulations. As the report puts it:

“This use case pertains to using Augmented Reality to simulate (within and in interaction with the real world) the insertion or repositioning of things using 3D models. The 3D models can be of weather patterns, energy flows, industrial equipment, infrastructure (such as HVAC) or moving objects in a zone or confined space (e.g., factory), complex processes requiring employees to lift or perform a process with an odd shape, including serious games (overlaps with training), simulation of packing diverse objects within volume (e.g., for shipping). Many simulation use cases overlap with visualization use cases and simulation can be used in skill development (training) use cases.”

AR vs VR

AREA explains that before AR, companies had to rely on VR. The pitfall of VR is that simulations succeed only when users believe they are in the real world and act accordingly, a feat that is difficult to achieve in a fully virtual environment. Not only is it costly and time consuming to develop a VR simulation that incorporates the properties of the real-world environment, but doing so also requires special VR facilities and longer training times. AR solves these problems through 3D mapping, which lets users immerse themselves in a simulation while remaining grounded in the real world. AREA notes that one of the best examples of this mimicry is the rendering of digital shadows cast by the AR object, which makes the object appear truly present in the outside world. In this way, AR can substitute for a real object that users cannot access for training, exploration, design and review purposes.
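AREA's report does not detail how such shadows are computed, but one classic approach is to flatten the object's geometry onto the ground plane along the light direction. A minimal sketch, assuming a directional light and the ground at z = 0:

```python
import numpy as np

def ground_shadow(points: np.ndarray, light_dir: np.ndarray) -> np.ndarray:
    """Project 3D points onto the ground plane z = 0 along a light direction.

    Each point p casts a shadow at p + t * light_dir, where t is chosen
    so the z component lands on the plane: t = -p.z / light_dir.z.
    """
    t = -points[:, 2] / light_dir[2]        # per-point travel along the light ray
    return points + t[:, None] * light_dir  # shadow points, z is exactly 0

# Two corners of a virtual cube hovering 1 m up, lit from overhead at a slant.
corners = np.array([[0.0, 0.0, 1.0], [0.5, 0.0, 1.0]])
light = np.array([1.0, 0.0, -2.0])  # mostly downward, angled along +x
print(ground_shadow(corners, light))
# -> [[0.5 0.  0. ]
#     [1.  0.  0. ]]
```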

Next Steps for AR

Although AR is a compelling solution to many different business problems, it is not without its pitfalls. Interviews with AREA employees and other 3D mapping and AR experts revealed several limitations, primarily centered on scanner properties: low or uncertain precision, low usability and a lack of data interoperability. There is room for future research and development to improve the current technologies so they can be used to their greatest potential.

Precision limitations can be placed into one of three categories: scanning accuracy, geolocation (locating the user in space) and drift (difficulties in anchoring the virtual object in the real environment as the user moves around).

Limitations in usability largely concern environmental factors and noise such as lighting or air quality, which can influence the quality of the resulting scan.

Finally, the data produced by these 3D scanners is often supported only by the vendor's own software and hardware, making it difficult and costly to convert and, in some cases, completely incompatible with other commercial software on the market.
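One widely used vendor-neutral option for point cloud data is the open PLY format, and a minimal ASCII exporter needs nothing beyond the Python standard library. The header lines below follow the public PLY specification; the sample points are made up:

```python
def write_ply(path: str, points: list[tuple[float, float, float]]) -> None:
    """Write points to an ASCII PLY file, a widely supported open format."""
    with open(path, "w") as f:
        f.write("ply\n")
        f.write("format ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\n")
        f.write("property float y\n")
        f.write("property float z\n")
        f.write("end_header\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")

# Export a toy scan so any PLY-aware tool can open it, regardless of vendor.
write_ply("scan.ply", [(0.0, 0.0, 0.0), (1.0, 0.0, 0.5), (0.0, 1.0, 0.5)])
```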

Future research from AREA will aim to overcome these limitations and develop an AR 3D scanning system that can be implemented easily and efficiently across all use cases. That will only be achieved once AR technology has evolved sufficiently in precision and accuracy, can perform reliably outside of controlled, noise-free environments and is designed around an open, vendor-neutral format. To remain competitive in the market, AR companies will need to find unique solutions to these main hurdles of AR 3D mapping.

With recent advances in AR and continuing research, new developments are on the horizon. AR is often complemented by AI, which can simplify the processing of environmental data, sometimes more accurately than a human-made model. As engineers continue to combine AI with AR, they will find ways to leverage the strengths of each technique and integrate them seamlessly into numerous systems, with the end goal of solving the inefficiencies commonly experienced in daily life.