Autonomous Driving – Starting from the Top Down

(Image courtesy of Mentor Graphics.)
It may have taken a century for us to get to this point, but it’s no longer a question of if we’ll have autonomous vehicles, but when. Still, the technology needed to put self-driving cars on the road is not a trivial matter—it involves multiple sensor modalities and huge amounts of data.

One way to approach the engineering challenges of autonomous vehicles is from the bottom up: start with advanced driver assistance systems (ADAS)—like adaptive cruise control (ACC) and lane departure warning systems (LDWS)—and integrate them until you reach the Holy Grail of Level 5 autonomous driving.

However, from a processing point of view, that approach faces a scaling challenge: while the number of sensors grows linearly, the cost, complexity and latency of processing their data grow much faster, because each new sensor brings its own distributed processing whose output then has to be reconciled with all the others.

According to Amin Kashi, director of ADAS & AD at Mentor Graphics, “If you’re trying to scale up to higher levels of autonomy, the problem you have is that the output of the processed data from various sensors is not identical. For example, processed data from a camera is different from processed data from a radar, and trying to fuse data from two different sensors requires a lot of processing power.”
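Kashi's point is easier to see with concrete data structures: a camera's detector typically reports bounding boxes in image coordinates, while a radar reports detections as range, azimuth and radial speed, so an object-level fusion stage has to translate between the two before it can even associate them. The sketch below is a conceptual illustration of that mismatch, not anything from Mentor's stack; all class, field and parameter names are hypothetical.

```python
from dataclasses import dataclass
import math

# Hypothetical object-level outputs from two "smart" sensors, each with its own edge processing.
@dataclass
class CameraObject:
    u_px: float        # horizontal centre of the bounding box, in pixels
    v_px: float        # vertical centre of the bounding box, in pixels
    width_px: float
    height_px: float

@dataclass
class RadarDetection:
    range_m: float
    azimuth_rad: float
    radial_speed_mps: float

def camera_bearing(obj: CameraObject, image_width_px: float, hfov_rad: float) -> float:
    """Approximate bearing of a camera box centre, assuming a simple pinhole model."""
    offset = (obj.u_px - image_width_px / 2.0) / (image_width_px / 2.0)
    return offset * (hfov_rad / 2.0)

def associate(cameras, radars, image_width_px=1920.0,
              hfov_rad=math.radians(60.0), gate_rad=math.radians(3.0)):
    """Naive nearest-bearing association between camera boxes and radar detections."""
    pairs = []
    for cam in cameras:
        bearing = camera_bearing(cam, image_width_px, hfov_rad)
        best = min(radars, key=lambda r: abs(r.azimuth_rad - bearing), default=None)
        if best is not None and abs(best.azimuth_rad - bearing) < gate_rad:
            pairs.append((cam, best))
    return pairs

if __name__ == "__main__":
    cams = [CameraObject(u_px=1100.0, v_px=540.0, width_px=80.0, height_px=60.0)]
    dets = [RadarDetection(range_m=42.0, azimuth_rad=math.radians(4.5), radial_speed_mps=-3.1)]
    for cam, det in associate(cams, dets):
        print(f"camera box at u={cam.u_px} matched radar detection at {det.range_m} m")
```

Every additional sensor multiplies the number of conversions and association checks of this kind, which is the scaling cost the bottom-up approach runs into.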

The alternative is to take a top-down approach to processing: start with a platform that transmits unfiltered data directly from the sensors to a central processing unit. This centralized system reduces the latency between the sensors and the processing, with the added benefit of making the sensors less expensive, since they no longer need individual processing capabilities.
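For contrast, a centralized raw-data pipeline looks more like the sketch below. Again, this is a conceptual illustration with hypothetical names rather than the actual DRS360 interface: sensors simply forward timestamped raw frames, and a single fusion node time-aligns them before any object-level interpretation happens.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical raw frame: a timestamped blob of unfiltered sensor data
# (pixels, radar ADC samples, lidar returns, ...), with no edge interpretation.
@dataclass
class RawFrame:
    sensor_id: str
    timestamp_us: int
    payload: bytes

class CentralFusionUnit:
    """Single processing node that collects raw frames from every sensor and fuses them together."""

    def __init__(self, window_us: int = 50_000):
        self.window_us = window_us
        self._buffer: List[RawFrame] = []

    def ingest(self, frame: RawFrame) -> None:
        # Sensors just forward data; there is no per-sensor processing step before this point.
        self._buffer.append(frame)

    def fusion_window(self, now_us: int) -> List[RawFrame]:
        # Time-align the frames that fall inside the current window; a real system would
        # run raw-data fusion and perception on this aligned set.
        window = [f for f in self._buffer if now_us - f.timestamp_us <= self.window_us]
        self._buffer = [f for f in self._buffer if now_us - f.timestamp_us > self.window_us]
        return window

if __name__ == "__main__":
    unit = CentralFusionUnit()
    unit.ingest(RawFrame("front_camera", timestamp_us=1_000, payload=b"\x00" * 16))
    unit.ingest(RawFrame("front_radar", timestamp_us=1_200, payload=b"\x01" * 8))
    print([f.sensor_id for f in unit.fusion_window(now_us=10_000)])
```

The trade-off is that all of the raw data has to reach one sufficiently powerful processing node fast enough to be fused within its time window, which is exactly what a centralized platform has to be engineered for.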

That’s the idea behind Mentor’s DRS360, an automated driving solution that can capture, fuse and utilize raw data from multiple sensor modalities in real time.

“At the end of the day, it reduces the cost by not having a lot of processing required at the edge nodes,” said Kashi.

The DRS360 platform is engineered to meet the safety, cost, power, thermal and emissions requirements for deployment in ISO 26262 ASIL D-compliant systems. The first-generation DRS360 uses a Xilinx Zynq UltraScale+ MPSoC device and accommodates SoCs and safety controllers based on either x86 or ARM architectures. The result supports fully automated driving within a 100-watt power envelope.

Although Level 5 autonomous driving has yet to be achieved, the DRS360 is designed to be ready for it. “We wanted to see what architecture was needed to achieve the highest levels of automated driving,” said Kashi. “That’s the goal with this platform.”

For more information, visit the Mentor Graphics website.