Nvidia computer as processing hub for self-driving cars

March 20, 2015 // By Christoph Hammerschmidt
The capability to process huge amounts of data, combined with deep learning instead of fixed algorithms, will characterize the computers that eventually control robot cars. Even next-generation driver assistance systems will draw on multiple data sources and apply sensor fusion algorithms to arrive at the right driving decisions. Chipmaker Nvidia now packs the number-crunching capability required for such tasks onto a developer board.
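
As a rough illustration of what such sensor fusion can look like, the following Python sketch fuses noisy range readings from two hypothetical sensors with a one-dimensional Kalman filter. The sensor names, noise figures and readings are invented for the example and are not taken from Nvidia's platform.

```python
# Minimal sensor-fusion illustration: a 1-D Kalman filter that merges noisy
# range readings from two hypothetical sensors (e.g. radar and camera) into
# a single distance estimate. All values here are illustrative assumptions.

def kalman_fuse(measurements, measurement_vars, initial_estimate=0.0,
                initial_var=1e6, process_var=0.5):
    """Fuse a stream of (value, variance) readings into one estimate."""
    estimate, var = initial_estimate, initial_var
    for z, r in zip(measurements, measurement_vars):
        # Predict: the tracked distance may drift between readings.
        var += process_var
        # Update: weight the new reading by its relative certainty.
        gain = var / (var + r)
        estimate += gain * (z - estimate)
        var *= (1.0 - gain)
    return estimate, var

# Interleaved readings of the same obstacle distance in metres:
# radar (low noise variance) alternating with camera (higher variance).
readings  = [25.1, 24.6, 25.0, 25.4, 24.9]
variances = [0.2,  1.5,  0.2,  1.5,  0.2]
dist, var = kalman_fuse(readings, variances)
print(f"fused distance: {dist:.2f} m (variance {var:.3f})")
```

The point of the sketch is the weighting step: readings from the less noisy sensor pull the fused estimate more strongly, which is the basic idea behind combining heterogeneous sensor data.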

The Drive PX computer, initially announced in January at CES, will be available in May to automotive tier ones, research institutes and developers of Advanced Driver Assistance Systems (ADAS). Equipped with two Tegra X1 processors and able to process the video signals of up to twelve cameras simultaneously, the board enables ADAS developers to implement multiple driver assistance functions at the same time, including surround view, pedestrian detection, mirrorless driving, cross-traffic monitoring and driver status monitoring.

Figure 1: Deep learning algorithms enable driver assistance systems to intelligently interpret the vehicle's surroundings - a major prerequisite for automated driving.

According to Nvidia CEO Jen-Hsun Huang, the Drive PX will be able to take the feature set of driver assistance systems to the next level, beyond basic classification and driver alerting tasks. Its sheer computing power and memory capacity will enable it to run innovative and more powerful algorithms for advanced tasks associated with autonomous driving. For example, these algorithms will be able to differentiate a vehicle parked at the curb from one that is about to pull into traffic. "The car is not just sensing, but interpreting what is taking place around it - this is an essential capability for auto-piloted driving", Huang said.
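
Nvidia has not disclosed its networks; purely as a hedged illustration of the kind of image-classification workload such systems run, here is a toy convolutional classifier written in PyTorch. The class labels, layer sizes and input resolution are all assumptions made for the example, not Drive PX specifics.

```python
# Illustrative sketch only: a tiny convolutional classifier of the general
# kind deep-learning ADAS pipelines run. The classes and dimensions below
# are invented for the example.
import torch
import torch.nn as nn

CLASSES = ["parked_car", "car_pulling_out", "pedestrian", "background"]

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                      # 64x64 -> 32x32
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                      # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, len(CLASSES)),
)

frame = torch.randn(1, 3, 64, 64)         # stand-in for one camera crop
logits = model(frame)
print("predicted class:", CLASSES[logits.argmax(dim=1).item()])
```

A trained network of this shape outputs a score per class for each image crop; the "interpreting" Huang describes goes a step further, reasoning about what a detected object is likely to do next.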

To improve surround-view functionality, the system supports advanced structure-from-motion (SFM) as well as improved stitching and image rendering, avoiding the ghosting and warping artefacts that occur when multiple camera images are naively stitched together.
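
To get a feel for the stitching problem on a desktop, one can experiment with OpenCV's general-purpose stitcher, as in the sketch below. This is explicitly not Nvidia's surround-view pipeline, and the camera file names are placeholders.

```python
# Desktop-side sketch of the stitching problem, using OpenCV's generic
# stitcher. Overlapping regions of adjacent cameras are blended, and
# misregistered content blurs or doubles - the ghosting the article mentions.
import cv2

# Assumed file names for four overlapping camera frames.
frames = [cv2.imread(f) for f in
          ("cam_front.jpg", "cam_right.jpg", "cam_rear.jpg", "cam_left.jpg")]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("surround_view.jpg", panorama)
else:
    # Typical failure: too little overlap or too few matching features to
    # register the frames - exactly the registration problem that
    # structure-from-motion cues help to solve.
    print("stitching failed, status code:", status)
```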

Figure 2: The developer board accommodates two Tegra X1 processors

The development board itself boasts some impressive specifications: it offers a throughput of 1.3 gigapixels per second, and its two processors have access to 10 GByte of DRAM. Besides image processing and deep learning algorithms, it handles over-the-air updates, another technology that will be indispensable for future vehicle generations.

Related news:

Piloted driving takes centre stage at Audi's CES presentation

Audi TT - a step towards the software-defined car

More information:

Nvidia blogpost