Verification of Driver Assistance Systems in the Vehicle and in the Laboratory

April 01, 2015 // By Andreas Patzer, Vector Informatik
Driver assistance systems acquire the vehicle's environment via a wide variety of sensors. Warnings to the driver and (semi-)autonomous interventions in the driving situation depend on correct results from the object recognition algorithms. This article addresses the typical challenges that arise in verifying object data and testing the image processing algorithm. The XCP standard enables the necessary high data throughput in measurement and calibration.

Behind the wheel, humans acquire information about their environment via their sensory organs – specifically their eyes and ears. Signal processing in the brain interprets the collected information, decisions are made, and actions are initiated. Decisions might include whether a space on the side of the road is large enough for parking or whether the distance to the car ahead needs to be adjusted. Driver assistance systems (Advanced Driver Assistance Systems or “ADAS”) support the driver in making these decisions, thereby enhancing safety and improving comfort and convenience as well as economy.

Access to Sensor and Algorithm Data

Driver assistance systems must reliably detect the environment, acting as a type of "attentive passenger". Radar, ultrasonic and video sensors are very often used to provide ECUs with information on the driving situation or the vehicle's environment. Complex algorithms process the sensor data to detect objects such as road signs, parked vehicles, other road users, etc., and to initiate actions. To verify the sensor system, it may be sufficient to simply measure the results of the algorithm and compare them to reality. An example here is the distance-measuring radar of an Adaptive Cruise Control system: the sensor detects objects by return reflections of the radar beam, and the ECU supplies distance information for each detected object as coordinates.

In this case, it is not necessary to acquire all of the radar reflections in the sensor. However, all input variables of the algorithm must be measured if the data is being logged for later stimulation in the laboratory, for example. In that scenario, more than 100,000 signals at a data rate of several megabytes per second are not atypical.
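The order of magnitude is easy to check with a back-of-the-envelope calculation. The signal size and sample rate below are illustrative assumptions, not figures from the article:

```c
/* Rough data-rate estimate for logging all algorithm inputs.
 * Assumed (illustrative): 100,000 signals of 4 bytes each,
 * each sampled at 10 Hz. */
static unsigned long bytes_per_second(unsigned long signals,
                                      unsigned long bytes_per_signal,
                                      unsigned long rate_hz)
{
    return signals * bytes_per_signal * rate_hz;
}
```

With these assumed values, 100,000 × 4 bytes × 10 Hz yields 4,000,000 bytes per second, i.e. on the order of the "several megabytes per second" mentioned above.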

Image processing ECUs with video sensors are used for road sign detection systems or lane-keeping assistants. An algorithm analyzes the video images and detects road signs or lane markings. One typical requirement for data processing in the ECU is a high data throughput.
