3D image sensor uses time-of-flight method

December 10, 2015 // By Martin Lass, Infineon Technologies
On the road to autonomous driving, it is imperative to gather comprehensive information about what is happening both inside and around the car. Besides driver state monitoring, capturing 3D data inside the cabin also facilitates entirely new human-machine interface (HMI) concepts.

For advanced driver assistance systems, and on the way to self-driving cars, precise information about the driver's attentiveness and about the situation inside the vehicle is essential. Using a sophisticated 3D camera, the car captures the driver's behaviour and forwards this information to the advanced driver assistance system. If the driver closes their eyes or is not looking straight ahead, the system may trigger an alarm; if the driver fails to respond, it activates the emergency brake assist. At the same time, the 3D camera chip delivers directly measured depth data, so that depth does not first have to be calculated from angular information in a computationally intensive step, as is necessary with a stereo camera, for example. The REAL3 image sensor from Infineon makes it possible to realize a 3D vision system in a very small installation space, for both indoor and outdoor applications.
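
For context, an indirect time-of-flight sensor of this kind typically recovers depth from the phase shift between the emitted and the received modulated infrared light. The following Python sketch illustrates the widely used four-phase reconstruction, assuming a single modulation frequency; the function and parameter names are illustrative and do not reflect the REAL3 device interface.

```python
import math

C = 299_792_458.0  # speed of light in m/s


def tof_depth_from_phases(a0, a1, a2, a3, f_mod):
    """Estimate distance from four correlation samples taken at
    0, 90, 180 and 270 degree phase offsets (illustrative four-phase
    indirect time-of-flight reconstruction).
    """
    # Phase shift between emitted and received modulated light
    phase = math.atan2(a3 - a1, a0 - a2)
    if phase < 0:
        phase += 2 * math.pi  # wrap into [0, 2*pi)

    # Light travels to the object and back, hence the factor 2 in 4*pi
    return (C * phase) / (4 * math.pi * f_mod)


# Example: samples from one pixel with 80 MHz illumination modulation
print(tof_depth_from_phases(0.8, 0.2, 0.1, 0.7, 80e6))
```

With an 80 MHz modulation frequency, the unambiguous range of this scheme is c/(2·f_mod), roughly 1.9 m; practical sensors combine several modulation frequencies to extend it.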

Various technologies have been developed for capturing 3D information. With stereo vision, for example, two standard 2D cameras record the scene from slightly different viewpoints, and the distance (depth) is calculated from the disparity between the two images. This method offers the advantage that low-cost standard image sensors can be used. However, time-consuming mechanical alignment and calibration are required, and highly complex, computationally intensive matching algorithms are necessary. Finally, the method struggles in poor or changing lighting conditions.
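
To make the computational relationship concrete, the depth of a matched point in a rectified stereo pair follows from the classic triangulation relation Z = f·B/d. The Python sketch below shows only this final step and assumes that rectification and pixel matching, which account for most of the computing effort, have already been performed; all names are illustrative.

```python
def stereo_depth(disparity_px, focal_length_px, baseline_m):
    """Depth of a matched point in a rectified stereo pair (Z = f*B/d)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px


# Example: 700 px focal length, 12 cm baseline, 20 px disparity -> 4.2 m
print(stereo_depth(20, 700, 0.12))
```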

Another 3D technology uses structured light: a known pattern is projected onto the scene, and depth is calculated from the way the pattern is distorted by the objects it falls on. Although this method copes well with multi-path interference, it requires a sophisticated camera with dedicated active illumination as well as precise and mechanically stable alignment between the lens and the pattern projector. The calibration effort is correspondingly high. In addition, the reflected pattern is sensitive to optical interference and to scene textures.
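
For comparison with the stereo case, a simplified Python sketch of plane-referenced structured-light triangulation is given below, assuming the pattern shift is measured against a flat calibration plane at known depth; the formula and names are illustrative rather than taken from any specific product.

```python
def structured_light_depth(shift_px, focal_length_px, baseline_m, ref_depth_m):
    """Depth from the lateral shift of a projected pattern feature relative
    to its position on a flat reference plane (camera-projector triangulation).

    A positive shift means the surface lies closer than the reference plane.
    """
    f_b = focal_length_px * baseline_m
    # Disparity relative to the reference plane: shift = f*b*(1/Z - 1/Z0)
    return 1.0 / (1.0 / ref_depth_m + shift_px / f_b)


# Example: 15 px shift, f = 600 px, 8 cm baseline, reference plane at 1 m
print(structured_light_depth(15, 600, 0.08, 1.0))
```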
