Stepping into next-generation ADAS multi-camera architectures

December 14, 2016 // By Thorsten Lorenzen, Texas Instruments
A highly integrated approach to extended synchronization and advanced HDR image quality, enabling automotive surround view and mirror replacement applications.

Introduction

Multi-camera systems are common in the development of surround view and mirror replacement applications. Today's systems can have six or more cameras to observe and identify the scene around the car. In surround view, typically a minimum of four camera streams are stitched together for the best driver experience. This architecture normally interconnects megapixel image sensors with one central applications processor, bridging long distances over single coaxial cables. In addition, the market demands higher pixel precision at a higher dynamic range to further optimize vision algorithms, which requires an additional pixel post-processing step.
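
Such an architecture can be pictured with a short sketch. The C fragment below is a minimal, hypothetical illustration of a four-camera surround-view capture loop; the cam_capture() and stitch_frames() helpers and the frame layout are illustrative assumptions, not part of any real SDK.

```c
/* Minimal sketch of a four-camera surround-view step.
 * cam_capture() and stitch_frames() are hypothetical helpers. */
#include <stdint.h>

#define NUM_CAMERAS 4          /* front, rear, left, right */

typedef struct {
    uint32_t  width, height;   /* e.g. 1280 x 800 for a 1-MP imager */
    uint16_t *pixels;          /* RAW pixel data from the deserializer */
} frame_t;

extern int cam_capture(int cam_id, frame_t *out);
extern int stitch_frames(const frame_t in[NUM_CAMERAS], frame_t *bird_eye);

int surround_view_step(frame_t *bird_eye)
{
    frame_t frames[NUM_CAMERAS];

    /* All four frames must belong to the same exposure instant,
     * which is why stream synchronization matters (see below). */
    for (int cam = 0; cam < NUM_CAMERAS; cam++) {
        if (cam_capture(cam, &frames[cam]) != 0)
            return -1;
    }
    return stitch_frames(frames, bird_eye);
}
```

The essential point is that the stitcher consumes one frame per camera taken at the same instant, which motivates the synchronization features discussed next.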

Next-generation systems therefore feature central clock distribution, video line synchronization and multi-channel pixel processing, providing much better video quality to the driver. In particular, the coaxial cable's back-channel capability plays a significant role. To achieve all of this, chipsets that combine multiple LVDS serializers with a single deserializer hub offer highly integrated solutions. This paper discusses how video stream synchronization can be achieved and how pixel precision can be increased.
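
As a rough illustration, frame synchronization is typically configured once on the deserializer hub, which then forwards sync pulses to every remote serializer over the coax back channel. The register map below is purely hypothetical (placeholder I2C address, register offsets and bit values), intended only to show the shape of such a configuration, not an actual device programming sequence.

```c
/* Sketch of centralized frame sync over the coax back channel.
 * All DES_* constants and i2c_write8() are illustrative placeholders. */
#include <stdint.h>

extern int i2c_write8(uint8_t dev_addr, uint8_t reg, uint8_t val);

#define DESER_HUB_ADDR   0x3D  /* hypothetical I2C address of the hub */
#define DES_FSYNC_PERIOD 0x19  /* hypothetical sync-period register   */
#define DES_FSYNC_CTL    0x18  /* hypothetical frame-sync control reg */

/* Broadcast a frame-sync pulse to every remote serializer so that all
 * image sensors start each frame (and each line) at the same instant. */
int enable_frame_sync(uint8_t period_code)
{
    if (i2c_write8(DESER_HUB_ADDR, DES_FSYNC_PERIOD, period_code) != 0)
        return -1;
    /* 0x01: internally generated sync, forwarded on all back channels */
    return i2c_write8(DESER_HUB_ADDR, DES_FSYNC_CTL, 0x01);
}
```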


Image Sensor Technology

Today’s CMOS color imagers provide 1-megapixel or 2-megapixel resolution on a single chip for automotive applications. They record highly detailed full-resolution images, delivering video streams at up to 60 frames per second (fps). The sensors may use split-pixel HDR technologies, in which the scene information is sampled simultaneously rather than sequentially. This minimizes motion artifacts and delivers superior RAW image quality in demanding and difficult lighting conditions. To extend pixel precision to up to 12 bits of dynamic range, the sensor reads out up to three exposure values per pixel: long (L), short (S) and very short (VS). These are fed into an image signal processor (ISP), which combines them into high dynamic range images.

Currently, image sensor manufacturers are moving the ISP function out of the imager chip to limit power dissipation, which would otherwise degrade image quality. Hence, ISPs will be integrated into the vision processor (SoC) or remain as standalone devices. With advanced HDR capabilities, ISPs can even process multiple camera streams concurrently, limiting the number of ISP devices in the system. Centralized processing of this kind particularly benefits from multiple-stream processing during equalization steps. To accomplish this, the camera streams need an additional indication for channel separation; otherwise, the video streams may get mixed up along the system path.
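
The exposure-combination step can be pictured with a deliberately simplified sketch. A real ISP applies tuned, smoothly blended transfer functions; the hard saturation threshold and the exposure-ratio gains below are illustrative assumptions only.

```c
/* Simplified split-pixel HDR merge: three exposures per pixel -- long (L),
 * short (S), very short (VS) -- combined into one wider-range value.
 * Threshold and gain values are illustrative assumptions. */
#include <stdint.h>

#define SAT_12BIT 4095u   /* saturation level of a 12-bit sample */
#define GAIN_S      16u   /* assumed L/S exposure ratio          */
#define GAIN_VS    256u   /* assumed L/VS exposure ratio         */

/* Pick the longest non-saturated exposure and scale it back to a
 * common radiometric scale, extending the effective dynamic range. */
uint32_t hdr_merge(uint16_t l, uint16_t s, uint16_t vs)
{
    if (l < SAT_12BIT)                  /* long exposure still valid   */
        return l;
    if (s < SAT_12BIT)                  /* fall back to short exposure */
        return (uint32_t)s * GAIN_S;
    return (uint32_t)vs * GAIN_VS;      /* brightest scene regions     */
}
```

Channel separation, in turn, is commonly handled by tagging each stream with an identifier, for example a CSI-2 virtual channel ID, so that a multi-stream ISP can keep the pixels of each camera apart.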
