How to develop an FPGA-based Embedded Vision application for ADAS, series of blogs – Part 1

FPGA: “The winner for low-power, high-performance vision-based applications”

Farhad Fallahlalehzari, Applications Engineer

Is it time we coined the term “Vision for Everything”? Vision-based applications are entering industry after industry. It’s been a few years since the emergence of Embedded Vision, and it is already being used in a wide range of applications including Security, Medical, Smart Homes, Robotics, Transportation, Advanced Driver Assistance Systems (ADAS) and Augmented Reality (AR).

 

This is the first in a series of blogs explaining what you need to know to start designing Embedded Vision applications for ADAS, from choosing the right device and tools to demystifying the vision algorithms used in automotive applications and implementing them in FPGAs.

 

An ADAS consists of two main parts: vision and sensor fusion. The cameras in a smart car provide information for tasks such as object detection, classification and tracking. However, they cannot measure the distance between the vehicle and an obstacle, which is needed to prevent a collision. For that, sensors such as LIDAR or RADAR come into play.

 

In this series of blogs we will focus mainly on the vision side of ADAS; sensor fusion will be covered in the future. The main goal of the series is to provide in-depth knowledge of Aldec’s complete ADAS reference design, which includes 360-Degree Surround View, Driver Drowsiness Detection and Smart Rear View.

 

Device and tool selection
In this section we review the devices popularly used for Embedded Vision and identify the most suitable one, along with the right tools and development board, for starting an ADAS design.

 

CPUs, GPUs, FPGAs, DSPs, ASICs and microcontrollers can all be used for Embedded Vision applications. The fiercest competition, however, is between FPGAs and GPUs, because both offer high-performance image processing; and that battle has always come down to the trade-off between power consumption and performance.

 

Because the hardware, software and algorithms used in Embedded Vision are progressing so rapidly, re-configurability plays an important role, and that is exactly what FPGAs provide. With millions of programmable gates and hundreds of I/O pins, these devices offer a lower-cost, faster-to-deploy acceleration solution than ASICs. They also outperform CPUs, which must time-slice or multi-thread tasks competing for compute resources, because an FPGA can accelerate multiple portions of a computer vision pipeline simultaneously.

 

In a nutshell, the proliferation of vision applications demands high-performance, low-power and re-programmable processing systems like FPGAs. We shouldn’t disregard the ease of programming CPUs and GPUs, though; and SoC devices give us the best of both worlds by combining an FPGA fabric with CPUs.

 

I want to introduce you to the Xilinx All Programmable Zynq™-7000 SoC and Zynq UltraScale+ MPSoC, both of which combine programmable logic (HW) with an ARM-based processing system (SW). I have written a dedicated blog about this architecture, which you can find here. Because of its unique features, the Zynq is an efficient solution for an Embedded Vision project, and particularly for ADAS, since accelerating the vision algorithms in the HW side of the Zynq makes a huge difference to overall speed and power consumption. The Xilinx SDSoC tool enables the user to partition the vision algorithms between SW and HW automatically. Another tool that eases the job is Vivado HLS, a high-level synthesis tool that converts C/C++ code into HDL and so makes life easier for software engineers working with Zynq devices.
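To give a flavor of that flow, here is a minimal sketch, not taken from the Aldec design, of the kind of C++ function Vivado HLS can synthesize into an RTL block: a simple RGB-to-grayscale conversion. The function name, pragmas and image dimensions are illustrative assumptions.

```cpp
#include <stdint.h>

#define WIDTH  1280
#define HEIGHT 720

// Hypothetical HLS kernel: convert a packed RGB image to grayscale.
// Vivado HLS synthesizes this loop into a pipelined hardware block.
void rgb_to_gray(const uint8_t rgb[WIDTH * HEIGHT * 3],
                 uint8_t gray[WIDTH * HEIGHT]) {
#pragma HLS INTERFACE m_axi     port=rgb  offset=slave
#pragma HLS INTERFACE m_axi     port=gray offset=slave
#pragma HLS INTERFACE s_axilite port=return

    for (int i = 0; i < WIDTH * HEIGHT; i++) {
#pragma HLS PIPELINE II=1
        // Integer approximation of the ITU-R BT.601 luma weights.
        uint16_t r = rgb[3 * i];
        uint16_t g = rgb[3 * i + 1];
        uint16_t b = rgb[3 * i + 2];
        gray[i] = (uint8_t)((77 * r + 150 * g + 29 * b) >> 8);
    }
}
```

Note that the same code compiles and runs unchanged on the ARM core (a standard compiler simply ignores the pragmas), which is what makes the SW/HW partitioning decision so flexible.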

 

Choosing the most appropriate Zynq-based embedded development board is crucial, as it helps you get the best out of the device you are working with. In this regard, Aldec has brought 30+ years of experience to the embedded system design arena by producing a family of Zynq-based embedded development boards called TySOM. These cover a wide range of Zynq chips, from the TySOM-2-7Z100 board (which carries the largest Zynq chip in the 7000 family) to the TySOM-3-ZU7EV board (which carries the UltraScale+ MPSoC). As mentioned, we will cover Aldec’s ADAS reference design, which includes 360-degree surround view, Driver Drowsiness Detection and smart rear view functions.

 

Popular ADAS vision functions
Depending on the complexity of the system, an ADAS solution can include one, several or all of the following functions.

 

  • Driver Drowsiness Detection (DDD): based on face and blink detection, this system warns the driver if he or she is showing signs of becoming sleepy.
  • Lane Departure Warning (LDW): based on a lane detection algorithm, this system warns the driver if the vehicle unintentionally drifts out of its lane while the turn signal is off.
  • Pedestrian Detection (PD): based on object detection algorithms, this system tracks pedestrians. If a pedestrian is detected close to the path of the vehicle, it warns the driver and can even apply the brakes.
  • Forward Collision Warning (FCW): also based on an object detection algorithm, this system can detect the presence and movement of several vehicles at once. Note: since the system is camera-based, distance information needs to be provided through RADAR or LIDAR sensors and algorithms.
  • Traffic Sign Recognition (TSR): this application detects traffic signs such as stop/go lights, speed limit signs and construction-zone markers, and warns the driver if his/her driving does not factor in the meaning of the signs (if applicable).
  • Intelligent High Beam (IHB): this system automatically lowers the high (full, in Europe) beam headlights when they are not required or are likely to distract oncoming road users.
  • Smart-Rear camera (SRC): this system warns the driver if an obstacle is detected while the vehicle is reversing.

 

Driver Drowsiness Detection
We have gone through the advantages of using a Zynq™ device as the main processor of our ADAS solution, introduced the tools required and the most appropriate embedded development board to start with, and reviewed the popular ADAS functions.

 

With that introduction in place, for this first blog let’s explore the DDD function, which is also supported in Aldec’s ADAS reference design, in more detail.

 


Figure 1: FMC-ADAS daughter card 

 

This demo can be run on any of our TySOM boards that support FMC daughter cards, including those featuring Zynq-7000 family and Zynq UltraScale+ MPSoC devices. It uses the FMC-ADAS expansion card, an FMC HPC VITA 57.1-2010 compliant daughter card that provides five FPD-Link III interfaces for high-speed cameras and LIDAR sensors, plus peripherals for LIDAR-Lite and ultrasonic sensors, as you can see in Figure 1. The design comprises five main stages:

 

  • Image acquisition: at this stage, 16-bit YUV 4:2:2 images of the driver’s face, at a resolution of 1280x720 and 30 fps, are captured using Video4Linux (a bare-bones capture sketch follows this list). Both the Zynq’s ARM processor and its FPGA fabric are involved: the FPGA side runs a Xilinx ISP (image signal processing) IP core that takes the raw images, interpolates them, applies color balance and noise reduction, and conditions each frame before it is stored.
  • Image pre-processing: after grabbing the images from the camera, and before applying any image processing algorithms, we remove unnecessary data. A color space conversion reduces the frames to grayscale, since full color information is not needed; then histogram equalization is applied to adjust the image intensity and enhance the contrast (a minimal OpenCV sketch of this stage is also shown after the list).
  • Facial analysis: we now have the data required to analyze the face. In this stage, face and eye detection is performed using Pixel Intensity Comparison-based Object detection (PICO). Once the eyes have been detected, blinks are counted to determine whether the driver is drowsy. All the algorithms are written in C/C++ and are partitioned across the Zynq using the SDSoC tool.
  • Decision making: after extracting the required data, the ADAS must decide whether or not to warn the driver. If the driver is showing signs of drowsiness, a signal can be output to an audio device (e.g. a buzzer). Decision making runs on the ARM processor, which is faster and easier to work with for this kind of control logic (a simple blink-rate rule is sketched after the list too).
  • Output on the screen: in this stage, the detection results are displayed at a resolution of 1280x720 and 25 fps, using the HDMI framebuffer output with DRAM.
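First, image acquisition. The snippet below is a heavily simplified Video4Linux2 capture sketch, assuming a single memory-mapped buffer and a camera exposed at the hypothetical node /dev/video0; error handling is omitted for brevity and this is not code from the reference design.

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/videodev2.h>

int main() {
    int fd = open("/dev/video0", O_RDWR);          // hypothetical device node

    v4l2_format fmt{};                             // ask for 1280x720 YUYV (YUV 4:2:2)
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width       = 1280;
    fmt.fmt.pix.height      = 720;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
    ioctl(fd, VIDIOC_S_FMT, &fmt);

    v4l2_requestbuffers req{};                     // one memory-mapped buffer
    req.count  = 1;
    req.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;
    ioctl(fd, VIDIOC_REQBUFS, &req);

    v4l2_buffer buf{};
    buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    buf.index  = 0;
    ioctl(fd, VIDIOC_QUERYBUF, &buf);
    void* frame = mmap(nullptr, buf.length, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, buf.m.offset);

    ioctl(fd, VIDIOC_QBUF, &buf);                  // queue the buffer
    int type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    ioctl(fd, VIDIOC_STREAMON, &type);             // start streaming
    ioctl(fd, VIDIOC_DQBUF, &buf);                 // blocks until a frame arrives

    std::printf("Captured %u bytes of YUYV data\n", buf.bytesused);

    ioctl(fd, VIDIOC_STREAMOFF, &type);
    munmap(frame, buf.length);
    close(fd);
    return 0;
}
```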
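Next, pre-processing. Here is a minimal OpenCV C++ sketch of the grayscale-plus-equalization step, assuming the captured frame has already been wrapped in a BGR cv::Mat; again, it illustrates the idea rather than reproducing the design’s actual code.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Pre-process one captured frame: drop the color information,
// then equalize the histogram to enhance contrast.
cv::Mat preprocess(const cv::Mat& frame_bgr) {
    cv::Mat gray, equalized;
    cv::cvtColor(frame_bgr, gray, cv::COLOR_BGR2GRAY); // color -> grayscale
    cv::equalizeHist(gray, equalized);                 // intensity adjustment
    return equalized;
}
```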
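Finally, decision making. A simple rule might flag drowsiness when too many blinks occur within a sliding time window; the threshold and window length below are purely hypothetical placeholders, not values from the reference design.

```cpp
#include <cstdio>

// Hypothetical drowsiness rule: warn when too many blinks are
// counted inside a sliding one-minute window.
class DrowsinessMonitor {
    int    blink_count_    = 0;
    double window_start_s_ = 0.0;
    static constexpr double kWindowS    = 60.0; // window length, seconds (assumed)
    static constexpr int    kBlinkLimit = 25;   // blinks per window (assumed)

public:
    // Called once per blink detected by the facial-analysis stage,
    // with the current time in seconds. Returns true if the driver
    // should be warned (e.g. by driving a buzzer).
    bool on_blink(double now_s) {
        if (now_s - window_start_s_ > kWindowS) {   // restart the window
            window_start_s_ = now_s;
            blink_count_    = 0;
        }
        if (++blink_count_ > kBlinkLimit) {
            std::puts("DROWSINESS WARNING: sound the buzzer");
            return true;
        }
        return false;
    }
};
```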


Figure 2: Driver Drowsiness Detection processing steps

 

In this blog, we compared the various devices that can be used for embedded vision and concluded that the Zynq FPGA is the most efficient device for meeting the acceleration and power consumption demands. We also introduced a suitable embedded development board, along with the right tools to start designing, reviewed the main ADAS functions and, finally, described Aldec’s driver drowsiness detection application in detail. In the next part of this blog series, the 360-degree surround view of a car will be studied.

Farhad Fallah works as an Application Engineer focusing on Aldec’s Embedded Systems and Hardware Prototyping solutions. As a technical support engineer, Farhad has a deep understanding of developing and debugging embedded system designs using Aldec’s TySOM boards (Xilinx Zynq-based embedded development boards). He is also proficient in FPGA/ASIC digital system design and verification. He received his master’s degree in Electrical and Computer Engineering, concentrating on Embedded Systems and Digital Systems Design, from the University of Nevada, Las Vegas in 2016.
