Automotive Radar Advancements with AI – Advancing Object Detection and Tracking (Part 2)

September 25, 2023
In our previous blog (Automotive Radar Advancements with AI – Part 1), we established the groundwork for understanding the transformative impact of AI on radar signal processing. In this second part, we shift our focus to how AI, particularly deep learning models, is revolutionizing the way vehicles perceive and interact with their dynamic surroundings. This blog also shows how these advancements in object detection and tracking improve driving safety and enhance the capabilities of autonomous driving.
Benefits of using radar in Automotive
In contrast to cameras, radar is more reliable in challenging environmental conditions. Frequency Modulated Continuous Wave (FMCW) radar, operating in the Millimeter Wave (MMW) band, can penetrate fog, smoke, and dust far better than optical sensors. Its wide sweep bandwidth and high carrier frequency also provide fine range resolution and robust range detection.
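To illustrate why FMCW lends itself to robust range detection: the range to a target follows directly from the beat frequency between the transmitted and received chirps. A minimal sketch, with hypothetical chirp parameters chosen for illustration (not taken from any specific sensor):

```python
# FMCW range estimation sketch: the beat frequency f_b between the
# transmitted chirp and its echo is proportional to target range.
def fmcw_range(beat_freq_hz, bandwidth_hz, chirp_duration_s):
    """Range R = c * f_b * T_chirp / (2 * B)."""
    c = 3e8  # speed of light, m/s
    return c * beat_freq_hz * chirp_duration_s / (2 * bandwidth_hz)

# Example: a 4 GHz sweep over 40 us; a 10 MHz beat tone -> 15 m
r = fmcw_range(10e6, 4e9, 40e-6)
```

Note that the range resolution c / (2B) depends only on the sweep bandwidth B; a 4 GHz sweep resolves targets about 3.75 cm apart, which is why wide-bandwidth MMW radar detects range so robustly.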
A few methods to employ radar include:
- Utilizing a Support Vector Machine (SVM) for the classification of radar objects with the objective of distinguishing between cars and pedestrians.
- A neural network (NN) approach can be used to extract features from a Short-Time Fourier Transform (STFT) heatmap of radar signals.
- A statistical Constant False Alarm Rate (CFAR) detection algorithm can be used with a CNN-based VGG-16 classifier.
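For context on the CFAR step mentioned above, a 1-D cell-averaging CFAR (CA-CFAR) detector over a range profile can be sketched as follows; the window sizes and threshold scale below are illustrative, not tuned values:

```python
import numpy as np

def ca_cfar(signal, guard=2, train=8, scale=3.0):
    """Cell-averaging CFAR: a cell under test is declared a detection if it
    exceeds scale * (mean of the training cells around it), where a band of
    guard cells on each side is excluded from the noise estimate."""
    n = len(signal)
    detections = np.zeros(n, dtype=bool)
    for i in range(n):
        lo = max(0, i - guard - train)
        hi = min(n, i + guard + train + 1)
        # Training window: left and right neighbors, excluding guard cells.
        window = np.r_[signal[lo:max(0, i - guard)], signal[i + guard + 1:hi]]
        if window.size and signal[i] > scale * window.mean():
            detections[i] = True
    return detections
```

A classifier such as VGG-16 would then be run only on the cells (or patches) that CFAR flags, rather than on the full heatmap.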
However, the above methods focus on classification tasks, which assume only a limited number of known objects in the scene. They may not be suitable in complex driving scenarios with noisy background reflections, such as those from trees, buildings, and traffic signs, where these methods are prone to generating false positives.
Intelligent Object Detection and Classification
Object detection and classification are pivotal functions in autonomous driving systems, enabling vehicles to comprehend their environments and make informed decisions. Previously, we discussed the unique advantages of radar in automotive applications. In this section, we will explore the innovative solutions driven by AI that are shaping the evolution of object detection and classification through radar.
RODNet – For Intelligent Object Detection and Tracking
RODNet is an innovative and robust radar object detection network designed to identify objects across diverse driving conditions. This technology enables autonomous or assisted driving without the need for a camera or other sensors. It comprises the following:
- Customized modules of CNN: These are implemented to effectively harness the special properties of Radio Frequency (RF) images.
- RF image (Range–Azimuth heatmap): RODNet operates on RF images rather than radar data points (such as point clouds), because RF images retain richer information about object texture, angles, and more.
- Camera-Radar Fusion (CRF) supervision framework: This is used for training the RODNet, taking advantage of the camera-based object detection and 3D localization method facilitated with statistical detection and inference of radar RF images.
- The RF image is first passed to M-Net, which is proposed to combine chirp-level features into frame-level features (refer to Fig 2(d)), where C is the number of filters for temporal convolution, H and W are the height and width of the frame, and n is the number of chirps.
- The second stage is Temporal Deformable Convolution (TDC). Because radar reflections vary over time due to the relative motion between the radar and objects, classical 3D convolution cannot effectively extract temporal features; TDC is used instead (refer to Fig 2(e)), where T is the number of RF frames.
- RODNet then uses a 3D convolutional neural network (3D CNN) based on the hourglass (HG) architecture with skip connections for feature extraction from the RF images.
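To make the RF-image input concrete, a common simplified way to form a range-azimuth heatmap from a raw ADC cube is a range FFT over fast time followed by an angle FFT across the receive array, merging the chirp dimension by averaging. This generic sketch is not RODNet's exact preprocessing; the array shapes and the 64-bin angle FFT are assumptions:

```python
import numpy as np

def range_azimuth_heatmap(adc_cube, n_angle_bins=64):
    """adc_cube: complex array of shape (n_rx, n_chirps, n_samples).
    Returns a (range, azimuth) magnitude heatmap."""
    # Range FFT over fast-time samples -> range bins.
    range_fft = np.fft.fft(adc_cube, axis=2)
    # Angle FFT over the RX antenna axis (zero-padded), centered at broadside.
    angle_fft = np.fft.fftshift(np.fft.fft(range_fft, n=n_angle_bins, axis=0),
                                axes=0)
    # Merge the chirp dimension by averaging magnitudes.
    heatmap = np.abs(angle_fft).mean(axis=1)
    return heatmap.T  # shape: (n_samples, n_angle_bins)
```

In RODNet, this kind of per-chirp information is merged learnably by M-Net rather than by simple averaging, which is part of what the customized CNN modules contribute.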
Why is there a teacher and student architecture?
One of the innovative features of RODNet is its teacher-student architecture. Annotating radar RF images is a complex task for humans, especially when compared to annotating camera data. To address this challenge, a teacher pipeline is used to annotate RF images based on the camera’s image, providing essential ground truth data for training RODNet effectively.
The above figure (Fig 3) shows the teacher pipeline, which combines camera and radar data to estimate objects in the RF image. The student pipeline uses only RF images under the teacher's guidance, applying location-based non-maximum suppression (L-NMS) to produce precise object detection results.
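The cross-modal supervision idea can be sketched in miniature: the teacher derives training targets from the camera view, while the student is updated using only the radar features. Everything below (the toy linear student, the scalar teacher label) is a stand-in for illustration, not RODNet's actual training code:

```python
import numpy as np

def train_student(rf_batch, cam_batch, teacher_label, lr=0.05, epochs=500):
    """Toy cross-modal supervision: the teacher turns each camera observation
    into a pseudo ground-truth target; a linear student learns to predict
    that target from the RF feature vector alone."""
    w = np.zeros(rf_batch.shape[1])
    for _ in range(epochs):
        for rf, cam in zip(rf_batch, cam_batch):
            target = teacher_label(cam)      # teacher: camera -> pseudo label
            pred = rf @ w                    # student sees only the RF input
            w += lr * (target - pred) * rf   # gradient step on squared error
    return w
```

The key property this mirrors is that at inference time the student needs no camera at all: the camera is consumed only by the teacher during training.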
Advantages of RODNet technology:
- RODNet achieves higher accuracy as it is trained with Camera-Radar Fusion (CRF) models, combining radar and camera data for precision.
- RODNet excels in noisy conditions, ensuring reliable performance in challenging scenarios like heavy rain or snow.
- In adverse conditions, RODNet minimizes object detection uncertainty, outperforming traditional camera-based systems and enhancing safety.
The integration of Artificial Intelligence into Automotive Radar technology has paved the way for safer and smarter autonomous driving. As we explore the enhanced capabilities of object detection and classification with innovations like RODNet, we catch a glimpse of a future where vehicles can navigate with unprecedented precision and safety. MulticoreWare’s expertise serves as a guiding force in this transformation.
MulticoreWare’s strengths in Automotive Radar
- Specialized focus in developing advanced algorithms and models for accurately detecting and classifying objects in real-time, crucial for autonomous vehicles.
- A track record of creating AI-powered solutions that enhance the safety and intelligence of autonomous driving systems, reducing accidents and improving efficiency.
- In-depth understanding and proficiency in radar systems, encompassing both hardware and software aspects and enabling us to craft innovative solutions.
- Expertise in signal processing techniques tailored to radar data, ensuring robust and reliable performance in various driving conditions.
- A strong foundation in machine learning, allowing us to develop cutting-edge algorithms for perception and decision-making in autonomous vehicles.
For more information, please contact us at email@example.com