Publication

C-Shenron: A Realistic Radar Simulator for End-to-End Autonomous Driving in CARLA

Satyam Srivastava*, Jerry Li*, Pushkal Mishra*, Kshitiz Bansal, Dinesh Bharadia

IEEE Vehicular Technology Conference (VTC) 2025



Carla Radar vs C-Shenron Radar

The following image compares the radar sensor output of CARLA with that of C-Shenron. The camera view is from inside the ego vehicle, whereas both radar views are in bird's-eye view. As the image shows, the CARLA radar provides only a sparse point cloud, whereas C-Shenron produces a dense range-AoA (angle of arrival) map.

Carla Radar vs C-Shenron Radar figure
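The difference between the two representations can be illustrated with a small sketch. The snippet below rasterizes hypothetical sparse (range, azimuth) detections, the kind of point-cloud-style output CARLA's radar emits, into a dense range-AoA grid. All names, bin counts, and the binary-occupancy fill are illustrative assumptions; C-Shenron's actual simulation models the radar signal itself rather than re-binning point clouds.

```python
import numpy as np

# Hypothetical sparse radar detections as (range_m, azimuth_rad) pairs,
# standing in for CARLA's point-cloud-style radar output.
detections = np.array([
    [12.0, -0.20],
    [25.5,  0.05],
    [40.0,  0.30],
])

def to_range_aoa_map(dets, max_range=50.0, fov=np.pi / 2,
                     n_range_bins=64, n_aoa_bins=64):
    """Rasterize sparse (range, azimuth) detections into a dense
    range-AoA grid. Illustrative only: a realistic simulator models
    reflectivity and the antenna response, not binary occupancy."""
    grid = np.zeros((n_range_bins, n_aoa_bins))
    for r, az in dets:
        ri = int(r / max_range * (n_range_bins - 1))
        ai = int((az + fov / 2) / fov * (n_aoa_bins - 1))
        if 0 <= ri < n_range_bins and 0 <= ai < n_aoa_bins:
            grid[ri, ai] = 1.0
    return grid

ra_map = to_range_aoa_map(detections)
print(ra_map.shape)       # (64, 64)
print(int(ra_map.sum()))  # 3: three sparse hits in a 4096-cell grid
```

The point of the sketch is the sparsity gap: three detections occupy three cells of a 4096-cell grid, while a dense range-AoA map carries signal energy in every bin.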


High Level Implementation

The following diagram illustrates a high level overview of our sensor integration into CARLA and the evaluation framework for End-to-End Driving.


Transfuser++ is a state-of-the-art End-to-End driving model that uses camera and LiDAR sensors for perception and path planning. It is trained on data from an expert driver provided by CARLA, and it predicts the future waypoints/direction and the velocity of the ego vehicle. We substitute the LiDAR input with our integrated C-Shenron radar sensor and re-train multiple models with varying radar views. Our results show that using radar sensors improves the driving score and overall situational awareness of the model, indicating the accuracy of our sensor.
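One way to picture the substitution is that both sensors can be rasterized into the same BEV pseudo-image shape, so the radar map can feed the model's existing LiDAR branch. The sketch below is a minimal assumption-laden illustration: the shapes, function names, and nearest-neighbour resize are ours, not taken from the Transfuser++ code.

```python
import numpy as np

# Assumed input shape for the model's BEV branch (illustrative only).
H, W = 256, 256

def lidar_bev(points, h=H, w=W, extent=32.0):
    """Histogram LiDAR x/y points into a 1-channel BEV grid,
    a common LiDAR encoding for driving models."""
    img = np.zeros((1, h, w))
    for x, y, _z in points:
        i = int((x + extent) / (2 * extent) * (h - 1))
        j = int((y + extent) / (2 * extent) * (w - 1))
        if 0 <= i < h and 0 <= j < w:
            img[0, i, j] += 1.0
    return img

def radar_input(range_aoa_map, h=H, w=W):
    """Resize a dense range-AoA map to the same BEV shape so it can
    stand in for the LiDAR branch input without architecture changes."""
    rm = np.asarray(range_aoa_map)
    ri = np.arange(h) * rm.shape[0] // h  # nearest-neighbour rows
    ci = np.arange(w) * rm.shape[1] // w  # nearest-neighbour cols
    return rm[np.ix_(ri, ci)][None]

lidar = lidar_bev([(1.0, 2.0, 0.5), (-3.0, 4.0, 1.0)])
radar = radar_input(np.random.rand(64, 64))
assert lidar.shape == radar.shape == (1, H, W)
```

Because the two encodings share a shape, the swap is a data-pipeline change rather than a model-architecture change, which matches the re-training setup described above.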


Sensor Views

Comparison of views from the camera, semantic LiDAR, and C-Shenron radar in the CARLA simulator. As in the image above, the camera view is from inside the ego vehicle, whereas the LiDAR and radar views are in bird's-eye view.

Sensor Views figure

Citation


Satyam Srivastava, Jerry Li, Pushkal Mishra, Kshitiz Bansal, and Dinesh Bharadia, "C-Shenron: A Realistic Radar Simulator for End-to-End Autonomous Driving in CARLA," IEEE Vehicular Technology Conference (VTC), 2025.