Dual Radar: A Multi-modal Dataset with Dual 4D Radar for Autonomous Driving

Xinyu Zhang, Li Wang*, Jian Chen, Cheng Fang, Guangqi Yang, Yichen Wang, Lei Yang, Ziying Song, Lin Liu, Xiaofei Zhang, Bin Xu, Zhiwei Li, Qingshan Yang, Jun Li, Zhenlin Zhang, Weida Wang, Shuzhi Sam Ge

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

4D radar offers higher point cloud density and more precise vertical resolution than conventional 3D radar, making it promising for environmental perception in the adverse scenarios of autonomous driving. However, 4D radar is noisier than LiDAR and requires different filtering strategies, which affect point cloud density and noise level. Comparative analyses across different point cloud densities and noise levels are still lacking, mainly because available datasets use only one type of 4D radar, making it difficult to compare different 4D radars in the same scenario. We introduce a novel large-scale multi-modal dataset that captures both types of 4D radar, consisting of 151 sequences, most of which are 20 seconds long, with 10,007 synchronized and annotated frames in total. Our dataset covers a variety of challenging driving scenarios, including multiple road conditions, weather conditions, lighting intensities, and times of day. It supports 3D object detection and tracking as well as multi-modal tasks. We validate the dataset experimentally, providing valuable insights for studying different types of 4D radar.
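The density comparison the abstract motivates can be illustrated with a simple per-frame point count over each radar's point clouds. The sketch below is illustrative only, not the dataset's documented API: it assumes KITTI-style little-endian float32 .bin files (one file per frame per sensor), and the directory names (dual_radar/radar_a, radar_b) and per-point field count (POINT_DIMS) are hypothetical placeholders.

```python
# Minimal sketch: compare average points per frame between two 4D radars.
# Assumptions (not specified in this record): KITTI-style float32 .bin files,
# one per frame per sensor; directory layout and POINT_DIMS are hypothetical.
from pathlib import Path

import numpy as np

POINT_DIMS = 5  # hypothetical layout, e.g. x, y, z, radial velocity, RCS


def load_points(bin_path: Path, dims: int = POINT_DIMS) -> np.ndarray:
    """Read one frame's point cloud as an (N, dims) array."""
    return np.fromfile(bin_path, dtype=np.float32).reshape(-1, dims)


def mean_points_per_frame(sensor_dir: Path) -> float:
    """Average point count across all frames stored for one radar."""
    counts = [load_points(p).shape[0] for p in sorted(sensor_dir.glob("*.bin"))]
    return float(np.mean(counts)) if counts else 0.0


if __name__ == "__main__":
    root = Path("dual_radar")  # hypothetical dataset root
    for sensor in ("radar_a", "radar_b"):  # the two 4D radars; names hypothetical
        avg = mean_points_per_frame(root / sensor)
        print(f"{sensor}: {avg:.1f} points/frame on average")
```

The same per-frame counts could feed a noise-level comparison (e.g., fraction of points removed by a given filter), which is the kind of cross-radar analysis the dataset is designed to enable.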

Original language: English
Article number: 439
Journal: Scientific Data
Volume: 12
Issue: 1
DOI
Publication status: Published - Dec 2025
