
Research Article
A Multi-sensor Information Fusion Method for Autonomous Vehicle Perception System
@INPROCEEDINGS{10.1007/978-3-031-06371-8_40,
  author={Peng Mei and Hamid Reza Karimi and Fei Ma and Shichun Yang and Cong Huang},
  title={A Multi-sensor Information Fusion Method for Autonomous Vehicle Perception System},
  proceedings={Science and Technologies for Smart Cities. 7th EAI International Conference, SmartCity360°, Virtual Event, December 2-4, 2021, Proceedings},
  proceedings_a={SMARTCITY},
  year={2022},
  month={6},
  keywords={Autonomous driving; Sensor data fusion; Lidar and Monocular camera},
  doi={10.1007/978-3-031-06371-8_40}
}
- Peng Mei
- Hamid Reza Karimi
- Fei Ma
- Shichun Yang
- Cong Huang
Year: 2022
SMARTCITY
Springer
DOI: 10.1007/978-3-031-06371-8_40
Abstract
In the context of environmental perception for autonomous vehicles (AVs), this paper establishes a sensor model based on the experimental fusion of lidar and monocular camera data. The fusion algorithm maps three-dimensional space coordinates onto the two-dimensional image plane using both spatial and temporal synchronization. YOLO target recognition and density clustering are then applied to obtain fused data that combine the obstacles' visual and depth information. Experimental results demonstrate the high accuracy of the proposed sensor data fusion algorithm.
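As a rough illustration of the space-synchronization step described in the abstract, the sketch below projects lidar points into the camera image plane with a standard pinhole model. The calibration matrices (R, t, K), the function name, and the sample point cloud are illustrative assumptions, not values or code from the paper.

```python
# Minimal sketch of lidar-to-image projection, assuming a pinhole camera
# model with known lidar-to-camera extrinsics (R, t) and intrinsics (K).
# All numerical values below are placeholders for illustration only.
import numpy as np

def project_lidar_to_image(points_lidar, R, t, K):
    """Map 3-D lidar points (N, 3) to 2-D pixel coordinates and depths."""
    # Transform points from the lidar frame into the camera frame.
    points_cam = points_lidar @ R.T + t            # (N, 3)
    # Keep only points in front of the camera (positive depth).
    points_cam = points_cam[points_cam[:, 2] > 0]
    # Perspective projection with the camera intrinsic matrix.
    pixels_h = points_cam @ K.T                    # homogeneous (N, 3)
    pixels = pixels_h[:, :2] / pixels_h[:, 2:3]    # divide by depth
    depths = points_cam[:, 2]
    return pixels, depths

# Illustrative calibration values (not from the paper).
K = np.array([[700.0,   0.0, 640.0],
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                     # lidar-to-camera rotation
t = np.array([0.0, -0.1, 0.0])    # lidar-to-camera translation (m)

points = np.random.uniform(-10.0, 10.0, size=(100, 3))
pixels, depths = project_lidar_to_image(points, R, t, K)
```

In a full pipeline of the kind the abstract describes, the projected pixels and their depths would then be associated with the bounding boxes produced by the YOLO detector and grouped by density clustering to attach depth to each detected obstacle.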