Intelligent Transport Systems. From Research and Development to the Market Uptake. Third EAI International Conference, INTSYS 2019, Braga, Portugal, December 4–6, 2019

Research Article

Towards Dynamic Monocular Visual Odometry Based on an Event Camera and IMU Sensor

  • @INPROCEEDINGS{10.1007/978-3-030-38822-5_17,
        author={Sherif Mohamed and Mohammad-Hashem Haghbayan and Mohammed Rabah and Jukka Heikkonen and Hannu Tenhunen and Juha Plosila},
        title={Towards Dynamic Monocular Visual Odometry Based on an Event Camera and IMU Sensor},
        proceedings={Intelligent Transport Systems. From Research and Development to the Market Uptake. Third EAI International Conference, INTSYS 2019, Braga, Portugal, December 4--6, 2019},
        proceedings_a={INTSYS},
        year={2020},
        month={1},
        keywords={Event-based camera, Monocular visual odometry, IMU},
        doi={10.1007/978-3-030-38822-5_17}
    }
    
Sherif Mohamed1,*, Mohammad-Hashem Haghbayan1, Mohammed Rabah2, Jukka Heikkonen1, Hannu Tenhunen, Juha Plosila1
  • 1: University of Turku (UTU)
  • 2: Kunsan National University (KNU)
*Contact email: samoha@utu.fi

Abstract

Visual odometry (VO) and visual simultaneous localization and mapping (V-SLAM) have gained a lot of attention in the field of autonomous robots due to the high amount of information per unit cost that vision sensors can provide. One main problem in VO techniques is the large amount of data in a pixelated image, which negatively affects the overall performance of such techniques. An event-based camera, as an alternative to a normal frame-based camera, is a prominent candidate to solve this problem by considering only pixel changes in consecutive events, which can be observed at high temporal resolution. However, processing the event data captured by event-based cameras requires specific algorithms to extract and track features applicable for odometry. We propose a novel approach to process the data of an event-based camera and use it for odometry. It is a hybrid method that combines the abilities of event-based and frame-based cameras to reach a near-optimal solution for VO. Our approach makes two main contributions: (1) using information theory and non-Euclidean geometry to estimate the number of events that should be processed for efficient odometry, and (2) using a normal pixelated frame to determine the location of features in an event-based camera. The obtained experimental results show that our proposed technique can significantly increase performance while keeping the accuracy of pose estimation in an acceptable range.
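To illustrate the kind of event-stream processing the abstract refers to, the sketch below accumulates a batch of events into a 2D "event frame" on which conventional frame-based feature detectors could operate. This is a minimal illustration, not the paper's actual algorithm: the event tuple layout `(x, y, t, polarity)`, the synthetic data, and the function name `accumulate_events` are all assumptions for the example.

```python
import numpy as np

def accumulate_events(events, height, width):
    """Accumulate a batch of events into a 2D event frame.

    events: array of shape (N, 4) with columns (x, y, t, polarity),
    polarity in {-1, +1}. Each pixel sums the signed polarities of
    its events, yielding an image-like representation that standard
    frame-based feature extractors can process.
    """
    frame = np.zeros((height, width), dtype=np.int32)
    xs = events[:, 0].astype(int)
    ys = events[:, 1].astype(int)
    # np.add.at handles repeated (y, x) indices correctly,
    # unlike plain fancy-index assignment.
    np.add.at(frame, (ys, xs), events[:, 3].astype(int))
    return frame

# Synthetic example: 5 events on a 4x4 sensor (hypothetical data).
events = np.array([
    [0, 0, 0.001, +1],
    [1, 0, 0.002, +1],
    [1, 0, 0.003, -1],   # cancels the previous event at (1, 0)
    [3, 2, 0.004, +1],
    [3, 2, 0.005, +1],
])
frame = accumulate_events(events, height=4, width=4)
print(frame[0, 0])  # 1
print(frame[0, 1])  # 0 (opposite polarities cancel)
print(frame[2, 3])  # 2
```

In practice the batch size (how many events to accumulate before processing) is the quantity the paper estimates via information theory; here it is simply the length of the input array.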