Loughborough University
Leicestershire, UK
LE11 3TU
+44 (0)1509 263171

Loughborough University Institutional Repository

Please use this identifier to cite or link to this item: https://dspace.lboro.ac.uk/2134/35651

Title: Robust fusion of LiDAR and wide-angle camera data for autonomous mobile robots
Authors: De Silva, Varuna
Roche, Jamie
Kondoz, Ahmet
Keywords: Sensor data fusion
Depth sensing
Gaussian Process regression
Free space detection
Autonomous vehicles
Assistive robots
Issue Date: 2018
Publisher: MDPI © The Authors
Citation: DE SILVA, V., ROCHE, J. and KONDOZ, A., 2018. Robust fusion of LiDAR and wide-angle camera data for autonomous mobile robots. Sensors, 18 (8), 2730.
Abstract: Autonomous robots that assist humans in day-to-day living tasks are becoming increasingly popular. Autonomous mobile robots operate by sensing and perceiving their surrounding environment to make accurate driving decisions. A combination of several different sensors, such as LiDAR, radar, ultrasound sensors and cameras, is utilized to sense the surrounding environment of autonomous vehicles. These heterogeneous sensors simultaneously capture various physical attributes of the environment. Such multimodality and redundancy of sensing need to be positively utilized for reliable and consistent perception of the environment through sensor data fusion. However, these multimodal sensor data streams differ from each other in many ways, such as temporal and spatial resolution, data format, and geometric alignment. For the subsequent perception algorithms to utilize the diversity offered by multimodal sensing, the data streams need to be spatially, geometrically and temporally aligned with each other. In this paper, we address the problem of fusing the outputs of a Light Detection and Ranging (LiDAR) scanner and a wide-angle monocular image sensor for free space detection. The outputs of the LiDAR scanner and the image sensor are of different spatial resolutions and need to be aligned with each other. A geometrical model is used to spatially align the two sensor outputs, followed by a Gaussian Process (GP) regression-based resolution matching algorithm to interpolate the missing data with quantifiable uncertainty. The results indicate that the proposed sensor data fusion framework significantly aids the subsequent perception steps, as illustrated by the performance improvement of an uncertainty-aware free space detection algorithm.
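The GP-regression resolution matching summarized in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the scan-line azimuth/depth values are made up, and scikit-learn's GaussianProcessRegressor stands in for whatever GP machinery the paper actually uses. The key idea it shows is that interpolating sparse LiDAR depths onto a denser grid yields both a mean estimate and a per-point standard deviation, i.e. the quantifiable uncertainty mentioned above.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical sparse depth samples along one LiDAR scan line:
# a few azimuth positions (degrees) with measured ranges (metres).
azimuth = np.array([0.0, 2.0, 4.0, 6.0, 8.0]).reshape(-1, 1)
depth = np.array([5.0, 5.2, 4.8, 5.1, 5.0])

# RBF kernel models smooth depth variation across the scan line;
# WhiteKernel absorbs sensor measurement noise.
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-2)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(azimuth, depth)

# Interpolate onto a denser grid (standing in for camera-pixel
# resolution); return_std gives the uncertainty of each estimate.
dense = np.linspace(0.0, 8.0, 81).reshape(-1, 1)
mean, std = gp.predict(dense, return_std=True)
```

In a downstream uncertainty-aware free space detector, the `std` values let low-confidence interpolated depths be down-weighted rather than trusted equally with direct measurements.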
Description: This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Sponsor: Authors would like to thank Loughborough University for providing seed funding for the research carried out and for the doctoral studentship of Jamie Roche.
Version: Published
DOI: 10.3390/s18082730
URI: https://dspace.lboro.ac.uk/2134/35651
Publisher Link: https://doi.org/10.3390/s18082730
ISSN: 1424-8220
Appears in Collections:Published Articles (Loughborough University London)

Files associated with this item:

File: sensors-18-02730.pdf
Description: Published version
Size: 7.36 MB
Format: Adobe PDF (View/Open)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.