Towards Autonomous Navigation for Robots with 3D Sensors
This thesis contributes to autonomous navigation for mobile robots at several levels. We discuss different methods for 3D sensing and, in particular, evaluate a time-of-flight range camera. We develop a method to detect and heuristically correct its most detrimental error source: the wrap-around of range measurements.
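To illustrate the idea, a time-of-flight camera reports ranges modulo its unambiguity interval, so a point beyond that interval "wraps around" to a short range. One common heuristic, sketched below, flags pixels whose reflected amplitude is implausibly high or low for the reported range and shifts them by one interval. The inverse-square amplitude model, the threshold, and the constants here are illustrative assumptions, not the thesis's actual method.

```python
import numpy as np

# Sensor-specific unambiguity range, e.g. 7.5 m at 20 MHz modulation (assumed).
UNAMBIGUITY_RANGE = 7.5

def unwrap_ranges(ranges, amplitudes, amp_threshold=0.2):
    """Heuristically correct wrapped-around ToF range measurements.

    A pixel whose amplitude is far below what its (short) reported range
    would predict is assumed to lie one unambiguity interval further away.
    """
    # Reflected amplitude falls off roughly with the square of the distance.
    expected = 1.0 / np.maximum(ranges, 1e-6) ** 2
    wrapped = amplitudes < amp_threshold * expected
    corrected = ranges.copy()
    corrected[wrapped] += UNAMBIGUITY_RANGE
    return corrected
```

Applied to a range image and its co-registered amplitude image, this shifts suspicious pixels outward while leaving plausible measurements untouched.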
We present several 3D point cloud datasets gathered with the 3D sensors presented in the preceding chapter. The datasets cover a wide variety of real-world and staged scenes: indoors and outdoors, underwater and on land, and at small and large scale.
We present a robust and fast short-range obstacle detection algorithm based on the Hough transform for planes in 3D point clouds. The method allows a mobile robot to reliably assess the drivability of the terrain it faces. Experiments with two types of sensors, on indoor and outdoor data, demonstrate the algorithm's performance; processing time typically lies between 5 and 50 ms per frame.
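For orientation, a minimal sketch of the standard accumulator-based Hough transform for planes follows: a plane is parameterized as rho = p . n(theta, phi), the (theta, phi, rho) space is discretized, every point votes for all cells it is consistent with, and the dominant plane is the accumulator maximum. The bin counts and resolutions are arbitrary, and the thesis's variant is more efficient than this brute-force version.

```python
import numpy as np

def hough_planes(points, n_theta=30, n_phi=30, rho_res=0.1, rho_max=5.0):
    """Brute-force Hough transform for planes rho = p . n(theta, phi).

    Returns the unit normal and signed distance of the dominant plane
    in an (N, 3) point cloud. Resolutions are illustrative.
    """
    thetas = np.linspace(0.0, np.pi, n_theta)
    phis = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    n_rho = int(round(2.0 * rho_max / rho_res))
    acc = np.zeros((n_theta, n_phi, n_rho), dtype=np.int64)
    for i, th in enumerate(thetas):
        for j, ph in enumerate(phis):
            # Unit normal for this (theta, phi) cell.
            n = np.array([np.sin(th) * np.cos(ph),
                          np.sin(th) * np.sin(ph),
                          np.cos(th)])
            rho = points @ n                      # signed distances
            k = np.rint((rho + rho_max) / rho_res).astype(int)
            valid = (k >= 0) & (k < n_rho)
            np.add.at(acc, (i, j, k[valid]), 1)   # each point votes once
    i, j, k = np.unravel_index(np.argmax(acc), acc.shape)
    n = np.array([np.sin(thetas[i]) * np.cos(phis[j]),
                  np.sin(thetas[i]) * np.sin(phis[j]),
                  np.cos(thetas[i])])
    return n, -rho_max + k * rho_res
```

For obstacle detection, the dominant plane found this way can then be tested against the expected ground plane to decide drivability.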
We develop the Patch Map data structure for memory-efficient 3D mapping based on planar surfaces extracted from 3D point cloud data. We survey and benchmark different methods of generating planar patches from a point cloud segmented into planar regions, and we benchmark various collision detection methods combined with various roadmap algorithms on synthetic data to identify the most efficient combinations.
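One simple way to generate a bounded planar patch from a segmented planar region, sketched below, is to fit the plane by PCA and bound the points by an in-plane bounding box. This is just one of the possible patch-generation strategies, not necessarily the one the thesis's benchmarks favor, and the dictionary-based patch representation here is illustrative.

```python
import numpy as np

def fit_patch(points):
    """Fit a bounded planar patch to an (N, 3) planar point cluster.

    The plane is fit by PCA: the least-significant singular direction is
    the normal, the two dominant directions span the patch. The patch
    extent is the axis-aligned bounding box in plane coordinates.
    """
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    e1, e2, normal = vt[0], vt[1], vt[2]
    # Project points onto the in-plane axes to bound the patch.
    uv = (points - c) @ np.stack([e1, e2], axis=1)
    return {"centroid": c, "normal": normal,
            "extent": (uv.min(axis=0), uv.max(axis=0))}
```

Storing a patch as centroid, normal, and 2D extent takes constant memory per planar region, which is the source of the Patch Map's memory efficiency compared to keeping the raw points.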
We thoroughly test the Patch Map data structure developed in the preceding chapter on real-world data. We perform roadmap generation with both PRM and RRT, as well as path finding from start to goal based on RRT. We compare our approach to the established representations of triangle meshes and point clouds and find that it performs an order of magnitude faster on 3D LRF data and also considerably better on sonar data. We also show that this speed advantage does not come at the cost of precision.
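To recall the planner used for the start-to-goal queries, a minimal 2D RRT is sketched below: grow a tree from the start, occasionally biasing samples toward the goal, and extract the path by walking parent links once a node lands near the goal. The 2D workspace, the goal bias, and the step size are simplifying assumptions for illustration; the thesis runs RRT against Patch Map, mesh, and point-cloud collision checkers in 3D.

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, goal_bias=0.1,
        max_iters=2000, bounds=(0.0, 10.0)):
    """Minimal 2D RRT sketch. `is_free(p)` is the collision checker;
    in the thesis this is where Patch Map, mesh, or point-cloud
    collision detection plugs in."""
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iters):
        # Sample uniformly, but aim at the goal with probability goal_bias.
        sample = goal if random.random() < goal_bias else \
            (random.uniform(*bounds), random.uniform(*bounds))
        # Extend the nearest tree node one step toward the sample.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        t = min(1.0, step / d) if d > 0 else 0.0
        new = (nx + t * (sample[0] - nx), ny + t * (sample[1] - ny))
        if not is_free(new):
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < step:
            # Goal reached: walk parent links back to the start.
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None
```

The planner only ever touches the map through `is_free`, which is why swapping the underlying representation (Patch Map vs. mesh vs. point cloud) changes the query speed without changing the planning logic.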