Feature Based Localisation and Mapping

Last week Tom Larkworthy of Heriot-Watt University (HWU) visited the University of Girona (UdG) to initiate integration of recent SLAM research onto HWU’s Nessie AUV. At UdG, Sharad Nagappa has been focusing on the development of SLAM using recent advances in multi-object estimation.

What is SLAM?

Simultaneous Localisation and Mapping (SLAM) is a way of improving estimates of vehicle position in unknown environments. We estimate the position of landmarks based on the current vehicle position, and we can then use knowledge of these (stationary) landmarks to infer the position of the vehicle. By relying on a fixed reference, we can reduce the error due to drift.
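To make the idea concrete, here is a minimal sketch (in Python, not our actual implementation) of a one-dimensional SLAM filter: a Kalman filter jointly estimates the vehicle position and a single static landmark, and relative observations of the landmark slow the growth of vehicle-position uncertainty compared with pure dead reckoning. The motion model, noise levels and landmark position are illustrative only.

```python
# Minimal 1-D SLAM sketch (hypothetical, not the PANDORA code): a Kalman
# filter jointly estimates the vehicle position v and one static landmark l.
# State x = [v, l]; odometry moves the vehicle, and relative observations of
# the landmark (z = l - v + noise) limit the drift from dead reckoning.
import numpy as np

rng = np.random.default_rng(0)

F = np.eye(2)                      # landmark is static; vehicle moves via the control input
B = np.array([[1.0], [0.0]])       # odometry acts on the vehicle only
H = np.array([[-1.0, 1.0]])        # observation: landmark position relative to vehicle
Q = np.diag([0.05, 0.0])           # odometry noise (vehicle only)
R = np.array([[0.01]])             # observation noise

x = np.array([0.0, 5.0])           # initial estimate: vehicle at 0, landmark near 5
P = np.diag([0.0, 1.0])            # vehicle initially known, landmark uncertain

true_v, true_l = 0.0, 5.2
for step in range(50):
    u = 0.1                                          # commanded forward motion
    true_v += u + rng.normal(0.0, np.sqrt(Q[0, 0]))  # true vehicle drifts

    # Prediction (dead reckoning): vehicle uncertainty grows.
    x = F @ x + (B @ np.array([u])).ravel()
    P = F @ P @ F.T + Q

    # Update with a relative landmark observation: drift is partly corrected.
    z = (true_l - true_v) + rng.normal(0.0, np.sqrt(R[0, 0]))
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (np.array([z]) - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P

print("estimate [vehicle, landmark]:", x, " truth:", [true_v, true_l])
```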

PHD Filter and SLAM

The Probability Hypothesis Density (PHD) filter is a suboptimal Bayes filter used for multi-object estimation, where we must estimate both the number of objects present and their positions. The PHD filter performs this estimation without requiring explicit data association. We can combine this with SLAM by using the PHD filter to represent the landmarks. More technically, this forms a single cluster process, with the vehicle position as the parent state and the landmarks as the daughter states conditioned on the vehicle position. This formulation is a form of feature-based SLAM, since we approximate landmarks as point features.
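As a rough illustration of the map side of the filter, the sketch below implements a single Gaussian-mixture PHD measurement update for a static landmark map, assuming a linear observation model and a known vehicle pose for that scan. The full single-cluster formulation additionally maintains a distribution over vehicle trajectories, which is omitted here, and the function and parameter names are our own for illustration.

```python
# Sketch of one Gaussian-mixture PHD update for a static landmark map
# (simplified: known vehicle pose, linear observation model; the single
# cluster PHD SLAM filter described above also estimates the vehicle state).
# Each map component is a tuple (weight w, mean m, covariance P).
import numpy as np
from scipy.stats import multivariate_normal

def gm_phd_map_update(components, measurements, H, R, p_detect, clutter_density):
    """One PHD measurement update over a list of (w, m, P) map components."""
    updated = []

    # Missed-detection terms: every component survives with reduced weight.
    for w, m, P in components:
        updated.append(((1.0 - p_detect) * w, m, P))

    # Detection terms: one updated copy of every component per measurement.
    for z in measurements:
        detections = []
        for w, m, P in components:
            eta = H @ m                       # predicted measurement
            S = H @ P @ H.T + R               # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
            likelihood = multivariate_normal.pdf(z, mean=eta, cov=S)
            detections.append((p_detect * w * likelihood,
                               m + K @ (z - eta),
                               (np.eye(len(m)) - K @ H) @ P))
        # Normalise detection weights against clutter and the other components.
        total = clutter_density + sum(w for w, _, _ in detections)
        updated.extend((w / total, m, P) for w, m, P in detections)

    return updated

# Toy usage: two landmark hypotheses, two point detections in the x-y plane.
components = [(0.9, np.array([2.0, 1.0]), np.eye(2)),
              (0.5, np.array([6.0, 3.0]), 2.0 * np.eye(2))]
measurements = [np.array([2.1, 0.9]), np.array([5.7, 3.2])]
new_map = gm_phd_map_update(components, measurements,
                            H=np.eye(2), R=0.1 * np.eye(2),
                            p_detect=0.9, clutter_density=1e-3)
print("expected number of landmarks:", sum(w for w, _, _ in new_map))
```

Note that the weights no longer need to sum to one: their sum is the expected number of landmarks in the map, which is how the filter estimates landmark count without explicit data association.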

Figure: Simulation of SLAM with a combination of detected and undetected landmarks

Detecting and Estimating Map Features

The PHD SLAM formulation only relies on a set of point observations. The algorithm does not change depending on whether we are using sonar or vision. Consequently, this offers the potential to combine these two sources using a single technique – as long as we can detect useful features from the sensors! Currently, we are relying on image feature extractors such as SURF and ORB to detect features from our stereo camera. In the coming months we will consider features from the forward-looking sonar as well as applying PHD SLAM to real data.
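As an example of the kind of feature extraction step involved, the sketch below detects and matches ORB features between a pair of stereo images using OpenCV. The file names and parameters are placeholders rather than our actual pipeline; the matched keypoints are the sort of point detections that would be triangulated and fed to the PHD SLAM filter.

```python
# Sketch of ORB feature extraction and left/right matching with OpenCV
# (file names and parameters are placeholders, not the PANDORA pipeline).
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp_left, desc_left = orb.detectAndCompute(left, None)
kp_right, desc_right = orb.detectAndCompute(right, None)

# Brute-force Hamming matching suits ORB's binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(desc_left, desc_right), key=lambda m: m.distance)

# Matched keypoints give pixel disparities that can be triangulated into the
# 3-D point observations the PHD SLAM filter takes as input.
for m in matches[:10]:
    u_left = kp_left[m.queryIdx].pt
    u_right = kp_right[m.trainIdx].pt
    print("disparity:", u_left[0] - u_right[0])
```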

Challenges For PANDORA

Computational resources are particularly constrained on AUVs, and SLAM algorithms are notoriously computationally intensive. One option available to land robotics is the use of CUDA computing architectures to brute-force around the problem, but in the underwater domain there are no suitable embedded CUDA systems. One big challenge for integration in PANDORA is therefore adapting cutting-edge SLAM algorithms to run on our embedded systems.
Another difficulty associated with the underwater domain is combining SLAM with sonar data. Standard forward-looking sonars are unable to localise accurately in the depth dimension, so observations are underconstrained. Furthermore, sonar pings lack the reliable high-frequency content that optical images have, which means that common feature extractors, such as SIFT, detect little or nothing in sonar data. In PANDORA we will be using next-generation sonars to get better sonar data into the SLAM system, and developing new feature detectors that complement SLAM in the underwater domain.



