Author archive - tom.larkworthy

Postdoc opening in Machine Learning for Robotics

The Department of Advanced Robotics at the Italian Institute of Technology (an English-language research institute) is seeking to appoint a well-motivated full-time postdoctoral researcher in the area of machine learning applied to robotics in general, and in particular to Autonomous Underwater Vehicles (AUV).

The successful candidate will join an ongoing research project funded by the European Commission under FP7 in the category Cognitive Systems and Robotics called “PANDORA” (Persistent Autonomy through learNing, aDaptation, Observation and ReplAnning) which started in January 2012. The project is a collaboration of five leading universities and institutes in Europe: Heriot Watt University (UK), Italian Institute of Technology (Italy), University of Girona (Spain), King’s College London (UK), and National Technical University of Athens (Greece). Details about the project can be found at:

The accepted candidate will contribute to the development and experimental validation of novel reinforcement learning and imitation learning algorithms for robot control, as well as their specific application to autonomous underwater vehicles. The research will be conducted at the Department of Advanced Robotics within the “Learning and Interaction Group” with project leader Dr. Petar Kormushev.

The research work will include conducting experiments with two different AUVs (Girona 500 and Nessie V) in water tanks in Spain and UK in collaboration with the other project partners. The developed machine learning algorithms can also be applied to other robots available at IIT, such as the compliant humanoid robot COMAN, the hydraulic quadruped robot HyQ, the humanoid robot iCub, two Barrett WAM manipulator arms, and a KUKA LWR arm robot.

Application Requirements:

  • PhD degree in Computer Science, Mathematics or Engineering
  • Excellent publication record
  • Strong competencies in: machine learning, reinforcement learning, imitation learning
  • Good programming skills, preferably in MATLAB and C/C++
  • Experience in robot control and ROS is a plus

International applications are encouraged. The successful candidate will be offered a fixed-term project collaboration contract for the remaining duration of the project (due to end in December 2014), with a highly competitive salary commensurate with qualifications and experience. The expected starting date is as soon as possible, preferably before September 1st, 2013.

Application Procedure:

To apply, please send a detailed CV, a list of publications, a statement of research interests and plans, degree certificates, transcripts of grades, the names of at least two referees, and other supporting materials such as reference letters to Dr. Petar Kormushev (petar.kormushev(at)), quoting [PANDORA-PostDoc] in the email subject. For consideration, please apply by June 21st, 2013.


Robust Control and Wall Detection Experiment on Nessie VI

Pandora partners from NTUA visited Heriot-Watt (HW) in January 2013 to try their robust model-based control system on Nessie VI. In addition to testing and tuning the algorithm, which provides 5-DOF waypoint control of the vehicle, HW took the opportunity to test newly developed sonar analysis software. The overall trial objective was to integrate independently developed systems into a coherent whole: the new sonar wall-pose estimator was connected to the new NTUA control system to create a wall-following behaviour.

HW has a large 20x20x7 m wave tank, which was used for the experiment. This facility can generate waves, and we had hoped these could test the robustness of the controller under current disturbances. However, the wave energy is concentrated largely at the surface of the tank, so it is questionable whether this was a good test of such disturbances. Nevertheless, control and integration testing was a success, with both the NTUA and HW teams achieving all their goals for the 5-day period of wave tank testing.

You can see the final day of results for yourself in the video. The AUV is extremely steady when performing a wall-following routine, and is also able to perform rapid movements between set waypoints. In the video some ringing is observed, particularly when the AUV is commanded to perform extreme pitches; pitch is the most unstable DOF on the AUV. We hope that with further tuning we can improve the control, but the aim of the experiment was to get the software components talking correctly within the larger system, rather than absolute performance.

Feature Based Localisation and Mapping

Last week Tom Larkworthy of Heriot-Watt University (HWU) visited the University of Girona (UdG) to initiate integration of recent SLAM research onto HWU’s Nessie AUV. At UdG, Sharad Nagappa has been focusing on development of SLAM using recent advances in the field of multi-object estimation.

What is SLAM?

Simultaneous Localisation and Mapping (SLAM) is a way of improving estimates of vehicle position in unknown environments. We estimate the position of landmarks based on the current vehicle position, and we can then use knowledge of these (stationary) landmarks to infer the position of the vehicle. By relying on a fixed reference, we can reduce the error due to drift.
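To make this concrete, here is a minimal one-dimensional sketch in pure Python (the landmark position, motion sequence and noise levels are all invented for illustration): a dead-reckoned pose estimate drifts with every step, while re-observing a single mapped landmark keeps the error bounded.

```python
import random

def dead_reckon(moves, odo_noise=0.1, seed=0):
    """Integrate noisy odometry only; the pose error grows with every step."""
    rng = random.Random(seed)
    x_true, x_est, path = 0.0, 0.0, []
    for m in moves:
        x_true += m
        x_est += m + rng.gauss(0.0, odo_noise)  # drifting odometry
        path.append((x_true, x_est))
    return path

def slam_like(moves, landmark=5.0, odo_noise=0.1, meas_noise=0.01, seed=0):
    """Same walk, but every step re-observes one stationary landmark.

    The landmark is mapped from the first (uncertain) pose estimate; from
    then on the fixed reference corrects the pose, so the error stays
    bounded by the initial mapping error instead of accumulating."""
    rng = random.Random(seed)
    x_true, x_est, lm_est, path = 0.0, 0.0, None, []
    for m in moves:
        x_true += m
        x_est += m + rng.gauss(0.0, odo_noise)
        r = (landmark - x_true) + rng.gauss(0.0, meas_noise)  # range measurement
        if lm_est is None:
            lm_est = x_est + r        # map the landmark on first sight
        else:
            x_est = lm_est - r        # the fixed reference corrects the pose
        path.append((x_true, x_est))
    return path

moves = [1.0] * 20
drift_err = abs(dead_reckon(moves)[-1][1] - dead_reckon(moves)[-1][0])
slam_err = abs(slam_like(moves)[-1][1] - slam_like(moves)[-1][0])
```

A real SLAM system estimates vehicle and landmarks jointly with full covariances; this toy only shows why a stationary reference bounds drift.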

PHD Filter and SLAM

The Probability Hypothesis Density (PHD) filter is a suboptimal Bayes filter used for multi-object estimation, where we must estimate both the number of objects and their positions. The PHD filter performs this while eliminating the need for explicit data association. We can combine this with SLAM by using the PHD filter to represent the landmarks. More technically, this forms a single-cluster process, with the vehicle position as the parent state and the landmarks as daughter states conditioned on the vehicle position. This formulation is a form of feature-based SLAM, since we approximate landmarks as point features.
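As a rough illustration of the measurement update, here is a toy one-dimensional PHD filter evaluated on a grid (the detection probability, clutter rate and Gaussian measurement model are invented for the example; a real implementation would use a Gaussian-mixture or particle representation). Note how every measurement updates every grid cell, with no data association step.

```python
import math

def phd_update(intensity, cells, measurements, p_d=0.9, sigma=0.5, clutter=0.01):
    """One measurement update of a PHD filter on a 1-D grid.

    `intensity` is the PHD evaluated at grid points `cells`; its sum times
    the cell width approximates the expected number of landmarks."""
    dx = cells[1] - cells[0]
    def g(z, x):  # Gaussian measurement likelihood
        return math.exp(-0.5 * ((z - x) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    # missed-detection term
    updated = [(1.0 - p_d) * v for v in intensity]
    # each measurement re-weights the whole intensity (no data association)
    for z in measurements:
        denom = clutter + sum(p_d * g(z, x) * v for x, v in zip(cells, intensity)) * dx
        for i, (x, v) in enumerate(zip(cells, intensity)):
            updated[i] += p_d * g(z, x) * v / denom
    return updated

# uniform prior on [0, 10] expecting about 2 landmarks
cells = [i * 0.1 for i in range(101)]
prior = [2.0 / 10.0] * len(cells)
post = phd_update(prior, cells, measurements=[3.0, 7.0])
expected_n = sum(post) * 0.1  # estimated number of landmarks
```

After the update the intensity peaks near the two measurements, and its integral still estimates roughly two landmarks.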

Figure: Simulation of SLAM with a combination of detected and undetected landmarks

Detecting and Estimating Map Features

The PHD SLAM formulation relies only on a set of point observations. The algorithm does not change depending on whether we are using sonar or vision. Consequently, this offers the potential to combine these two sources using a single technique, as long as we can detect useful features from the sensors! Currently, we are relying on image feature extractors such as SURF and ORB to detect features from our stereo camera. In the coming months we will consider features from the forward-looking sonar as well as apply PHD SLAM to real data.

Challenges For PANDORA

Computational resources are particularly constrained on AUVs, and SLAM algorithms are notoriously computationally intensive. One option available to land robotics is the use of CUDA computing architectures to brute-force around the problem, but for the underwater domain there are no suitable embedded CUDA systems. Therefore one big challenge for integration in PANDORA is adapting cutting-edge SLAM algorithms to run on our embedded systems.

Another difficulty associated with the underwater domain is combining SLAM with sonar data. Standard forward-looking sonars are unable to localise accurately in the depth dimension, so observations are underconstrained. Furthermore, sonar pings do not have the reliable high-frequency components that optical images do, which means that common feature extractors, such as SIFT, do not see anything in sonar data. In PANDORA we will be using next-generation sonars to get better sonar data into the SLAM system, and developing new feature detectors that better complement SLAM in the underwater domain.

Learning algorithms for improved AUV control

The IIT team carried out a series of experiments on adaptive control. The aim of this work was to wrap a given controller in a learning layer, able to make corrections to the controller’s output and adapt it to the environment. Even robust controllers, such as the ones being developed in this project by NTUA, may have to face unexpected environmental conditions. Such events can make the controllers less effective or, in the most severe cases, unable to perform the task. The learning layer monitors the agent’s performance, and explores possible corrections should the performance become lower than expected.

The most important component of the learning layer is the learning algorithm, which searches the parameter space for a parameter vector that maximizes the agent’s performance. The parameters come from a class of parametric functions selected to represent the corrections to the control actions; the output of the parametric function is added to the given controller’s output as a correction. We devised a learning algorithm to perform iterative optimization of the parameters, based on a series of experiments in the UWSim simulator of Girona500.
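A minimal sketch of this kind of iterative, performance-driven parameter search is shown below, written as a simple (1+1) evolution strategy on a toy performance function. This is not the actual algorithm used in the experiments; the target parameters and scoring function are invented for illustration.

```python
import random

def learn_correction(performance, dim=2, iters=200, step=0.5, seed=1):
    """Iteratively search a correction-parameter space.

    `performance(theta)` scores one simulated trial with correction
    parameters `theta`; a perturbation is kept only when it improves
    the score."""
    rng = random.Random(seed)
    theta = [0.0] * dim          # start with no correction
    best = performance(theta)
    for _ in range(iters):
        candidate = [t + rng.gauss(0.0, step) for t in theta]
        score = performance(candidate)
        if score > best:         # keep the perturbation only if it helps
            theta, best = candidate, score
    return theta, best

# toy performance surface: the ideal correction parameters are (1.0, -2.0)
target = (1.0, -2.0)
theta, best = learn_correction(lambda th: -sum((a - b) ** 2 for a, b in zip(th, target)))
```

In the real system the performance score would come from a simulated trial in UWSim rather than a closed-form function.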

We simulated a very strong current, stronger than the thrusters. The controller we used was a simple PD controller. The task was to reach an object on the seabed and hover less than 0.5 m above it for as long as possible, in order to take pictures. Since such a controller only reacts to the current once the vehicle is already being carried away, it cannot perform the task by itself. The agent attempted to reach the target object over several trials, returning to the initial location outside the current between attempts. It managed to correct the given controller by navigating out of the current for as long as possible, and then resisting the current at full power in the last part of the trajectory.
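To see why a pure PD controller fails against a current stronger than the thrusters, consider this minimal one-dimensional simulation (the gains, thrust limit and current strength are all invented for illustration):

```python
def simulate_pd(kp=4.0, kd=3.0, thrust_max=1.0, current=1.5,
                target=0.0, dt=0.05, steps=400):
    """Hover at `target` in 1-D under a constant current disturbance.

    The PD law only reacts once the vehicle is displaced, and with thrust
    saturated at `thrust_max` it cannot cancel a stronger current."""
    x, v = 0.0, 0.0
    for _ in range(steps):
        u = kp * (target - x) - kd * v              # PD control law
        u = max(-thrust_max, min(thrust_max, u))    # thruster saturation
        v += (u + current) * dt                     # net acceleration
        x += v * dt
    return x

carried_away = simulate_pd()               # current exceeds max thrust
holds_station = simulate_pd(current=0.5)   # weaker current: small offset only
```

With the current stronger than the saturated thrust the vehicle is carried steadily away from the target, which is exactly the situation the learning layer must anticipate rather than merely react to.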

Dual Head P900X BlueView Sonar data

We wanted to get a feel for the data produced by the BlueView P900 sonar, so last week we ran experiments in our underwater tank. This will both inform us as to whether the BlueView technology is a good match for the inspection task, and provide a ground-truth sonar dataset for preliminary sonar PHD SLAM work.
At Heriot-Watt we have a 3x4x2 m water tank integrated with a Cartesian robot. This allows us to precisely (to 1 mm) and autonomously position the sonar head and gather data. We also have a pan-and-tilt unit that gives us a total of 5 degrees of freedom with which to position the sonar head. It should be noted that the pan-and-tilt head has to be positioned manually, which means it is not as precisely controlled as the XYZ axes. Furthermore, the water tank is not particularly big, which means the data is heavily corrupted by multi-path reflections and near-field anomalies, but the advantage of knowing exactly where the head is during the data-gathering procedure outweighs these drawbacks (especially if you are trying to train a SLAM system).
We ran the experiments by moving the head in a lawnmower pattern in the X-Y plane, three times for each Z level (0, 5 cm, 10 cm). A data collection “run” consisted of two data streams, one for each head of the sonar (mounted horizontally and vertically), for a fixed setting of the pan-and-tilt unit. We performed 7 runs in total for different pan and tilt settings. The video included here is of a single run with the pan set to -10 degrees (sensor nearly facing straight ahead) and the tilt at 45 degrees (sensor facing forwards and towards the tank floor). In order to produce the YouTube video, several video codecs were used sequentially, so the quality has degraded somewhat; furthermore, I was unable to sync the horizontal and vertical videos (they start together but drift apart). The video does give a flavour of the type of data coming from the sensor, but for any serious work you should use the raw data (5 GB), available via:
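For reference, the lawnmower trajectory can be sketched as a simple waypoint generator. The tank dimensions and row spacing below are illustrative; the Z levels match the three used in the trial.

```python
def lawnmower(x_max, y_max, spacing, z_levels, passes=1):
    """Generate Cartesian-robot waypoints for a lawnmower survey.

    Sweeps the X axis back and forth while stepping in Y, repeated
    `passes` times at every Z level."""
    n_rows = int(y_max / spacing) + 1
    waypoints = []
    for z in z_levels:
        for _ in range(passes):
            for row in range(n_rows):
                y = row * spacing
                # alternate sweep direction on each row
                xs = (0.0, x_max) if row % 2 == 0 else (x_max, 0.0)
                for x in xs:
                    waypoints.append((x, y, z))
    return waypoints

# three passes per Z level, as in the trial (units in metres)
wps = lawnmower(x_max=3.0, y_max=2.0, spacing=0.5,
                z_levels=[0.0, 0.05, 0.10], passes=3)
```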

git clone git://

Raw data is saved in BlueView’s proprietary data format, “.son”. You can use the free viewer downloadable from BlueView’s website, ProViewer 3.5, to view these files. Windows only, I am afraid (very annoying). That software can export the .son data to a few different video formats, although I had difficulty getting many programs to read the exported data. More technical information about the experiments is in the repository.

Tom Larkworthy

Pandora in The Engineer


The Engineer is a UK trade magazine for engineering and technology. They recently published an article featuring the Pandora project. It’s interesting to see which details of a project most excite sources close to industry. In this case, the article focussed on fault tolerance. This aligns well with our high-level goals in Pandora, and with what we believe to be important for AUV adoption in industry. The article confirms that we are on the right path to producing a technology that will have real value to the world.



Scientists from a leading university in Scotland are heading up a pan-European project to create the most advanced, autonomous, cognitive robot which could help to dramatically reduce the cost of underwater monitoring operations and maintenance within the oil and gas industry.
The team from the Ocean Systems Laboratory (OSL) at Heriot-Watt University is designing a new approach to computational cognition for use in Autonomous Underwater Vehicles (AUVs), with the aim of significantly improving the inspection, repair and maintenance reliability of vehicles used for underwater monitoring.
The system, named PANDORA: Persistent Autonomy through learNing, aDaptation, Observation and Re-plAnning will be trialed on three different AUVs in Scotland and Spain covering in-lab and deep water conditions, to test the system on vehicles and monitor their ability to operate in ‘real’ environments, overcome challenges, accommodate hardware failures and seek alternative missions when idle.
Following the Deepwater Horizon crisis in 2010, calls were made to dramatically improve the type and intelligence of vehicles used for underwater inspection and intervention, to reduce the opportunity for events of that scale to reoccur.
The European Commission issued a call for ideas on ways to increase the thinking capacity of robots supported by the provision of significant funding for viable projects. Identifying the opportunity to put their expertise to the test, the team of scientists, headed up by Professor David Lane, Founder of SeeByte, a Heriot-Watt spin out company, applied to the Commission with a comprehensive, three year research plan to create and develop Pandora for global commercial use. Their plan was the highest praised out of all responses and the team was duly awarded €2.8M (£2.3M).
Professor Lane explained the background to the project: “The issue with autonomous robots is that they are not very good at being autonomous. They often get stuck or ask for help and generally only succeed in familiar environments, when carrying out simple tasks. Over the next three years, our challenge is to develop a computational programme which will enable robots to recognise failure and have the intelligence to respond to it.
“We will develop and evaluate new computational methods to make human-built robots persistently autonomous, significantly reducing the frequency of assistance requests. This is an exceptionally exciting time for us and we are delighted with the response we had from the European Commission, which has allowed us to progress with our research.”
Libor Král, Head of Unit, Cognitive Systems and Robotics, DG Information Society and Media, European Commission, said: “PANDORA is a particularly exciting robotics project undertaken by top European experts.
“The researchers have identified a real issue in an underwater environment where cutting-edge technology can help solve challenging problems. The European Commission is delighted to be supporting this latest addition to its portfolio of over 100 projects, within the EU’s research seventh framework programme.”
Three core themes, tailored for the underwater environment and working synergistically together, will be explored over the course of the project. These are:

  • Structure inspection using sonar and video
  • Cleaning marine growth using water jets
  • Finding, grasping and turning valves
The project is being run in partnership with five universities across Europe – Instituto Italiano di Tecnologia, University of Girona, King’s College London and National Technical University of Athens, with steering committee members from BP and SubSea7.

More information can be found at

For further information, please contact Lally Cowell, Associate Communications Director at MMM on T: 0141 221 9041 / E:

Ocean System Lab’s Cartesian Robot

This is one of our local testing assets at Heriot-Watt. It is a 3 m by 2 m pool, 2 m deep, with a Cartesian robot suspended above. We will be using this asset to generate training data for our vision/sonar inspection tasks. The robot allows us to precisely position sensors in the water, providing us with positional ground truth for testing localisation algorithms. The pool is clearly not large enough to run missions in, but it is very useful to have a test tank in which to exercise our robot, NESSIE V, before we undertake serious missions away from the University.

We are currently making the robot compatible with ROS, so that the same software that moves the real AUV can be used to control the Cartesian robot’s position (although the Cartesian robot has only 3 DOF versus the AUV’s 5 DOF).
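A minimal sketch of the kind of mapping involved, assuming we simply drop the rotational components of an AUV waypoint and clamp the translation to the tank workspace (the function name and axis limits are invented for illustration):

```python
def to_cartesian(waypoint, limits=(3.0, 2.0, 2.0)):
    """Map a 5-DOF AUV waypoint (x, y, z, pitch, yaw) onto the 3-DOF
    Cartesian robot: discard the rotations (the pan-and-tilt head is set
    by hand) and clamp the translation to the tank workspace (metres)."""
    x, y, z = waypoint[:3]
    return tuple(min(max(v, 0.0), lim) for v, lim in zip((x, y, z), limits))

# a 5-DOF waypoint inside the workspace passes through unchanged
inside = to_cartesian((1.0, 0.5, 1.5, 0.2, -0.1))
# one outside the workspace is clamped to the tank boundary
clamped = to_cartesian((4.0, -1.0, 1.0, 0.0, 0.0))
```

The real ROS interface will of course involve message types and the robot driver; this only illustrates the DOF reduction.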