How can we, the robotics community, continue to drive innovation? One solution is to create robots and platforms that intelligently adapt to the data they collect, the environments they are deployed in, and the processes they fulfill. By learning each step of the way, robots should be able to autonomously evolve their own functions and algorithms, perhaps even better and faster than a human could.

Adapting on the Fly

A team from the Autonomous Systems Lab at ETH Zurich, a public research university in Switzerland, is dedicated to creating robots and intelligent systems that can do just that: autonomously operate in complex and diverse environments. Using a Ridgeback as a base equipped with a Franka Emika Panda manipulator arm (lovingly called RoyalPanda, i.e. Ridgeback + Panda), the team is experimenting with a series of scenarios to develop best practices for how a robot could “learn”. They focus on the mechatronic design and control of systems that autonomously adapt to different situations and cope with uncertain, dynamic everyday environments. They are most interested in developing robot concepts through real-world testing, whether on land, in the air, or on water. In their pursuit of adding intelligence to autonomous navigation, they work on novel methods and tools for perception, abstraction, mapping, and path planning.

 

The research team is headed by a group of experienced researchers, engineers, and professors. The goal of their project is to explore the possibilities and limitations of using Reinforcement Learning to train a neural network in simulation that controls all degrees of freedom of a mobile manipulator (i.e. the joints of the manipulator and the movement of the base platform), and then to deploy it on a real robot. To this end, they began with a robot constrained to move in one plane and fed data from two 2D LiDAR scans directly to the network to give the controlling agent an understanding of its environment.
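The shape of such a setup can be sketched as follows. This is only an illustration of the idea of feeding raw LiDAR scans plus a goal to a single policy that commands both base and arm; the dimensions, function names, and vector layout below are assumptions, not the team's actual implementation.

```python
import numpy as np

# Hypothetical sizes -- chosen for illustration, not taken from the paper.
N_BEAMS = 64        # range readings per 2D LiDAR scan (the robot carries two)
N_ARM_JOINTS = 7    # the Franka Emika Panda has 7 revolute joints
N_BASE_DOF = 3      # planar base command: vx, vy, yaw rate

def build_observation(scan_front, scan_rear, joint_pos, ee_goal):
    """Stack both raw LiDAR scans, the arm joint positions, and the
    end-effector setpoint into one flat vector for the policy network."""
    return np.concatenate([scan_front, scan_rear, joint_pos, ee_goal])

def split_action(action):
    """Split one network output into base and arm commands, so a single
    agent controls all degrees of freedom of the mobile manipulator."""
    base_cmd = action[:N_BASE_DOF]      # planar base velocities
    joint_cmd = action[N_BASE_DOF:]     # one velocity per arm joint
    return base_cmd, joint_cmd

obs = build_observation(
    np.zeros(N_BEAMS), np.zeros(N_BEAMS),  # two 2D LiDAR range scans
    np.zeros(N_ARM_JOINTS),                # current joint positions
    np.zeros(3),                           # end-effector setpoint (x, y, z)
)
base_cmd, joint_cmd = split_action(np.zeros(N_BASE_DOF + N_ARM_JOINTS))
```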

Simplifying the Simulation Process

Their pursuit of whole-body trajectory planning and control, however, did not come without challenges. The approach itself is complex: sampling-based methods, for example, can return a multitude of collision-free joint configurations, and they suffer from the curse of dimensionality or require full knowledge of the environment at the planning stage. Another approach is Model Predictive Control, which solves an optimal control problem in a receding-horizon fashion.

Currently, the core flaws of such methods are that they are either restricted in their obstacle avoidance capabilities, which causes them to get stuck in local minima (due to their limited horizon), or they use only a kinematic model of the robot. With the team’s newly designed approach, they laid the foundation for a controller that overcomes these problems while keeping computational demand very low, making it capable of running in real time on low-end devices.

RoyalPanda from above

“The Clearpath Ridgeback platform was very handy from the beginning (i.e. creation of a simulation) to the end (deployment on the real system) thanks to its out-of-the-box ROS compatibility. That way, we were able to focus on our work instead of having to develop our own drivers and models for ROS.”
– Julien Kindle

To enable their project, the Autonomous Systems Lab used our Ridgeback mobile base platform and leveraged the URDF files we offer on our Gazebo page. Combined, these made validation in simulation easy and facilitated quick deployment, as Ridgeback is ROS-compatible out of the box.

Another Ridgeback feature core to the team’s approach was the platform’s omnidirectional drive, which can move instantaneously in any direction. With it, they were able to find a set of hyperparameters that led to convergence during training and produced an agent that was then deployed on Ridgeback. In addition, the team continued training while slowly reducing the maximum velocity in the y-direction to zero, resulting in an agent that could be deployed on a differential drive platform. Ridgeback was then used to simulate such a platform by turning off motion in the y-direction. If you are interested in such a configuration, you can follow our tutorials here.
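That curriculum idea, shrinking the lateral speed limit until the omnidirectional policy behaves like a differential drive one, can be sketched as below. The linear schedule, the 0.5 m/s starting limit, and the function names are illustrative assumptions, not the team's actual training code.

```python
import numpy as np

def y_velocity_limit(step, total_steps, v_max=0.5):
    """Linearly anneal the lateral (y) speed limit toward zero over
    continued training. v_max and the linear schedule are assumptions."""
    frac = min(step / total_steps, 1.0)
    return v_max * (1.0 - frac)

def clamp_base_command(vx, vy, wz, step, total_steps):
    """Clip the agent's lateral command to the current curriculum limit;
    at the end of training, vy is forced to zero (differential drive)."""
    limit = y_velocity_limit(step, total_steps)
    return vx, float(np.clip(vy, -limit, limit)), wz
```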

The operations RoyalPanda performed included driving to a given setpoint with an accuracy of around 4 cm (while actively holding its position). It also performed grasping exercises in which the goal was to detect when the setpoint was reached, close the grippers, and set a new setpoint. On a static base, they also tested grasping processes, this time with Reinforcement Learning approaches (you can read that paper here).
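The reach-grasp-advance cycle described above reduces to a simple distance check. The sketch below mirrors that logic; the 4 cm tolerance comes from the accuracy figure mentioned, while the function names and the callback-style gripper interface are hypothetical.

```python
import numpy as np

POSITION_TOLERANCE = 0.04  # ~4 cm setpoint accuracy reported above

def setpoint_reached(ee_pos, setpoint, tol=POSITION_TOLERANCE):
    """True once the end effector is within tolerance of the goal."""
    return np.linalg.norm(np.asarray(ee_pos) - np.asarray(setpoint)) < tol

def grasp_step(ee_pos, setpoint, close_gripper, next_setpoint):
    """One iteration of a hypothetical grasping loop: when the current
    setpoint is reached, close the grippers and move on to a new one."""
    if setpoint_reached(ee_pos, setpoint):
        close_gripper()
        return next_setpoint
    return setpoint
```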

Ridgeback Enables Quick and Effective Testing

The research team knew that if they wanted to test aggressively for real-world applications, they would have to move beyond the simulation environment. Ridgeback provided them with a platform for testing as well as a non-differential drive solution, and its out-of-the-box compatibility with ROS was notably beneficial for them here.

One of the key researchers on the project, Julien Kindle, believed Ridgeback was crucial to the completion of their work: its out-of-the-box ROS compatibility, from creating a simulation through to deployment on the real system, let the team focus on their research instead of developing their own drivers and models for ROS. Furthermore, the team greatly benefited from how quickly the platform was up and running, the available URDF files describing the robot, its kinematics, and its dynamics, as well as the smooth and steady driving that Ridgeback provides.

Ridgeback was thus used in a three-fold technical approach to validate the team’s research:

  1. Modelling in a PyBullet simulation using our provided URDF file.
  2. Validation of the agent in Gazebo through Ridgeback’s ROS-Gazebo simulation package.
  3. Deployment of their neural network on the real platform.

 

Trajectory tracking using RRTConnect in MoveIt

Assembling the Project

But Ridgeback was just the base platform of this ambitious project. The team equipped their robot with Hokuyo LiDARs and mounted a Franka Emika Panda robot arm on it. They chose the Panda because Franka Emika offers a stable ROS driver package for the robot, which made it easy to combine with Ridgeback. Next, a visual-inertial sensor was used for localization (to calculate the setpoint of the end-effector). Finally, they hooked up a remote safety stop to the safety-stop circuits of Ridgeback and the Panda in order to protect the robot and the humans around it.

While Clearpath originally designed the team’s Ridgeback to handle two-arm upper-torso manipulators, the ETH Zurich team adapted it further for the Panda. To begin, they designed their own mounting platform, added it to Ridgeback, and attached the arm to it. They also mounted the Panda’s power supply (the large black box) on the base and connected it directly to the optional 24V-to-230V inverter inside Ridgeback. From a coding perspective, it was easy to combine the URDF files of Ridgeback and the Panda (which also comes with a ROS driver). The only tricky part was that when they implemented their code (you can find their RL training here), they had to manually add the inertia terms and Gazebo elements to the URDF (taken from this repository).
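Patching missing inertia terms into a URDF is essentially an XML edit, and can be scripted rather than done by hand. The sketch below shows the idea with Python's standard library; the placeholder mass and inertia values, the tiny example URDF, and the function name are illustrative assumptions, not the values the team used.

```python
import xml.etree.ElementTree as ET

# Minimal stand-in URDF: one link with no <inertial> element.
URDF = """<robot name="royalpanda">
  <link name="panda_link0"/>
</robot>"""

def add_default_inertial(urdf_string, mass=1.0, inertia=1e-3):
    """Insert a placeholder <inertial> element into every link that lacks
    one, so the model can be simulated. Values here are illustrative."""
    root = ET.fromstring(urdf_string)
    for link in root.iter("link"):
        if link.find("inertial") is None:
            inertial = ET.SubElement(link, "inertial")
            ET.SubElement(inertial, "mass", value=str(mass))
            ET.SubElement(inertial, "inertia",
                          ixx=str(inertia), iyy=str(inertia), izz=str(inertia),
                          ixy="0", ixz="0", iyz="0")
    return ET.tostring(root, encoding="unicode")

patched = add_default_inertial(URDF)
```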

RoyalPanda saying “Hello”

And their research was a resounding success! The team was able to deploy a Reinforcement Learning agent, in the form of a neural network trained in simulation, on a real robot in a variety of corridor environments. Using Automatic Domain Randomization, they slowly increased the complexity of the simulation to improve the speed and robustness of convergence and to give the agent a better understanding of real-world scenarios. A paper based on their findings has been submitted to both IROS and RA-L, and they hope to see their work published soon. In the meantime, the paper is available online here.
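The core mechanism of Automatic Domain Randomization is to grow the sampling range of an environment parameter when the agent performs well and shrink it when it struggles. The sketch below illustrates this for a single parameter (corridor width); the thresholds, scaling factors, and function names are illustrative assumptions, not the team's actual schedule.

```python
import random

def adr_update(bounds, success_rate, widen=1.1, shrink=0.9, target=0.8):
    """One ADR step: widen the randomization range around its centre when
    the measured success rate meets the target, shrink it otherwise.
    Factors and threshold are illustrative."""
    lo, hi = bounds
    factor = widen if success_rate >= target else shrink
    center = (lo + hi) / 2.0
    half = (hi - lo) / 2.0 * factor
    return (center - half, center + half)

def sample_corridor_width(bounds):
    """Draw a corridor width (in metres) for the next training episode."""
    return random.uniform(*bounds)
```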

The future of this project is not yet decided; however, the platform they created, RoyalPanda, will also be used in future projects. The team has already used a similar combination, RoyalYumi, an adapted version featuring Ridgeback with an ABB YuMi, in other projects, including fetch-and-carry applications in unstructured indoor environments. You can read that paper here.

The research team included Julien Kindle, Dr. Fadri Furrer, Dr. Tonci Novkovic, Dr. Jen Jen Chung, Prof. Dr. Roland Siegwart, and Dr. Juan Nieto.

To learn more about Autonomous Systems Lab, visit their website here.

To learn more about our omnidirectional indoor mobile platform Ridgeback, visit our website.