When discussing the future of robotics and artificial intelligence, the conversation often centers on anxieties about losing the human touch. Will robots eventually make many human tasks and responsibilities superfluous? Those who are more optimistic about robotic integration, however, believe there is much untapped potential in collaboration between robots and humans.

Integration of a collaborative robotic arm on Ridgeback

Lending a Helping Hand

The Space & Terrestrial Autonomous Robotic Systems (STARS) Laboratory at the University of Toronto’s Institute for Aerospace Studies is one such team that is dedicated to bringing robots out of the laboratory and into the real world to assist individuals with their daily lives. Mobile manipulators can help alleviate tasks that are normally too dangerous, repetitive, dull or even inaccessible for humans. In an ambitious project which combines our Ridgeback platform with a collaborative robotic arm, the STARS Lab team is exploring various applications of mobile manipulation tasks in human environments.

As with many academic teams, the STARS Lab is made up of instructors and students, headed by Professor Jonathan Kelly, who is aided in his research by several students including Trevor Ablett, Abhinav Grover, Oliver Limoyo, Filip Marić, and Philippe Nadeau. Together, this team is developing state-of-the-art machine learning techniques to improve the capabilities of traditional robotic systems (static robot manipulators), allowing robots to complete challenging mobile manipulation tasks. To do so, they are investigating various combinations of model-free, model-based, imitation, and reinforcement learning.

The biggest challenge with such an ambitious approach, however, is that human beings are incredibly intelligent, perceptive, and highly dexterous. Tasks that might seem basic for humans are actually quite difficult for even state-of-the-art robots to accomplish. In fact, researchers are still not entirely sure how humans are able to carry out such a wide range of tasks so effectively. Thus, the STARS Lab team actively explores how to enable machines to, one day, become as efficient and versatile as people.

“Clearpath has previously provided many integrated systems to other laboratories at the University of Toronto and these machines have been used successfully for a variety of research projects.”
– Dr. Jonathan Kelly, Head of the STARS Lab

Project working on object insertion

Breaking Down the System

In their project, the omnidirectional Ridgeback functioned as the base for a dexterous, person-safe collaborative robotic arm (such as the UR10), a dexterous gripper (Robotiq 3-finger gripper), and a force-torque sensor (Robotiq FT sensor), along with Clearpath’s pre-installed ROS software. With an out-of-the-box, ready-to-go system, the team was able to focus specifically on the programming side of the problem without having to worry about developing a robust hardware system from scratch. Without Ridgeback, they would have had to buy and integrate each robotic component separately, or design their own platform from the ground up with bare electronic components. In their research, they found that neither option was feasible or cost-effective.

Another major concern for the STARS Lab team was time. With their main goals on the programming side, they did not want to risk issues arising from their own design, engineering, and build processes. In other words, they needed something reliable. As the head of the STARS Lab, Jonathan Kelly, stated: “Clearpath has previously provided many integrated systems to other laboratories at the University of Toronto and these machines have been used successfully for a variety of research projects.” Thus, thanks to Ridgeback’s robust build quality, minimal hardware maintenance requirements, and Clearpath’s extensive technical support, the team was able to focus on their own research.

Demonstrating the mobile manipulator to a summer camp

Challenging the Human Competition

Let’s dig into some of the specifications of their system testing. The STARS Lab team uses a model-free reinforcement learning policy to generate end-effector velocity commands on the real Ridgeback hardware, based on easily acquired sensor data: camera RGB and depth images, end-effector position, gripper position, and force-torque values. This allows the team to see the results of their training on real hardware, not just in simulation. In another experiment, they developed a forward predictive model that predicts future images (in terms of appearance) given a dataset of {image, action, next image} tuples.
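For illustration only, the sketch below shows how a policy of this kind might map the observations listed above (an RGB-D image plus proprioceptive and force-torque readings) to a 6-DOF end-effector velocity command. The network architecture, observation dimensions, and names are assumptions for the sake of example, not the STARS Lab’s actual code.

```python
# Illustrative sketch: a minimal policy network mapping image features and
# proprioceptive / force-torque measurements to an end-effector velocity
# command. Sizes and names are assumptions, not the lab's implementation.
import torch
import torch.nn as nn

class VelocityPolicy(nn.Module):
    def __init__(self, image_feat_dim=256, proprio_dim=14, action_dim=6):
        super().__init__()
        # Small CNN encoder for an RGB-D input (4 channels: RGB + depth).
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, image_feat_dim), nn.ReLU(),
        )
        # MLP head combines image features with end-effector pose, gripper
        # state, and force-torque values, and outputs a velocity command.
        self.head = nn.Sequential(
            nn.Linear(image_feat_dim + proprio_dim, 128), nn.ReLU(),
            nn.Linear(128, action_dim), nn.Tanh(),  # bounded velocities
        )

    def forward(self, rgbd, proprio):
        feats = self.encoder(rgbd)
        return self.head(torch.cat([feats, proprio], dim=-1))

# One forward pass with dummy data shaped like the sensors described above.
policy = VelocityPolicy()
rgbd = torch.zeros(1, 4, 64, 64)         # RGB + depth image
proprio = torch.zeros(1, 14)             # ee pose, gripper state, FT values
ee_velocity_cmd = policy(rgbd, proprio)  # 6-DOF end-effector velocity
```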

For much of their testing, the STARS Lab focused on sensors that required no additional setup and allowed for more human-like interaction with objects. As they found, visual sensing alone is not adequate for many tasks (e.g., tight-tolerance insertion). That is why, in their end-to-end learning approach, for example, they relied on the arm joint encoders, a camera attached to the sensor mast, and the force-torque sensor, with the gripper and any task-relevant objects as the payload.
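As a rough idea of what gathering these sensor streams might look like on a ROS-based system such as Ridgeback, the sketch below subscribes to joint states, a camera image, and a force-torque wrench and caches the latest of each for a learning policy. The topic names and update rate are placeholders, not the lab’s actual configuration.

```python
# Illustrative ROS sketch: collect the sensor streams mentioned above (arm
# joint encoders, mast-mounted camera, wrist force-torque sensor) into a
# single observation. Topic names are placeholders only.
import rospy
from sensor_msgs.msg import JointState, Image
from geometry_msgs.msg import WrenchStamped

class ObservationBuffer:
    """Caches the latest message from each sensor for the learning policy."""

    def __init__(self):
        self.joints = None
        self.image = None
        self.wrench = None
        rospy.Subscriber("/joint_states", JointState, self._on_joints)
        rospy.Subscriber("/camera/color/image_raw", Image, self._on_image)
        rospy.Subscriber("/ft_sensor/wrench", WrenchStamped, self._on_wrench)

    def _on_joints(self, msg):
        self.joints = msg

    def _on_image(self, msg):
        self.image = msg

    def _on_wrench(self, msg):
        self.wrench = msg

    def ready(self):
        # Only act once every modality has been received at least once.
        return None not in (self.joints, self.image, self.wrench)

if __name__ == "__main__":
    rospy.init_node("observation_buffer")
    obs = ObservationBuffer()
    rate = rospy.Rate(10)  # policy update rate in Hz (assumed)
    while not rospy.is_shutdown():
        if obs.ready():
            pass  # hand the cached observation to the policy here
        rate.sleep()
```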

Ridgeback arrives at STARS Lab

Through their research, the team has successfully demonstrated their imitation learning framework (i.e., actively working to replicate processes based on gathered data), applied to a human teleoperating the robot with an off-the-shelf VR controller. Since the concept was proven on their Ridgeback project, they also believe this approach can be extended to a variety of mobile manipulation settings and scenarios, opening up many applications and new ways to challenge current mobile manipulators and robotic teleoperation. This has led them to begin establishing core industry partnerships to leverage their findings. They have also recently published their findings at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), demonstrating that their forward predictive model can be combined with a Kalman filter to improve performance with noisy visual data. The team is constantly looking to improve their own theories and develop new and exciting ways to push robotics beyond its limits.
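As a rough illustration of the general idea (not the published method), the sketch below shows a standard Kalman update fusing a prediction from a stand-in learned forward model with a noisy visual measurement. The state dimensions, noise covariances, and the `forward_model` placeholder are all assumptions.

```python
# Minimal sketch: a learned forward model supplies the prediction step, and a
# standard Kalman update fuses it with a noisy visual measurement of the same
# state. All values below are illustrative assumptions.
import numpy as np

def kalman_update(x_pred, P_pred, z, R):
    """Fuse a predicted state with a noisy measurement (identity measurement model)."""
    H = np.eye(len(x_pred))                 # measure the state directly
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_post = x_pred + K @ (z - H @ x_pred)  # corrected state estimate
    P_post = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_post, P_post

def forward_model(x, u):
    # Placeholder standing in for a learned predictive model.
    return x + u

x, P = np.zeros(3), np.eye(3) * 0.1         # state estimate and covariance
u = np.array([0.01, 0.0, 0.0])              # applied action
Q = np.eye(3) * 0.01                        # process noise (assumed)
R = np.eye(3) * 0.05                        # visual measurement noise (assumed)

x_pred = forward_model(x, u)                # prediction from the learned model
P_pred = P + Q
z = np.array([0.012, 0.001, -0.002])        # noisy visual measurement
x, P = kalman_update(x_pred, P_pred, z, R)
```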

To learn more about the work that the STARS Lab is doing, you can visit their website.

To learn more about our Ridgeback platform and how it can elevate your next project, visit the Clearpath website.
