In the previous article, VT&R Part 1, we introduced Visual Teach & Repeat (VT&R) technology and gave an insight into the software's highlights and its ease of use. In Part 2, we look at what makes VT&R different, its potential applications, and robot compatibility.

How VT&R differs from other solutions

VT&R is a unique, dynamic piece of software in that it requires only a stereo camera to navigate the robot autonomously. Its multi-channel localization makes it resilient to environmental changes such as varying illumination and difficult weather conditions. VT&R also benefits from multi-experience localization, which associates the live image stream with multiple previously recorded maps. As a result, the robot's path-tracking capability improves the more it travels and collects additional features from the environment, and localization becomes more robust in dynamic environments. For instance, if you teach your robot a path through a parking lot today, VT&R will still be able to navigate that path in the following days, despite changes in the environment such as different parked cars.

In the above image, the robot has traversed the route shown several times. On each pass it has detected new keypoints in the environment (shown in different colors), which VT&R uses to localize the robot against the map. Keypoints collected during the teach phase are shown in green; keypoints in all other colors were collected during repeat passes. While the green keypoints are the main source of localization for VT&R, the other points make localization more robust to changes in the robot's environment.
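The idea behind this multi-experience localization can be sketched in a few lines of Python: features in the live frame are matched not only against the keypoints stored during the teach pass, but also against keypoints added during later repeats, and the pose estimate uses whichever experience matches best. The data structures, thresholds, and function names below are illustrative assumptions, not VT&R's actual implementation.

```python
import numpy as np

def match_against_experiences(live_descriptors, experiences, min_matches=30):
    """Match live image features against keypoint maps from multiple experiences.

    `experiences` is a list of dicts, each holding the descriptors and 3D
    landmark positions collected during one traversal (teach or repeat).
    Returns the landmark correspondences from the best-matching experience.
    Illustrative sketch only -- not the actual VT&R code.
    """
    best = None
    for exp in experiences:
        # Brute-force nearest-neighbour matching of feature descriptors.
        dists = np.linalg.norm(
            live_descriptors[:, None, :] - exp["descriptors"][None, :, :], axis=2)
        nearest = dists.argmin(axis=1)
        good = dists[np.arange(len(nearest)), nearest] < 0.7 * dists.mean()
        if good.sum() >= min_matches and (best is None or good.sum() > best[0]):
            best = (good.sum(), exp["landmarks"][nearest[good]])
    return best  # None means localization failed for this frame
```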

In general, VT&R can perform accurately in any environment as long as:

  1. There are enough features and landmarks in the environment – the system does not work in featureless environments such as ground covered in snow (see the sketch after this list).
  2. There are enough fixed landmarks in the environment that the robot can use for navigation, such as trees, buildings, etc. In other words, the performance of the system degrades in highly dynamic environments such as crowded indoor spaces.
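As a rough way to gauge whether a scene is feature-rich enough for visual localization, one can count how many keypoints a standard detector finds in a camera frame. This check is not part of VT&R; it is just an illustration using OpenCV, and the threshold is an arbitrary assumption.

```python
import cv2

def is_feature_rich(image_path, min_keypoints=200):
    """Rough check of whether a scene has enough visual texture to localize in.

    A snow-covered field or blank wall yields very few keypoints, while a
    treeline, building facade, or cluttered floor yields many. The threshold
    is arbitrary and only meant for illustration.
    """
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=1000)
    keypoints = orb.detect(img, None)
    return len(keypoints) >= min_keypoints
```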

VT&R does not rely on any source of information outside the robot to navigate the vehicle. This means that, unlike navigation packages based on GPS, VT&R can drive the robot along paths through dense forests, deep mines, or inside factories and buildings. VT&R can repeat every single maneuver the user performed during the teach phase, including passing through extremely narrow hallways and turning tight corners, with accuracy of a few centimeters. This capability is unique to the VT&R software and is almost impossible to achieve with any other combination of sensors. However, VT&R can only navigate the robot along paths it has been taught before.

Applications

You might be wondering what the software can be used for. A simple and useful application is directing the robot to travel between locations and execute some function at each one (for example, taking pictures or obtaining soil samples). With VT&R's web-based GUI, users can create a series of missions for the robot without physical access, e.g., go to point A, idle at A for x minutes, then go to point B, idle at B for another y minutes, and so on (a hypothetical mission description is sketched below). During that idle time, the robot can be commanded to execute a desired task. Interesting private- and public-sector applications include using VT&R over large areas of land such as orchards, mines, and airports. The ability of the VT&R software to work in GPS-denied environments also makes it attractive for military use.
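A mission of this kind is essentially a list of waypoints and dwell times. The snippet below is one hypothetical way to express such a sequence in Python; it is not the actual format used by the VT&R GUI.

```python
# Hypothetical mission description -- not the actual VT&R GUI format.
mission = [
    {"action": "goto", "waypoint": "A"},
    {"action": "idle", "minutes": 5, "task": "take_pictures"},
    {"action": "goto", "waypoint": "B"},
    {"action": "idle", "minutes": 10, "task": "collect_soil_sample"},
    {"action": "goto", "waypoint": "home"},
]

for step in mission:
    if step["action"] == "goto":
        print(f"Repeat taught path to waypoint {step['waypoint']}")
    else:
        print(f"Idle for {step['minutes']} min and run task: {step['task']}")
```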

What equipment is used for VT&R?

For the best performance of VT&R, a Bumblebee XB3 camera is recommended.  VT&R is also compatible with Bumblebee 2, however it is not recommended due to its short baseline compared to XB3. This is especially true for outdoor environments where wider baseline means higher depth accuracy. Support for other USB cameras is currently an ongoing project at Clearpath and could be possible for future versions of the software. VT&R also requires a GPU enabled computer for fast and accurate processing of the stereo image stream.
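The reason a wider baseline helps follows from standard stereo geometry: depth is Z = f·B/d, so for a fixed disparity error the depth error grows roughly as Z²/(f·B). The numbers below (a ~24 cm wide pair on the XB3, ~12 cm on the Bumblebee 2, and the focal length and disparity error) are assumed values for illustration only.

```python
def depth_error(depth_m, baseline_m, focal_px=800.0, disparity_err_px=0.25):
    """Approximate stereo depth error: dZ ~= Z^2 * d_err / (f * B).

    focal_px and disparity_err_px are assumed example values.
    """
    return depth_m ** 2 * disparity_err_px / (focal_px * baseline_m)

# Assumed baselines: ~0.24 m (XB3 wide pair) vs ~0.12 m (Bumblebee 2).
for name, b in [("XB3 (wide)", 0.24), ("Bumblebee 2", 0.12)]:
    print(f"{name}: ~{depth_error(10.0, b):.2f} m depth error at 10 m range")
```

Doubling the baseline roughly halves the depth error at a given range, which is why the XB3 is preferred outdoors.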

Compatible Robots

The VT&R software is compatible with any ground robot or vehicle with drive-by-wire capability, which covers all ROS-based systems, including Clearpath platforms (such as the Husky and Warthog UGVs).

Similar to a human driver, VT&R processes the input image stream from the stereo camera and publishes dynamically feasible speed and steering commands to the vehicle under control.

VT&R communicates with the underlying platform solely through the system's kinematic controller inputs. On Clearpath platforms (and ROS-based systems in general), this corresponds to the /cmd_vel topic, which has type geometry_msgs/Twist and carries linear and angular velocity commands for the robot.
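For reference, this is what a minimal command on that interface looks like from a ROS node (rospy shown here). This is the standard ROS velocity interface, not VT&R-specific code; the speed values are arbitrary examples.

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist

# Minimal example of the interface VT&R drives: linear and angular
# velocity commands published on /cmd_vel as geometry_msgs/Twist.
rospy.init_node("cmd_vel_example")
pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)

rate = rospy.Rate(10)  # control commands are typically sent at tens of Hz
while not rospy.is_shutdown():
    cmd = Twist()
    cmd.linear.x = 0.5    # forward speed in m/s (example value)
    cmd.angular.z = 0.1   # yaw rate in rad/s (example value)
    pub.publish(cmd)
    rate.sleep()
```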

Note that the steering and speed commands published by VT&R must be dynamically feasible for the robot. For instance, the robot cannot be commanded to accelerate beyond its maximum acceleration or to turn tighter than its minimum turning radius. VT&R's advanced path-tracking methods take care of this: the path tracker handles the physical and dynamic properties of the robot and its actuators (such as maximum/minimum linear and angular speeds and accelerations, or the ability to turn on the spot) as constraints.
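Conceptually, enforcing such constraints amounts to clamping the commanded velocities and accelerations before they reach the platform. The limits and logic below are a simplified illustration with placeholder values, not VT&R's actual path tracker, which treats these limits as constraints in its optimization.

```python
def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def feasible_command(v_des, w_des, v_prev, dt,
                     v_max=1.0, w_max=2.0, a_max=0.5):
    """Clamp desired linear/angular velocity to assumed platform limits.

    v_max, w_max, a_max are placeholder limits (m/s, rad/s, m/s^2);
    a real path tracker handles such limits as optimization constraints.
    """
    # Limit linear acceleration relative to the previous command.
    v = clamp(v_des, v_prev - a_max * dt, v_prev + a_max * dt)
    v = clamp(v, -v_max, v_max)
    # Limit yaw rate to what the vehicle can actually achieve.
    w = clamp(w_des, -w_max, w_max)
    return v, w
```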

 

This is the final part of the VT&R blog series. To learn more about the Visual Teach & Repeat package, click here. Don’t forget to subscribe to the blog to stay up to date on all things robots!
