To elevate robotics research to the next level, robots need to learn dynamically and intelligently from their environment and the tasks they perform, independently of human intervention or monitoring. Deep reinforcement learning (RL) is one such approach: a system actively learns from its own mistakes through trial and error, optimizing its actions over large volumes of input data to achieve a specific goal.
Deep RL has been used to train robots with various types of data, such as color images, depth images, and LiDAR point clouds. However, existing methods are often limited by their reliance on cameras and LiDAR devices, whose sensing degrades under adverse conditions such as smoke or fog. For the DARPA Subterranean (SubT) Challenge, a competition seeking novel approaches to underground navigation, Team NCTU used Husky UGV to develop a single-chip millimeter-wave (mmWave) radar system that can penetrate smoke to sense obstacles in the environment, enabling better and more reliable learning-based autonomous navigation.
Team NCTU is a joint collaboration between National Chiao Tung University (NCTU), National Yang Ming Chiao Tung University (NYCU), and National Tsing Hua University (NTHU), Taiwan, composed of graduate and undergraduate students from robotics departments and institutes. Their organization is dedicated to developing cutting-edge intelligent systems that solve challenges encountered in subterranean environments, where robots must deal with rough terrain, degraded sensing abilities, and severe communication limitations.
Husky UGV navigating the underground circuit
To compete at the SubT Challenge, Team NCTU implemented a heterogeneous team of unmanned ground vehicles (including Husky UGV) and blimp robots designed specifically to navigate unknown subterranean environments for search and rescue missions. The ground vehicles were equipped with a range of sensors for accurate perception, localization, and mapping. The blimps, on the other hand, featured a long flight duration and collision tolerance, allowing them to pass over uneven terrain.
The design of the system was meant to satisfy the requirements of the DARPA Subterranean Challenge in terms of perception capability and autonomy. To facilitate navigation through smoke-filled spaces in the competition, they employed their novel millimeter-wave radar, learning cross-modal representations that integrate with deep reinforcement learning. The autonomy of the proposed scheme was augmented by using simulations to train deep neural networks, allowing the system to perform sequential decision-making for collision avoidance and navigation toward a specific goal.
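In a sketch, such a learned policy reduces to a function mapping sensed ranges to velocity commands, with the decision logic trained from simulated rollouts rather than hand-coded. The layer sizes, tanh squashing, and action scaling below are illustrative assumptions, not the team's actual network architecture:

```python
import numpy as np

rng = np.random.default_rng(42)

class NavigationPolicy:
    """Minimal sketch of the kind of policy a deep RL agent learns:
    range observations in, (linear, angular) velocity commands out.
    In practice the weights would be trained in simulation; here they
    are random, so this only illustrates the input/output shape."""

    def __init__(self, n_ranges=241, hidden=64):
        # Hypothetical two-layer network; sizes are assumptions.
        self.w1 = rng.normal(scale=0.1, size=(n_ranges, hidden))
        self.w2 = rng.normal(scale=0.1, size=(hidden, 2))

    def act(self, ranges):
        h = np.tanh(np.asarray(ranges) @ self.w1)
        v, w = np.tanh(h @ self.w2)        # squash both outputs to [-1, 1]
        return 0.5 * (v + 1.0), w          # forward speed in [0, 1], turn rate
```

The trained policy replaces any hand-tuned obstacle-avoidance rules: every decision at every timestep comes from this single learned mapping.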
To compete in such an intense challenge, collaboration was needed. So, for the DARPA SubT Challenge, NCTU focused on developing the aspects of autonomy, perception, and mobility for their robotic system, whereas their industrial partners provided the communication solutions.
“The Husky and Jackal robots have been used by many robotics labs over the years and we appreciated this great reliability to benchmark our research results.”
– Team Lead, Dr. Hsueh-Cheng Wang
Getting Competition Ready
The competition environments presented in the SubT Challenge mirror many problems regularly encountered in mining and underground rescue operations. These settings are often obscured by airborne particulates or gases, debris, and other obstacles that would hinder autonomous navigation. That is why Team NCTU decided to develop a solution using mmWave radar signals, as these are capable of penetrating adverse conditions like smoke and fog.
Husky and Jackal UGV competing in the SubT Challenge
However, even though these signals are largely impervious to environmental conditions, the collected data can still be noisy, sparse, or affected by multipath reflections. The team's initial idea for solving these problems was a cross-modal generative reconstruction method that converts the radar signals into LiDAR-like range data. This built on previous work that had shown promise reconstructing occupancy maps from mmWave radar signals.
But the team realized the generative approach could still render obstacles incorrectly. Their final proposed solution was therefore the cross-modal contrastive learning of representations (CM-CLR) method, which trains an encoder to maximize the agreement between mmWave radar representations and LiDAR representations during the training stage. At the deployment stage, the encoder generates a representation from mmWave signals alone and feeds it to a pre-trained RL policy for navigation.
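The core of contrastive learning of this kind is a loss that pulls matched radar/LiDAR pairs together and pushes mismatched pairs apart. A minimal sketch of such an objective (an InfoNCE-style loss, assuming per-frame embeddings from each modality; the details are illustrative, not the team's exact formulation) might look like:

```python
import numpy as np

def contrastive_loss(radar_emb, lidar_emb, temperature=0.1):
    """InfoNCE-style contrastive loss: for each mmWave radar embedding,
    the LiDAR embedding from the same frame is the positive pair and
    all other frames in the batch serve as negatives."""
    # L2-normalize so the dot product is cosine similarity
    radar = radar_emb / np.linalg.norm(radar_emb, axis=1, keepdims=True)
    lidar = lidar_emb / np.linalg.norm(lidar_emb, axis=1, keepdims=True)
    logits = radar @ lidar.T / temperature       # (N, N) similarity matrix
    # Row-wise log-softmax; the diagonal entries are the positive pairs
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Minimizing this loss trains the radar encoder to produce the same representation the LiDAR encoder would, so the RL policy trained on LiDAR-like representations transfers to radar-only deployment.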
Team NCTU’s proposed solution
In order to develop their solution, Team NCTU needed to collect the appropriate data and then perform rigorous testing to validate their algorithms. This is where Husky UGV came in. The team built a radar sensing module, mounted on top of the Husky, with four mmWave radars creating a 240-degree front- and side-facing field of view. In their tests, the team drove Husky UGV through environments with various scene geometries, including indoor corridors, parking areas, and outdoor passages, to collect different data sets. In total, they collected 110,381 frames of data (approximately 6 robot hours).
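Combining the four radars into one wide scan is conceptually simple: each radar contributes a sector of range readings, and the sectors are stitched into a single 240-degree sweep. The mounting layout below (four 60-degree sectors tiled from -120° to +120°) is a hypothetical arrangement for illustration, not the team's documented geometry:

```python
import numpy as np

# Hypothetical layout: four radars tiled across -120..+120 degrees,
# each covering a 60-degree sector. Actual mounting angles and overlap
# handling are assumptions, not from the article.
SECTORS = [(-120, -60), (-60, 0), (0, 60), (60, 120)]

def merge_radar_scans(sector_scans, bins_per_sector=60):
    """Concatenate per-radar range readings (metres, one bin per degree)
    into a single 240-degree scan ordered from -120 to +120 degrees."""
    assert len(sector_scans) == len(SECTORS)
    for scan in sector_scans:
        assert len(scan) == bins_per_sector
    return np.concatenate(sector_scans)
```

Each logged frame in the dataset would then be one such merged scan paired with the LiDAR ground truth captured at the same moment.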
At the testing stage, they used their own developed method to autonomously navigate Husky UGV through a maze in an underground basement as well as in an indoor corridor. Spread out in this maze at three different locations were high concentrations of smoke. Therefore, the team was able to replicate real-world environments where Husky UGV could practice traversing through the smoke, ultimately proving that an approach using just the LiDAR would fail, as Husky UGV would get trapped.
The team's hardware used TI IWR6843 single-chip mmWave radar modules operating at 60 GHz with a frequency-modulated continuous wave (FMCW). The radar modules generated range measurements and point clouds that could penetrate the smoke to sense obstacles, but much of the raw signal was noisy or sparse. A Velodyne Puck LiDAR provided accurate range ground truth, but only during the training stage of deep reinforcement learning; the deployed robot relied solely on the mmWave radar sensors and the trained policy. To appropriately explore the functionality and feasibility of such an algorithm, Team NCTU needed the right platform for testing.
Husky UGV Expedites Algorithm Testing
Ultimately, the team realized that building a robust robot platform from the ground up that could run in real-world environments would be very challenging. While the team had experience building their own educational robots, they knew that an ambitious competition like the DARPA SubT Challenge would require a truly concerted effort with their time and resources properly applied. Without their Clearpath platform fleet (they own two Husky UGVs and two Jackal UGVs), they would have lost precious time both developing their own systems and integrating their developed sensor modules and software. Instead, with Clearpath's out-of-the-box ready solutions, integration was extremely smooth.
Most importantly, the team found Husky UGV to be stable and reliable, which was especially important as the robot would drive autonomously over bumpy terrain in the SubT Challenge Urban Circuit. Other noted highlights included Husky UGV's reliable wheel odometry, battery life, load capacity, ease of operation, and ROS (Robot Operating System) API. As the team lead, Dr. Hsueh-Cheng Wang, stated: "The Husky and Jackal robots have been used by many robotics labs over the years and we appreciated this great reliability to benchmark our research results."
Generative reconstruction plotted on a Cartesian coordinate system
Improving the Future of Underground Navigation
The future of Team NCTU's project looks bright. Through their ambitious work, they were able to show that the proposed end-to-end deep RL policy with contrastive learning successfully allows a robot to navigate through smoke-filled maze environments, and that it outperforms generative reconstruction, which sometimes produced artifact walls or obstacles from noise.
They plan to test several of their new algorithms and neural network architectures for both generative reconstruction and cross-modal representation learning. They also want to enhance the deep RL policy further to perform goal navigation. Once successful, the team plans to commercialize the developed software and hardware for real-world applications such as tunnel inspection and nuclear decommissioning. Finally, Team NCTU also plans to share their work at ICRA 2021. You can read their published IEEE article here.
The team also hopes to see some fruitful industry partnerships, as demand for sanitizing and inspection robots has increased during the COVID-19 outbreak. Their proposed method has attracted attention by addressing two core challenges:
- Solving the issue of LiDAR signals being blocked by alcohol spray
- Replacing LiDAR with low-cost sensors
Team NCTU is composed of Jui-Te Huang, Chen-Lung Lu, Po-Kai Chang, Ching-I Huang, Chao-Chun Hsu, Zu Lin Ewe, Po-Jui Huang, and Dr. Hsueh-Cheng Wang.
To learn more about Team NCTU, you can visit their Github page here.
To learn more about Husky, you can visit our website here.