Autonomous Drones Compete in Underground Challenge

Posted
18 November 2021

This article is an interview with Emesent CTO and Co-Founder, Dr Farid Kendoul, originally published in installments on LiDAR News.

Emesent's Hovermap recently competed in the DARPA Subterranean Challenge and was part of the team that took second place in this prestigious event.

The Challenge Rounds

DARPA is always looking to the future to identify what its partners within the Department of Defense will need. They look at current technologies, identify gaps, and then stimulate research in those areas. DARPA noticed that warfare and search and rescue missions had increasingly moved to underground environments, but the technology had not adapted: situational awareness is still based mostly on information from high-altitude drones and satellites.

They created the DARPA Subterranean (SubT) Challenge to develop the technology needed to capture rapid situational awareness in challenging, GPS-denied environments.

The competition required a fleet of autonomous systems to navigate, search, and map subterranean environments; to identify and geo-reference artifacts such as backpacks, cell phones, trapped survivors, and even invisible gas; and to provide this information to human operators in a safe location.

The SubT Challenge started in 2018. It included seven teams that DARPA had selected and funded, as well as self-funded teams. There were three rounds, run on three courses designed to really stress the systems and test their ability to access all areas. Teams were eliminated as the rounds progressed.

Tunnel System – Sep 2019 – Pittsburgh
An underground mine environment.

Urban Underground – Feb 2020
Urban underground environments, like subway platforms, can have complex layouts with multiple stories and span several city blocks.

Cave Circuit – Nov 2020 – held remotely due to COVID
Originally planned for the US, this circuit was run in a decentralized way, with each team creating its own event locally in its own country and sending the results back to DARPA.

We went to the Chillagoe Caves in far north Queensland. This was a turning point for us – we’d never had results like this. We got perfect data from a very challenging environment.

Final – Sep 2021 – Kentucky
Included a smaller number of entries, as the previous rounds acted as heats before the final. The course included tunnel, urban, and natural cave environments.

In each stage, a single person was required to deploy a heterogeneous fleet of robots that collaborated to navigate and explore the unknown environments, provide accurate 3D maps to the operator in near real time, and search for, classify, and geo-locate artifacts for the command center in real time.

The SubT Challenge is considered the Olympics of robotics, and as such, its teams were composed of some of the best research labs and companies in the world. There was a good combination of contributors from industry, government, and universities.

COVID really affected our participation in the event, but in the end, it turned out to be a massive boost for the team.

For starters, the third circuit, the cave networks, which was supposed to be held in the USA, had to be modified and run in a decentralized way, with each team creating its own event locally in its own country and sending the results back to DARPA.

For the finals, it was very difficult because two-thirds of our team could not travel to compete in the United States due to Australia's travel restrictions. This pushed us to put extra effort into developing the robots to require as little human intervention as possible.

This turned out to be a great benefit because the challenge was designed to send and deploy robots where it is not possible to send humans.

COVID really highlighted the need and urgency to be able to undertake these critical missions without sending in people.

As DARPA was looking for teams to enter the competition, Professor Ronald Arkin from the Georgia Institute of Technology (Georgia Tech) was in Australia visiting CSIRO and the Queensland University of Technology. At the same time, Emesent and the CSIRO were working on a similar problem for underground mining.

Professor Arkin told the CSIRO and Emesent teams about the competition, and it made sense for the three organizations to enter together because each had strong capabilities in relevant areas. Georgia Tech had over 20 years of experience developing collaborative robots, the CSIRO had done extensive research on ground robots, and Emesent was advancing autonomous exploration and mapping in GPS-denied environments.
The team had many strengths:

  • The three groups had complementary skills and were all world leaders in their domain.
  • Compared to the other teams, most of our team members were professional engineers and researchers with vast experience in this area.
  • We have all known each other for a long time and have a good working relationship.

From a technological point of view:

  • We used a platform-agnostic localization and mapping technology as our framework, so all of the robots could work as one system. Having some core technologies in common helped achieve this.
  • We took a modular approach to the challenge: we developed platform-independent payloads that could easily be switched between robotic platforms. Standardizing autonomy and mapping into one plug-and-play payload allowed us to change robotic platforms many times through the event to suit the requirements of the course as we learned more (see the sketch after this list).
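
A minimal sketch of this plug-and-play payload idea, assuming hypothetical class and method names rather than Emesent's actual software interfaces:

```python
# Sketch only: the payload talks to any carrier through one small interface,
# so the same autonomy stack can move between legged, tracked, and aerial robots.
from abc import ABC, abstractmethod


class RobotPlatform(ABC):
    """Any carrier: legged, tracked, or aerial."""

    @abstractmethod
    def send_velocity_command(self, vx: float, vy: float, yaw_rate: float) -> None:
        ...


class AutonomyPayload:
    """Self-contained localization, mapping, and exploration stack."""

    def __init__(self, platform: RobotPlatform):
        self.platform = platform

    def step(self) -> None:
        # 1. Update the SLAM solution from lidar data (omitted here).
        # 2. Pick the next exploration goal.
        # 3. Issue a platform-agnostic velocity command toward that goal.
        self.platform.send_velocity_command(vx=0.5, vy=0.0, yaw_rate=0.0)


class TrackedRobot(RobotPlatform):
    def send_velocity_command(self, vx: float, vy: float, yaw_rate: float) -> None:
        print(f"track command: vx={vx} vy={vy} yaw_rate={yaw_rate}")


# The same payload object could be constructed around any other platform.
AutonomyPayload(TrackedRobot()).step()
```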

But most importantly, we had a passion to succeed. We took the challenge seriously because we wanted to showcase the capabilities of Australian companies and research labs on a global stage.

We changed our robotic platforms as we proceeded through the different circuits. The final event consisted of tunnel, urban, and cave circuits, all of which are very challenging environments with unknown and very different terrains. This made platform selection important, as no single robot can traverse all of those terrains. The key was to have diversity in locomotion, so we selected:

  • 2x legged robots, Boston Dynamics’ Spot Mini (named Bluey & Bingo)
    These robots are good for narrow, urban environments and stairs.
  • 2x tracked vehicles from BIA5, a Brisbane-based company (named Bear & Rat)
    These robots are good for traversing rough terrain. They also have the capacity to carry additional equipment, like the nodes for the mesh network and the drones, preserving the drones’ battery life and positioning them in the best place to get good results.
  • 2x Hovermap-mounted drones from Emesent (named H1 & H2)

The drones can access areas ground vehicles can’t access, like stairwells, vertical shafts, and mezzanine levels. In fact, during the first day of preliminary rounds for the final, we were one of the only teams that discovered a vertical shaft and accessed a basement level to expand the area we had searched.

All of our robots, no matter how they moved around, had these advanced autonomy and AI capabilities:

  • non-GPS navigation
  • autonomous exploration
  • collaboration with robot team members
  • data sharing
  • mapping
  • cameras and other sensors for detecting artifacts

For the final, the teams in the Systems Competition completed one 60-minute run. The courses varied in difficulty and included 40 artifacts each. Teams earned points by correctly identifying artifacts within a five-meter accuracy and classifying these artifacts. In instances of a points tie, team rank was determined by (1) earliest time the last artifact was successfully reported, averaged across the team’s best runs on each course; (2) earliest time the first artifact was successfully reported, averaged across the team’s best runs on each course; and (3) lowest average time across all valid artifact reports, averaged across the team’s best runs on each course.
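
As a rough illustration of how that tie-break ordering could be applied, here is a short sketch with made-up team records and function names (this is not DARPA's actual scoring software):

```python
# Sort teams by points (descending), then by the three tie-break criteria
# described above, each in ascending order (earlier / lower is better).
from dataclasses import dataclass


@dataclass
class TeamResult:
    name: str
    points: int                   # artifacts correctly reported within 5 m
    avg_last_report_time: float   # seconds, averaged across best runs
    avg_first_report_time: float  # seconds, averaged across best runs
    avg_report_time: float        # seconds, across all valid reports


def ranking_key(team: TeamResult):
    return (-team.points,
            team.avg_last_report_time,
            team.avg_first_report_time,
            team.avg_report_time)


teams = [
    TeamResult("Team A", 23, 3100.0, 410.0, 1800.0),
    TeamResult("Team B", 23, 2950.0, 500.0, 1750.0),
]
print([t.name for t in sorted(teams, key=ranking_key)])  # ['Team B', 'Team A']
```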

We had two days of preliminary rounds before the final competition. We did really well in the preliminary rounds and were in the lead as we went into the finals.

The day of the finals was very stressful and exciting.

We arrived in the morning at our holding area, turned on the systems, and did the standard checks. Then we made sure all of our batteries were charged. DARPA staff then came around to verify that all of the robots complied with the competition regulations.

Then we had to wait. We were the last team to compete in the final round because we had come first in the preliminary rounds. No communication between the teams was allowed, to preserve the integrity of the course.

We had no last-minute coding to do, so we brainstormed concepts of operations and strategies for deploying the systems to maximize the outcome. Other than that, it was just jokes and deep philosophical discussions. It was a great atmosphere with plenty to eat and plenty of laughs.

When it was our turn to compete, DARPA took us to the staging area of the course. We had time to set up and prepare the robots and ground station. Then we ran the course for an hour.

After it was all over, we were taken back to our holding area. Not long after, they announced that two teams were tied for first place, and one of them was us. We were very happy, and a little frustrated, because they wouldn't announce the tie-break method or the final winner until the awards ceremony the next day.

So we went out to celebrate.

Students did make up part of our team. In fact, the fleet operator, the one person allowed to interact with the robots during the competition, was Brandon, a PhD student from the Queensland University of Technology.

Yes, there will be ongoing collaboration between the team members. We already have other ongoing projects, so the organizations will continue working together.

We used the Rajant Kinetic Mesh network, which consists of several nodes that together form a mesh communication network.

Each robot had a Rajant module attached that allowed it to communicate with the other robots. Each tracked robot also carried an additional four modules to be deployed in the field. The fleet operator monitored the strength of the mesh and remotely deployed the nodes when he knew a robot was about to lose connection.

Each robot could access the resulting communication mesh and send its information back to the ground station or to the other robots.
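
A toy sketch of the kind of deploy-when-the-signal-weakens rule involved here; the threshold value and function names are assumptions for illustration, not the team's or Rajant's actual logic:

```python
# Illustrative only: drop a mesh node behind the robot before the link dies.
SIGNAL_DEPLOY_THRESHOLD_DBM = -78.0  # assumed cut-off for a healthy link
MIN_NODES_REMAINING = 0


def should_deploy_node(current_rssi_dbm: float, nodes_remaining: int) -> bool:
    """Return True when the robot should deploy one of its carried mesh nodes."""
    return (nodes_remaining > MIN_NODES_REMAINING
            and current_rssi_dbm <= SIGNAL_DEPLOY_THRESHOLD_DBM)


# Example: a tracked robot deep in a tunnel with a weakening link.
print(should_deploy_node(current_rssi_dbm=-81.5, nodes_remaining=3))  # True
```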

Our team chose to report the information in three different ways:

  1. Our system georeferenced the artifact's XYZ coordinates in the DARPA global frame.
  2. An image was taken of the area containing the artifact, with the artifact highlighted by a bounding box and a classification label added.
  3. An icon showed the artifact's location in our 3D map.

DARPA created a global 3D coordinate system that was the source of truth.
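
A minimal sketch of what such a combined artifact report could look like, using hypothetical field names rather than the team's actual message format:

```python
# Illustrative only: one record bundling the three report elements above.
from dataclasses import dataclass


@dataclass
class ArtifactReport:
    artifact_type: str   # e.g. "backpack", "cell phone", "survivor", "gas"
    x: float             # XYZ position in the DARPA global frame (meters)
    y: float
    z: float
    image_path: str      # snapshot with the artifact highlighted
    bbox: tuple          # bounding box in image pixels (u_min, v_min, u_max, v_max)
    map_marker_id: int   # icon placed in the shared 3D map


report = ArtifactReport("backpack", 42.1, -7.3, 1.2,
                        "h1_frame_0193.jpg", (310, 220, 380, 300), 17)
print(report)
```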

Our robots were specifically developed to operate autonomously beyond visual line of sight and communication range. Therefore the distance they can travel mainly depends on their endurance and the environment.

The final course was contained in an area of roughly several hundred meters by several hundred meters; however, it was very dense and complex. It contained the three different environments: tunnels, urban areas, and natural caves. The course was designed so that the robots were out of the operator's visual line of sight after just 10 meters.

Our robots covered and explored over 90% of the course. This meant they were operating several hundred meters from the operator.

The drones and ground robots worked as a team and helped each other to maximize the area covered and the detection of artifacts.

One of the most advanced features that our fleet has, and that differentiates it from other teams, is the ability to remotely launch a drone from the ground robots.

Drones have a limited flight time compared to ground robots, but they can access elevated areas and shafts, or traverse blocked or complex environments that a ground robot cannot.

That's why we had two drones carried into the course by the tracked robots, Rat and Bear. They were then started and launched remotely when the space was more suitable or drone exploration was required. The ground robot can be considered a mothership in this case, as it helps the drone:

  • by extending its range of operations: carrying the drone to the areas of interest along the obvious, simple parts of the course saves the drone's flight time.
  • by navigating and traversing very narrow passages and doorways; the ground robot can carry the drone through areas that are not flyable so that it can access the larger areas beyond them.
  • by deploying the communication nodes that build the mesh network used by the drones and the other vehicles to communicate with the ground station.
  • by sharing geo-reference information with the drone. Because the drone is started remotely inside the course, it cannot see the ground control points at the start. The information shared by the ground robot allows the drone to geo-reference its data and detections to the DARPA global coordinate frame (a sketch of this frame chaining appears after these lists).

On the other hand, the drones are also helping the ground vehicles by

  • acting as a communication node that the ground vehicles can use to send and receive data from other robots or the ground control station.
  • mapping and searching areas that are hard or risky to reach and not accessible to ground vehicles.
  • providing situational awareness and information (data and maps) that allows the ground robots to optimize their paths and tasks.

 

In certain scenarios the drone could be considered the scout of the team.
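
A simplified sketch of the frame chaining mentioned above: the ground robot shares its pose in the DARPA global frame, so a drone launched from its back can express its own detections in that frame. The 4x4 homogeneous transforms and all numbers below are illustrative assumptions, not the team's actual code:

```python
import numpy as np


def make_transform(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a rotation matrix and translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T


# Shared by the ground robot: its pose in the DARPA global frame.
T_global_robot = make_transform(np.eye(3), np.array([120.0, -35.0, 2.0]))

# Known mounting offset: where the drone sits on the robot's back at launch.
T_robot_drone = make_transform(np.eye(3), np.array([0.3, 0.0, 0.5]))

# Estimated by the drone's own SLAM after takeoff: an artifact detection
# expressed in the drone's launch (odometry origin) frame.
p_drone = np.array([4.2, 1.1, 3.0, 1.0])

# Chain the transforms to geo-reference the detection in the global frame.
p_global = T_global_robot @ T_robot_drone @ p_drone
print(p_global[:3])  # -> [124.5 -33.9   5.5]
```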

The remote launch of the drone from the ground robot is fully autonomous once it is triggered by the operator with a single click. To enable this, a very complex process has been automated. Here are just some of the tasks the robots complete to ensure a successful launch (a simplified sketch of the sequence follows the list):

  • the ground robot stops moving and checks that the terrain is level enough, or within an acceptable slope
  • it unlatches the drone’s security couplings
  • Hovermap, the autonomy and mapping payload, is started
  • Hovermap syncs and merges its data with the ground robot’s
  • GCS and pre-flight checks are done
  • the drone is started (spinning the props)
  • pre-takeoff checks are conducted
  • automated takeoff is initiated
  • autonomous drone exploration is started
  • the ground robot reconfigures itself and resumes its mission
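
A simplified sketch of that launch sequence as a linear checklist; the step names follow the list above, but the function itself is an illustration, not the team's actual flight software:

```python
# Run the launch steps in order and abort if any step fails.
LAUNCH_SEQUENCE = [
    "check_terrain_slope",
    "unlatch_security_couplings",
    "start_hovermap_payload",
    "sync_maps_with_ground_robot",
    "run_gcs_and_preflight_checks",
    "spin_propellers",
    "run_pretakeoff_checks",
    "automated_takeoff",
    "start_autonomous_exploration",
    "ground_robot_resumes_mission",
]


def remote_launch(execute_step) -> bool:
    """Execute each step via the supplied callback; stop at the first failure."""
    for step in LAUNCH_SEQUENCE:
        if not execute_step(step):
            print(f"Launch aborted at step: {step}")
            return False
    return True


# Example with a stub executor that succeeds at every step.
print(remote_launch(lambda step: True))  # True
```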

The SubT Challenge was an intensive three-year program of development and learning. During this program, we really pushed the boundaries of what's possible with autonomous robots and learned many things along the way, not just about the technology, but also about how to manage complex programs and systems like this one, and how to collaborate as a team to develop effective collaborative robots. My top three lessons are:

  1. Building an autonomous system is hard but doable; building a reliable autonomous system that can work in different complex environments takes it to another level. Although the systems deployed at DARPA events are research prototypes and not products, the reliability requirements are very high and should be taken seriously.
  2. Extensive continuous field testing with iterative development is key to developing reliable autonomous field robots.
  3. It’s important to be agile, adaptive, and ready to change your methodology or approach when you hit a dead-end. Therefore, build your technology/stack in a way that allows you to switch quickly and effectively.

Although our team progressed very well over the course of the challenge and achieved excellent results in the Final, I feel we could have done better by tweaking our approach in several areas:

  • A lot of our time was spent building, debugging, and fixing hardware. Putting more professional, dedicated people on the hardware from the beginning could have saved a lot of time and let us focus on augmenting these robotic platforms with even more advanced autonomy capabilities.
  • I feel that we underestimated how complex DARPA could make the course, especially for drones. We should have worked on miniaturizing the airborne system early on while maximizing its flight time by exploring different propulsion and airframe concepts.
  • The GCS interface and the human-machine interactions were not seriously researched or considered in this project, which can limit the effectiveness of operations, especially in this one-to-many scenario. If we had the opportunity to do it again, we would employ expert engineers in these areas to help us build an effective interface.
  • We lost a lot of time and energy trying to build our own hardware and platforms, whether communication modules or ground or aerial platforms. Next time we will start by leveraging COTS systems and working with partners and OEMs to customize or modify the platforms to our needs if required.
  • Put more dedicated resources onto the project.

