Head-to-Head Races: How Galois Placed First in the GRAIC

 

A motorcycle equipped with our autopilot overtaking another vehicle

The Challenge

At Galois, we verify and assure complex critical systems. Autonomous vehicles are prime examples: complex systems that operate in uncertain, unstructured environments. Autonomous driving decisions often rely on Deep Neural Networks (DNNs), which are data-driven and can react in unsafe ways when faced with out-of-distribution driving scenarios. Rigorously assuring the safety of these systems across a large class of realistic scenarios is an active area of research, and Galois is contributing through DARPA programs like Assured Autonomy and the Air Force Research Laboratory (AFRL)'s projects on human-machine trust.

Recently, we participated in the GRAIC (Generalized RAcing Intelligence Competition) hosted at CPS-IoT week. We developed a racing strategy that would serve as a good case study for our Cyber Physical Systems (CPS) assurance research. Designing it required us to investigate:

  • Explainable Decision Making – a vehicle must be transparent in its decision-making. In the event of a collision, the vehicle must be able to explain the decisions that led to the failure.
  • Automated Scenario Testing – safety-critical systems like autopilots must be tested thoroughly. Automation can generate appropriate coverage of critical scenarios.

GRAIC is a good case study for both of these topics because of the GRAIC simulator. The simulator provides a Perception Oracle that eliminates the need to process raw sensor data, letting us focus on decision-making and control. It also provides many tracks and vehicles for testing and uncovering decision flaws. Lastly, the simulator scenarios used for this event differ from those available at development time, penalizing autopilot designs that overfit to particular testing scenarios.

So, after playing around with a few backronyms, we entered with the team name MEthodolical Racing Computer Intelligence (MERCI), and we were off to the (autonomous) races!

The Rules

The GRAIC Simulator provides accurate obstacle, pose, and road perception, allowing competitors to focus on decision, planning, and control strategies. The competition scored each autopilot on how quickly its car could complete a race track with moving obstacles. The races fell into two categories: solo, where the autopilot races on a track with generated obstacles, and head-to-head, where the autopilot races on a track against a vehicle equipped with another team's autopilot.

For an entry to be valid, a team had to implement a Python controller class that the vehicle program could instantiate and call. On every simulation update, the agent receives:

  • Current Vehicle State – our vehicle’s current position, velocity, and orientation.
  • Current Lane Information – the current lane we are in (left, center, or right) and a list of lane markers up to 10 meters ahead.
  • Obstacle List – a list of bounding boxes representing the vehicles and pedestrians nearby. Note that the obstacles can be seen further ahead than the lane markers.
  • Waypoint – a single point to drive towards. The controller sets the vehicle's steering and throttle values to reach the waypoint while avoiding the areas in the obstacle list.
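The shape of such a controller can be sketched as follows. The class and method names, fields, and thresholds here are our illustration of the interface, not the exact GRAIC API:

```python
import math
from dataclasses import dataclass

@dataclass
class VehicleState:
    x: float          # position (m)
    y: float
    heading: float    # orientation (rad)
    speed: float      # m/s

class Controller:
    """Per-tick racing controller skeleton (names are illustrative)."""

    def run_step(self, state, lane_markers, obstacles, waypoint):
        # Steer in proportion to the heading error toward the waypoint.
        bearing = math.atan2(waypoint[1] - state.y, waypoint[0] - state.x)
        steer = max(-1.0, min(1.0, bearing - state.heading))
        # Back off the throttle when any obstacle is within 10 m.
        near = any(math.hypot(ob[0] - state.x, ob[1] - state.y) < 10.0
                   for ob in obstacles)
        throttle = 0.3 if near else 1.0
        return steer, throttle

ctrl = Controller()
state = VehicleState(x=0.0, y=0.0, heading=0.0, speed=5.0)
steer, throttle = ctrl.run_step(state, lane_markers=[],
                                obstacles=[(5.0, 1.0)],
                                waypoint=(20.0, 0.0))
```

The real autopilot replaces the two toy computations here with the planner and low-level controllers described below.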

The Winning Solution

Given the simulator’s Perception Oracle, our control architecture uses road boundary and obstacle motion predictors to inform our top-level control, path planner, and steering and throttle/brake controller.

The control architecture of our autopilot (green) and how it fits into the provided GRAIC simulator (blue). 

Road and Obstacle Motion Predictors

Since the simulator limits available lane information to about 10 m in front of the vehicle, extending the road-information horizon is advantageous: lane paths inform vehicles' motions. To make better long-term decisions, we designed the autopilot to predict the road boundaries and the motion of the obstacles. The estimated road boundaries give the motion planner a larger space in which to optimize a racing line and avoid obstacles.
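One simple way to extend the lane horizon, sketched here as an illustration of the idea rather than our exact predictor, is to fit a low-order polynomial to the supplied markers and extrapolate it forward:

```python
import numpy as np

def extrapolate_lane(markers, horizon=30.0, step=2.0, degree=2):
    """Fit y = p(x) to the supplied lane markers (vehicle frame,
    x pointing forward) and extend the fit past the ~10 m
    perception horizon. Parameters are illustrative."""
    xs = np.array([m[0] for m in markers])
    ys = np.array([m[1] for m in markers])
    # Degree capped so the fit stays determined with few markers.
    coeffs = np.polyfit(xs, ys, deg=min(degree, len(markers) - 1))
    poly = np.poly1d(coeffs)
    new_xs = np.arange(xs.max() + step, horizon + step, step)
    return [(float(x), float(poly(x))) for x in new_xs]

# Markers on a straight lane up to 10 m, extrapolated out to 30 m.
preds = extrapolate_lane([(2, 0.0), (4, 0.0), (6, 0.0), (8, 0.0), (10, 0.0)])
```

A quadratic is enough to capture gentle curvature over a 30 m horizon; longer extrapolations would accumulate error quickly.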

The information available to the autopilot to plan our racing motion (top) and what our predictors add (bottom).

Decision Making

Next, our autopilot feeds the scene information and predictions to a top-level controller that provides high-level guidance on handling the situation. Based on intuition and testing feedback, we use the following modes:

  • NORMAL – the road has two or fewer lanes occupied and no pedestrians.
  • CAUTIOUS – the road has more than two lanes occupied and no pedestrians.
  • STOP – the road has a pedestrian.

Other autopilot components then use this information. 

For example, when the road has a pedestrian, the top-level controller outputs the STOP mode, which the throttle controller interprets as a command to decelerate. Slowing down shortens the car's braking distance and tightens its achievable turning radius.
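The mode logic amounts to a small decision function. A sketch, with the inputs simplified to a lane count and a pedestrian flag:

```python
from enum import Enum

class Mode(Enum):
    NORMAL = "normal"
    CAUTIOUS = "cautious"
    STOP = "stop"

def select_mode(occupied_lanes: int, pedestrian_ahead: bool) -> Mode:
    """Top-level mode selection as described above (inputs simplified)."""
    if pedestrian_ahead:
        return Mode.STOP            # pedestrian on the road: decelerate
    if occupied_lanes > 2:
        return Mode.CAUTIOUS        # heavy traffic: drive conservatively
    return Mode.NORMAL              # light traffic: race
```

Keeping the mode logic this explicit is what makes the decisions explainable: every mode transition can be traced back to a concrete predicate on the scene.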

Example of the Vehicle Mode Switching to STOP

Path Planning

We can now plan the vehicle's motion using the obstacle motion predictions, road boundary information, and high-level vehicle mode. Over our lane prediction horizon, we generate a collection of candidate points for the planner to drive toward. Then, we use a convergent RRT* path planner to find an obstacle-free path between the car's current position and each candidate point, and take the path with the smallest travel distance.

In some cases, the planner cannot find a path within the time budget we impose. When this happens, it falls back to a non-convergent, exploratory RRT planner. Given the random nature of RRT, the shortest path can vary between candidate points across runs. This produces decision-flickering, where the car randomly jumps between multiple paths. Techniques exist to mitigate this problem, but we did not implement them; occasional decision-flickering is a flaw of our submitted autopilot.
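The candidate-selection loop with the fallback can be sketched as below. The planner callables and their signatures are illustrative stand-ins, not our actual RRT implementations:

```python
import math

def path_length(path):
    """Total Euclidean length of a list of (x, y) waypoints."""
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

def plan_path(rrt_star, rrt_fallback, start, candidates, budget):
    """Pick the shortest obstacle-free path over candidate goal points.
    `rrt_star` and `rrt_fallback` are illustrative planner callables
    that return a list of waypoints, or None if no path is found
    within `budget` seconds."""
    best = None
    for goal in candidates:
        path = rrt_star(start, goal, budget)
        if path is None:
            # Convergent planner timed out: try the exploratory RRT.
            path = rrt_fallback(start, goal, budget)
        if path is not None and (best is None
                                 or path_length(path) < path_length(best)):
            best = path
    return best

# Dummy planners: RRT* "fails" on the far goal; the fallback detours.
star = lambda s, g, t: [s, g] if g[0] <= 10 else None
fallback = lambda s, g, t: [s, (g[0] / 2, 5.0), g]
best = plan_path(star, fallback, (0.0, 0.0),
                 [(10.0, 0.0), (20.0, 0.0)], budget=0.05)
```

Because the fallback paths are random, two consecutive calls can prefer different candidates, which is exactly the decision-flickering described above.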

Path planner in light traffic. The planner takes the car to the right lane to overtake the obstacles, as shown by the red line. Then, it produces a turn aimed at the turn apex. Note how the RRT path changes from a straight to a curved path as the road predictor component produces an accurate extrapolation of the supplied lane information.

 


Path planner in a more complicated traffic scenario. The planner takes the car between lanes to overtake the vehicles. Decision-flickering occurs as the planner weighs several non-convergent overtake paths.

Steering and Throttle Control

Finally, the path plan must be translated into signals applied to the car's steering wheel and throttle. Typically, two independent low-level controllers are employed: one for speed tracking and one for lateral path tracking. The provided baseline uses PID controllers for both. The baseline car occasionally oscillates or oversteers, an indicator of suboptimal parameter tuning. We experimented with replacing the baseline lateral controller with well-known alternatives, pure pursuit and Stanley controllers, but did not observe improved average performance across the vehicles and scenarios, so we kept the provided low-level controllers. We are confident that the low-level control can be improved: to perform well over a large class of vehicles and racetracks, the control approach must tune itself to each vehicle's dynamics as it identifies them. Adaptive control is an area we would like to explore in future competitions.
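For reference, the pure pursuit lateral controller we experimented with computes a steering angle from a single lookahead point. A minimal sketch, with an illustrative wheelbase value:

```python
import math

def pure_pursuit_steer(x, y, heading, target, wheelbase=2.5):
    """Pure pursuit lateral control. Returns a steering angle in
    radians toward a lookahead `target` point (wheelbase in meters
    is illustrative and vehicle-dependent)."""
    # Transform the target into the vehicle frame (x forward, y left).
    dx, dy = target[0] - x, target[1] - y
    lx = math.cos(-heading) * dx - math.sin(-heading) * dy
    ly = math.sin(-heading) * dx + math.cos(-heading) * dy
    ld2 = lx * lx + ly * ly            # squared lookahead distance
    curvature = 2.0 * ly / ld2         # pure pursuit curvature formula
    return math.atan(wheelbase * curvature)

# Target dead ahead: no steering correction needed.
angle = pure_pursuit_steer(0.0, 0.0, 0.0, (10.0, 0.0))
```

The fixed wheelbase constant is one reason a single tuning cannot serve every vehicle, which motivates the adaptive control direction mentioned above.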

 


Example of a path following failure. An obstacle initiates a lane change into the planned path. The planner adjusts with a lane change, but the vehicle cannot turn away in time. In this more complicated scenario, the path quality diminishes as an exploratory RRT planner is used since the convergent planner has failed to find a path.

 

Final Results

We placed first in the head-to-head category! The raceline planning and aggressive throttle tuning let our car pull ahead of the other autopilots and win the race.

The autopilot made some poor decisions in the solo category. Given our planner's preference for overtaking obstacles, we observed some crashes as it attempted to turn around nearby vehicles. In two cases, the car failed to finish the race, crashing into the road barriers. This happens when the autopilot's low-level control cannot turn as tightly as the path planner requires; we knew about this issue but couldn't solve it in time for the competition.

When the low-level control is sufficient, our planning strategy is effective. Adjusting the high-level control to be more cautious will improve the autopilot’s chances of not crashing, especially now that more dynamic obstacles are being added to the racing scenarios.  

Our code is available here. We also reimplemented it in Rust for improved performance: a faster program can find a better path in the same time window.

Autopilot Feature Wishlist

For the future, we would like to implement the following features in the autopilot:

  • Detecting unrealizable paths: A path that the low-level controller cannot realize will lead to unpredictable vehicle behaviors, including possible collisions. We would like to detect such paths.

  • Adaptive Self-tuning low-level control: Instead of fixed steering and throttle controllers, we would like to have self-tuning and optimized controllers for different car models. Using the same control policy for a motorcycle and a Tesla Cybertruck inevitably results in undesirable oscillations and turning responses.

  • Recovery system: A context-based recovery system that can intelligently respond to unmodeled and unexpected environments that lead to failures of planning and/or control.

Finishing a race!