Leveraging Symbolic Representations for Safe and Assured Learning

Part of DARPA’s Assured Autonomy program, this project seeks to provide a safety and verification framework for learning-enabled cyber-physical systems across several autonomous domains.

Machine learning and learning-enabled controllers have revolutionized cyber-physical systems (CPS) across multiple industries, particularly in air, ground, and deep-sea environments. Examples of CPS that depend on machine learning for autonomy include SpaceX spacecraft that dock autonomously with the International Space Station, the onboard systems of self-driving cars, and unmanned air vehicles (UAVs).

However, verifying and assuring the safety of learning-enabled cyber-physical systems has proven difficult. For example, multiple industries want UAVs to perform retail deliveries, as well as other activities that could place them in the same airspace as commercial jetliners, helicopters, and military aircraft.

UAVs, and other technologies that rely on advanced learning-enabled CPS, face the following challenges: 

  • Testing and verifying the safety of learning-enabled CPS.
  • Detecting and correcting anomalies that occur in the systems’ environments.
  • Establishing an underlying semantic framework for measuring assurance.

A Framework for Assured Learning

Galois seeks to solve these problems with Leveraging Symbolic Representations for Safe and Assured Learning (“Assured Learning”). As part of DARPA’s Assured Autonomy program, Assured Learning aims to develop an assurance framework specifically for CPS that rely on machine learning or learning-enabled controllers.

Assured Learning is designed to test and verify the safety of cyber-physical systems, but it aims to go further, exploring the following approaches:

  • Use Run-Time approaches to detect and recover from errors in operation.
  • Automate testing of learning-enabled controllers to increase assurance.
  • Leverage testing to create symbolic representations of learning-enabled controllers that enable deeper formal verification of their behavior.

Assured Learning will set a new standard by providing learning-enabled CPS with the ability to learn from mistakes. 

Highway for the Danger Drone 

The civilian aviation industry holds some of the most stringent safety standards of any field, second only to human space exploration. Safety-critical software must pass rigorous verification and safety assurance processes before it can be certified for use in civilian systems. These assurance guidelines make it extremely difficult, if not impossible, for current neural network technologies to be used as part of a flight control loop.

Advanced data-driven machine learning algorithms have been the key to phenomenal accomplishments. Overall, however, the ability to assure and verify their safety lags behind in many industry domains. For instance, learning-enabled UAVs do not have high assurance of system-level safety, nor can they interpret the effect of environmental disturbances on system performance and safety. In other words, there is no way to reliably guarantee that advanced unmanned aircraft won’t collide mid-air with piloted aircraft or crash into the ground.

Can We Take Assurance High Enough?

Assured Learning’s ultimate goal is to make sufficient transparency, assurance, and verification part of any learning-enabled CPS. Galois and research partners at Purdue University, University of Texas at Austin, Oregon State University, and Rice University have developed a three-part process to explore how such a system can work.

Training

Galois will help design and implement a toolchain that introduces an abstract-interpretation-based training strategy, so that neural networks can be trained to be provably correct with respect to specified properties.
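
The exact abstract domain the toolchain will use is not specified here. Purely as an illustrative sketch (not Galois’s toolchain), the code below shows interval bound propagation, one common abstract-interpretation technique: it soundly over-approximates a network’s outputs over an entire box of inputs, which is the core operation behind both certifying a trained network and folding provability into training. The two-layer network and the safety property are hypothetical.

    # A minimal sketch (not Galois's toolchain) of interval bound propagation,
    # one abstract-interpretation technique used both to verify trained networks
    # and, when folded into the loss, to train them toward provable properties.
    import numpy as np

    def affine_bounds(lo, hi, W, b):
        """Propagate an input interval [lo, hi] through x -> W @ x + b."""
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

    def relu_bounds(lo, hi):
        """Propagate an interval through the ReLU nonlinearity."""
        return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

    def output_bounds(x_lo, x_hi, layers):
        """Sound over-approximation of the network's outputs over an input box."""
        lo, hi = x_lo, x_hi
        for i, (W, b) in enumerate(layers):
            lo, hi = affine_bounds(lo, hi, W, b)
            if i < len(layers) - 1:          # ReLU on all hidden layers
                lo, hi = relu_bounds(lo, hi)
        return lo, hi

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        layers = [(rng.normal(size=(8, 4)) * 0.3, np.zeros(8)),
                  (rng.normal(size=(1, 8)) * 0.3, np.zeros(1))]
        # Hypothetical safety property: for any input in this box,
        # the single output command stays within [-1, 1].
        lo, hi = output_bounds(np.full(4, -0.1), np.full(4, 0.1), layers)
        print("output interval:", lo, hi)
        print("property certified:", bool(lo[0] >= -1.0 and hi[0] <= 1.0))

If the certified bounds are too loose to prove the property, a training strategy of this kind would typically penalize the worst-case bound in the loss so that the trained network both performs well and admits a proof.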

Testing and Verification

Inspired by the concept of concolic testing (i.e., a mix of concrete and symbolic execution), Galois plans to use a symbolic model of the neural network and its environment. The model will generate a coverage-maximizing test suite for closed-loop systems. A proof search and generation mechanism for assurance cases will be created during the design phase. The mechanism will operate in response to new observations, Run-Time simulation tests, and prediction/classification outputs produced by the learning system.
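
As a rough illustration of coverage-maximizing test generation for a closed-loop system (not the project’s concolic engine or symbolic model), the sketch below keeps a candidate test only when its simulated trajectory reaches state-space cells that no earlier test covered. The toy plant, controller, and coverage metric are all hypothetical stand-ins.

    # A minimal sketch of coverage-guided test generation for a closed-loop
    # system: candidate initial conditions are retained only if the resulting
    # trajectory visits state-space cells no earlier test reached.
    import random

    def controller(x, v):
        """Stand-in for a learning-enabled controller: drive position x to zero."""
        return -1.5 * x - 0.8 * v

    def simulate(x0, v0, steps=200, dt=0.05):
        """Closed-loop rollout; yields the visited (x, v) states."""
        x, v = x0, v0
        for _ in range(steps):
            v += controller(x, v) * dt
            x += v * dt
            yield x, v

    def cell(x, v, size=0.25):
        """Discretize the state space into coverage cells."""
        return (round(x / size), round(v / size))

    def generate_tests(budget=500, seed=0):
        rng = random.Random(seed)
        covered, suite = set(), []
        for _ in range(budget):
            x0, v0 = rng.uniform(-2, 2), rng.uniform(-2, 2)
            visited = {cell(x, v) for x, v in simulate(x0, v0)}
            if not visited <= covered:       # keep tests that add coverage
                covered |= visited
                suite.append((x0, v0))
        return suite, covered

    if __name__ == "__main__":
        suite, covered = generate_tests()
        print(f"{len(suite)} tests retained, {len(covered)} cells covered")

A concolic approach would replace the random sampling above with symbolic reasoning about the network and environment model, steering new tests toward behaviors the suite has not yet exercised.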

Run-Time Assurance

Run-Time Assurance technologies will do two things: monitor software for anticipated failures and automatically recover from those failures. Galois has chosen Copilot, a domain-specific language co-created with NASA, to develop Run-Time Assurance technologies for this purpose. When design-time assumptions are violated, the level of assurance will be reassessed, and the program will either inform the user or alter its behavior. This process will be coupled with open-world classification algorithms that can respond more robustly to unanticipated situations. The assessment of “anomaly effects” on system assurance will be supported by a formal framework that can reason about assurance and capture the inherent uncertainty of current learning-enabled CPS.
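
Copilot itself is a Haskell-embedded DSL that compiles stream monitors down to C. The Python sketch below is only a language-agnostic illustration of the monitor-and-fallback pattern described above, not Copilot code or Galois’s design; the limits, controllers, and recovery action are hypothetical.

    # An illustrative sketch of the run-time assurance pattern: a monitor checks
    # design-time assumptions each control cycle and, on violation, hands control
    # to a simple verified fallback instead of the learned controller.
    from dataclasses import dataclass

    @dataclass
    class State:
        altitude: float      # meters
        sink_rate: float     # meters per second (positive = descending)

    MIN_ALTITUDE = 30.0      # design-time assumption: stay above this floor
    MAX_SINK_RATE = 5.0      # design-time assumption: never descend faster

    def assumptions_hold(s: State) -> bool:
        """Run-time monitor: check that design-time assumptions still hold."""
        return s.altitude > MIN_ALTITUDE and s.sink_rate < MAX_SINK_RATE

    def learned_controller(s: State) -> float:
        """Stand-in for the learning-enabled controller's commanded climb rate."""
        return 0.1 * (100.0 - s.altitude)

    def fallback_controller(s: State) -> float:
        """Simple recovery behavior: climb until well clear of the floor."""
        return 2.0

    def step(s: State) -> float:
        """Use the learned controller while the monitor is satisfied;
        otherwise report the violation and switch to the fallback."""
        if assumptions_hold(s):
            return learned_controller(s)
        print(f"assumption violated at {s}; switching to fallback")
        return fallback_controller(s)

    if __name__ == "__main__":
        print(step(State(altitude=80.0, sink_rate=1.0)))   # nominal: learned output
        print(step(State(altitude=25.0, sink_rate=6.0)))   # violation: fallback

In the actual framework, monitors of this kind would be generated from formal specifications and coupled with the assurance-case reasoning described above, rather than hand-written checks.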

Keep Our Eyes on the (Safely) Circling Skies

Many scientists, researchers, and engineers recognize that the fundamental structure of machine learning algorithms must change for them to reach greater levels of reliability. Galois believes that Assured Learning will extend this assurance to the entire autonomy domain. One day, that could include applications that take humanity to the furthest reaches of space.

Distribution Statement “A” (Approved for Public Release, Distribution Unlimited). “The views, opinions and/or findings expressed are those of the author and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.”