Providing Safety and Verification for Learning-Enabled Cyber-Physical Systems

  • Matthew Clark

Machine learning has revolutionized cyber-physical systems (CPS) in multiple industries – in the air, on land, and in the deep sea. 

And yet, verifying and assuring the safety of advanced machine learning is difficult for several reasons: 

  1. State-Space Explosion: Autonomous systems are characteristically adaptive, intelligent, and may incorporate learning capabilities, yielding state spaces far too large to test exhaustively. 
  2. Unpredictable Environments: The power of autonomous agents is their ability to perform in unknown, untested environmental conditions. 
  3. Emergent Behavior: Interactions between systems, and among a system's own components, may induce unintended consequences. 
  4. Human-Machine Communication: Handoff, communication, and interplay between operator and autonomy are critical to the trust and effectiveness of an autonomous system. 

Galois seeks to solve these problems with Leveraging Symbolic Representations for Safe and Assured Learning (“Assured Learning”). As part of DARPA’s Assured Autonomy program, Assured Learning aims to develop an assurance framework specifically for CPS that rely on machine learning or learning-enabled controllers. 

The Assured Learning project is designed to test and verify the safety of cyber-physical systems, but its ambitions go further. Assured Learning aims to explore the following approaches: 

  • Use run-time approaches to detect and recover from errors in operation. 
  • Automate testing of learning-enabled controllers to increase assurance. 
  • Leverage testing to create symbolic representations of learning-enabled controllers that enable deeper formal verification of their behavior. 

Assured Learning will set a new standard by providing learning-enabled CPS with the ability to learn from mistakes. 

Galois and research partners at Purdue University, University of Texas at Austin, Oregon State University, and Rice University have developed a three-part process to explore how such a system can work: 

Training

Galois will help design and implement a toolchain that introduces an abstract-interpretation-based training strategy, one that can train neural networks to be provably correct. 
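
To make the idea concrete, here is a minimal sketch of abstract interpretation over a network using the interval domain (interval bound propagation), one common abstract domain for neural networks. The weights, network shape, and safety limit are all hypothetical illustrations, not Galois's actual toolchain:

```python
# Minimal interval-bound-propagation sketch; everything here is
# illustrative, not the project's real machinery.
import numpy as np

def interval_layer(W, b, lo, hi):
    """Soundly propagate an input box [lo, hi] through affine + ReLU.
    Positive and negative weights are split so the bounds stay valid."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    out_lo = W_pos @ lo + W_neg @ hi + b
    out_hi = W_pos @ hi + W_neg @ lo + b
    return np.maximum(out_lo, 0.0), np.maximum(out_hi, 0.0)  # ReLU is monotone

# Hypothetical 2-4-1 network and a box of radius eps around an input x.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)
x, eps = np.array([0.5, -0.2]), 0.1

lo, hi = interval_layer(W1, b1, x - eps, x + eps)
W2_pos, W2_neg = np.maximum(W2, 0.0), np.minimum(W2, 0.0)
out_lo = W2_pos @ lo + W2_neg @ hi + b2   # output layer has no ReLU
out_hi = W2_pos @ hi + W2_neg @ lo + b2

# Training toward provable correctness would penalise the worst case over
# the whole box (e.g. max(0, out_hi - safe_limit)) rather than only the
# network's output at x, so the learned weights carry a certificate.
print("output bounds over the box:", out_lo, out_hi)
```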

Testing and Verification

Inspired by the concept of concolic testing (i.e., a mix of concrete and symbolic execution), Galois plans to use a symbolic model of the neural network and its environment. 
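
As an illustration of what concolic testing can look like for a neural network, here is a hedged sketch using the z3 SMT solver: a concrete input fixes a ReLU activation pattern (the "path"), a safety property is checked symbolically on that path, and individual branches are then negated to explore neighboring paths. The tiny network, the property, and all names are hypothetical:

```python
# Concolic-style exploration of a tiny ReLU network, assuming the z3
# solver (pip install z3-solver). All values are illustrative.
import z3

W1, b1 = [[1.0, -1.0], [0.5, 2.0]], [0.0, -1.0]   # hidden layer (2 -> 2)
W2, b2 = [1.0, -1.0], 0.0                          # output layer (2 -> 1)

def preacts(xs):
    """Hidden-layer pre-activations; works on floats or z3 terms alike."""
    return [sum(w * x for w, x in zip(row, xs)) + b
            for row, b in zip(W1, b1)]

# Concrete phase: a sample input fixes which ReLUs fire (the "path").
x0 = [0.3, 0.2]
pattern = [p >= 0 for p in preacts(x0)]

# Symbolic phase: check a safety property (output <= 1) on that path,
# then negate one branch at a time to visit the neighbouring paths.
xs = [z3.Real("x0"), z3.Real("x1")]
box = [z3.And(x >= -1, x <= 1) for x in xs]

def find_violation(pat, limit=1.0):
    pre = preacts(xs)
    path = [(p >= 0) if on else (p < 0) for p, on in zip(pre, pat)]
    hidden = [p if on else z3.RealVal(0) for p, on in zip(pre, pat)]
    out = sum(w * h for w, h in zip(W2, hidden)) + b2
    s = z3.Solver()
    s.add(z3.And(box + path + [out > limit]))
    return s.model() if s.check() == z3.sat else None

for i in range(len(pattern) + 1):
    pat = list(pattern)
    if i > 0:
        pat[i - 1] = not pat[i - 1]   # the concolic "flip a branch" step
    cex = find_violation(pat)
    if cex is not None:
        print("counterexample on path", pat, "->", cex)
```

Within a fixed activation pattern the network is affine, so each per-path check is a linear query the solver can discharge exactly; the concrete run simply tells the tool which paths are worth exploring first.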

Run-Time Assurance

Run-Time Assurance technologies will do two things: monitor the software for anticipated failures during operation and automatically recover from those failures. Galois has chosen CoPilot, a domain-specific language co-created with NASA, to develop Run-Time Assurance technologies for this purpose. 
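
As a rough illustration of this monitor-and-recover pattern, here is a minimal sketch. CoPilot itself is a stream-based DSL embedded in Haskell that compiles monitors to C, so nothing below is CoPilot syntax, and every name and threshold is a hypothetical stand-in:

```python
# Plain-Python illustration of a run-time monitor guarding a
# learning-enabled controller; all names and limits are hypothetical.

def monitored_step(controller, fallback, state, altitude_floor=100.0):
    """Run the learning-enabled controller, but hand control to a simple
    verified fallback whenever its command would leave the safety envelope."""
    cmd = controller(state)
    predicted_alt = state["altitude"] + cmd["climb_rate"]
    if predicted_alt < altitude_floor:   # monitor: detect the failure
        return fallback(state)           # recovery: switch to safe policy
    return cmd

# Hypothetical controllers for illustration only.
learned   = lambda s: {"climb_rate": -30.0}   # untrusted network output
safe_mode = lambda s: {"climb_rate": +10.0}   # simple, verifiable policy

print(monitored_step(learned, safe_mode, {"altitude": 120.0}))
# -> {'climb_rate': 10.0}: the monitor rejects the unsafe command.
```

The design point is that the monitor and fallback are small enough to verify conventionally, so the overall system stays safe even when the learned component misbehaves.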

Galois believes that Assured Learning can extend assurance to the entire autonomy domain. One day, this could even apply to applications that take humanity to the furthest reaches of space.  

This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA). The views, opinions and/or findings expressed are those of the author and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.