DETECT, part of DARPA’s HARDEN program, aims to create tools that help programmers identify and reason about design and implementation choices that increase the exploitability of their systems.

While specific software flaws create vulnerabilities, a system’s design can impact whether (and to what extent) those vulnerabilities can be exploited. Subtle differences between a system’s ideal design and its implementation can lead to radically different run-time behaviors. These emergent behaviors arise from the interaction of many choices made in the design or implementation of the system. They magnify a vulnerability’s impact by giving attackers far more control over a software error’s effect, sometimes far more than one might expect. In the worst case, a system’s emergent behaviors may act like a “programmable weird machine,” making it susceptible to routine exploitation.

Design decisions that lead to weird machines often recur across different systems. As a result, hackers can use techniques developed to attack one system to exploit other systems where developers made similar decisions. Current methods of detecting potential vulnerabilities include compiler-based “sanitizers,” which are used during testing to detect violations of language-level invariants and surface flaws so they can be corrected before deployment. Unfortunately, language-level sanitizers must be designed by experts with knowledge of specific exploit techniques, which ultimately limits their scope.

Galois’s SEEC project (part of DARPA’s AIMEE program) used ideas from the secure compilation and program synthesis fields to detect and reason about weird machines without pre-existing knowledge of potential system flaws. However, SEEC still needed users to hand-write models of their desired abstractions and implementation.  

Part of DARPA’s HARDEN program, Discovering and Explaining Threats of Emergent Computation to enable Transformations (DETECT) will build on SEEC’s approach to understanding software exploitation and weaknesses by comparing the behavior of computations at different levels of abstraction in models created by other HARDEN performers. DETECT will improve SEEC’s scalability by replacing expensive, exhaustive program synthesis techniques with techniques from state-of-the-art differential fuzzing tools. DETECT will also develop novel techniques to identify the specific design decisions that lead to emergent computation, allowing system designers to fix these problems earlier in development.

Discovering emergent computations

DETECT will automatically explore models to identify interactions where the behavior of a component in one model differs from the behavior observed in the more abstract model. 

DETECT will extend existing fuzzing-based bug-finding techniques to automatically find unexpected behaviors in a program by comparing it to a reference. Fuzzing is a technique that systematically generates inputs for a program with the goal of triggering invalid or unexpected behaviors.
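To make the idea concrete, here is a minimal sketch of differential fuzzing, not DETECT’s actual implementation. The two models (a hypothetical abstract counter that saturates, and an implementation that silently wraps on overflow) are invented for illustration; the point is that random inputs reveal where the implementation’s behavior diverges from the reference.

```python
import random

# Hypothetical abstract model: a bounded counter that, by design,
# saturates at a maximum value of 255.
def abstract_counter(increments):
    total = 0
    for step in increments:
        total = min(total + step, 255)  # the design says: saturate
    return total

# Hypothetical implementation: uses 8-bit wrapping arithmetic, so it
# silently diverges from the abstract model once the total overflows.
def concrete_counter(increments):
    total = 0
    for step in increments:
        total = (total + step) % 256  # wraps instead of saturating
    return total

def differential_fuzz(abstract, concrete, trials=1000, seed=0):
    """Generate random inputs; bucket them by whether the models agree."""
    rng = random.Random(seed)
    agreeing, diverging = [], []
    for _ in range(trials):
        inputs = [rng.randint(0, 100) for _ in range(rng.randint(1, 10))]
        a, c = abstract(inputs), concrete(inputs)
        (agreeing if a == c else diverging).append((inputs, a, c))
    return agreeing, diverging

agreeing, diverging = differential_fuzz(abstract_counter, concrete_counter)
# Inputs in `diverging` witness behavior present in the implementation
# but absent from the abstract model.
```

The inputs in the `diverging` bucket are exactly the kind of executable evidence of emergent computation described below.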

Galois’s emergent computation discovery process will produce collections of inputs and observations that demonstrate the two models’ common behaviors, as well as collections of inputs and outputs that demonstrate emergent computations in the less abstract model.

Rather than simply exploring each model and looking for crashes, DETECT will use a multi-step approach: first explore the program to determine what it can do, then search for inputs that cause corresponding behavior in the reference design or implementation. To vastly improve the search’s efficiency, DETECT will adapt techniques traditionally used for proving the security and correctness of programs and compilers.

Providing developer feedback

While the test cases created in the previous step provide executable evidence of emergent computation stemming from the target system’s design, they are not immediately actionable. Developers would still need to carefully investigate these artifacts to understand how their design choices interact to enable the discovered behaviors. DETECT will therefore analyze the intended and emergent computations and generate explanations of abstraction-level invariants that should hold in the system but do not, due to emergent computation. To do so, DETECT will build on previously developed techniques for dynamic invariant detection used to aid debugging and testing.
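A minimal sketch of the dynamic-invariant-detection idea (in the spirit of tools like Daikon, though the traces, candidate invariants, and helper names here are all illustrative): observe variable values across runs of the intended behavior, keep the candidate invariants that every observation satisfies, then report which of those the emergent-computation traces violate.

```python
def infer_invariants(traces, candidates):
    """Keep only the candidate invariants that hold on every trace."""
    return {name: check for name, check in candidates.items()
            if all(check(t) for t in traces)}

# Hypothetical observations of a length-prefixed buffer during intended use...
intended_traces = [{"length": n, "buffer_size": 16} for n in range(0, 17)]
# ...and traces from inputs that exercised emergent behavior.
emergent_traces = [{"length": 64, "buffer_size": 16}]

# Candidate abstraction-level invariants to test against the traces.
candidates = {
    "length <= buffer_size": lambda t: t["length"] <= t["buffer_size"],
    "length >= 0": lambda t: t["length"] >= 0,
}

invariants = infer_invariants(intended_traces, candidates)
violated = {name for name, check in invariants.items()
            if any(not check(t) for t in emergent_traces)}
# `violated` names invariants the intended design maintains but the
# emergent computation breaks -- here, "length <= buffer_size".
```

An explanation like “`length <= buffer_size` holds at the abstract level but not in the implementation” is far more actionable for a developer than a raw crashing input.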

Refactoring vulnerable designs 

DETECT will use the invariants generated in the previous step to suggest refactorings. These refactorings would ensure that the invariants needed to limit emergent execution hold at high levels of abstraction and are preserved in the final implementation.

Once developers have refactored their design or implementation to fix a source of emergent computation, DETECT will re-execute the generated test cases to validate that the changes have disrupted the emergent computation. If the changes are successful, the developer can return to a previous step of the DETECT workflow to find and repair other sources of emergent computation.
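This re-validation step can be sketched as a simple replay harness, again with invented names and a toy counter example: the previously diverging inputs are run against the refactored implementation, and the fix is confirmed only if none of them still diverge from the abstract model.

```python
def revalidate(abstract, refactored, saved_cases):
    """Replay saved test cases; return those on which the models still diverge."""
    return [inputs for inputs in saved_cases
            if abstract(inputs) != refactored(inputs)]

# Hypothetical abstract model: a counter that saturates at 255 by design.
def abstract_counter(increments):
    total = 0
    for step in increments:
        total = min(total + step, 255)
    return total

# Hypothetical refactored implementation, changed to saturate like the design
# instead of wrapping on 8-bit overflow.
def fixed_counter(increments):
    total = 0
    for step in increments:
        total = min(total + step, 255)
    return total

# Inputs that previously witnessed divergence (illustrative values).
saved_cases = [[100, 100, 100], [255, 1], [50]]
remaining = revalidate(abstract_counter, fixed_counter, saved_cases)
# An empty `remaining` list means the refactoring disrupted the
# emergent computation on all known witnesses.
```

An empty result does not prove the absence of other emergent computations, which is why the workflow loops back to the discovery step.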

Future plans include the ability to assess the severity of emergent computations and mitigate any remaining emergent computations. 

You can read more about the DETECT project in our announcement.

This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) and Naval Information Warfare Center Pacific (NIWC Pacific) under N66001-22-C-4027. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of DARPA and NIWC Pacific.

Distribution Statement “A” (Approved for Public Release, Distribution Unlimited).  If you have any questions, please contact the Public Release Center.