Archives

Verifying Consistency Between Models

Numerous aircraft development programs have suffered cost and schedule delays due in part to unplanned rework that occurred during integration and acceptance testing. Many of the errors that required rework can be traced back to inconsistencies between different specifications and models developed by or for different disciplines and suppliers early in the development process. We describe a novel method for specifying and verifying complex consistency properties between different kinds of models. This method makes use of a gray-box model integration framework and an SMT verification tool. We report on the application of this method to one specific challenge problem, verifying that a logical computer system architecture specified in AADL and a solid model specified in Creo together satisfy a particular consistency property.
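As a loose illustration of the kind of cross-model property involved (the component names, bindings, and the property itself are invented for this sketch, not taken from the paper), a consistency check can be phrased as a search for counterexamples, which an SMT solver performs symbolically over all assignments:

```python
# Hypothetical extracts from two models: the AADL logical architecture binds
# software components to hardware units, and the Creo solid model places each
# unit in a physical bay. All names here are illustrative.
redundant_pairs = [("flight_ctrl_a", "flight_ctrl_b")]
binding = {"flight_ctrl_a": "cpu1", "flight_ctrl_b": "cpu2", "nav": "cpu2"}
bay = {"cpu1": "forward_bay", "cpu2": "aft_bay"}

def counterexamples(binding, bay, pairs):
    # Consistency property: redundant components must be bound to hardware
    # units located in different bays. Return every violating pair.
    return [(a, b) for a, b in pairs if bay[binding[a]] == bay[binding[b]]]

assert counterexamples(binding, bay, redundant_pairs) == []
```

An SMT tool checks the same property without enumerating models explicitly, and returns a concrete violating assignment when the property fails.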

Read More

High-Assurance Modeling and Rapid Engineering for Embedded Systems Using AADL

Many domains rely on real-time embedded systems that require high levels of assurance. Assurance of such systems is challenging due to the need to support compositionality related to platform-based development, software product lines, interoperability, and system-of-system concepts. The Architecture Analysis and Design Language (AADL) provides a modeling framework that emphasizes component-based development, modeling elements with semantics capturing common embedded system threading and communication patterns, and standardized run-time service interfaces. Through its annex languages and tool plug-in extensibility mechanisms, it also supports a variety of architecture specification and analysis capabilities, including component behavioral contracts, hazard analysis, schedulability analysis, and dependence analysis. The AADL vision emphasizes being able to prototype, assure, and deploy system implementations derived from the models. This talk will present an overview of HAMR (High-Assurance Modeling and Rapid Engineering), a multiple-platform code-generation, development, and verification tool-chain for AADL-specified systems. HAMR's architecture factors code generation through AADL's standardized run-time services (RTS). HAMR uses the AADL RTS as an abstract, platform-independent realization of execution semantics that can be instantiated by backend translations for different platforms. Currently supported translation targets are: (1) Slang (a safety-critical subset of Scala with JVM-based deployment as well as C code generation designed for embedded systems), (2) a C back-end for Linux with communication based on System V inter-process communication primitives, and (3) a C back-end for the seL4 verified micro-kernel being used in a number of US Department of Defense research projects. The C generated by HAMR is also compatible with the CompCert verified C compiler.
HAMR supports integrations with other languages (e.g., CakeML) through its generated AADL RTS foreign-function interface facilities.
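A minimal sketch of the idea of factoring component code through an abstract run-time-services interface (the class, method, and port names here are invented for illustration; this is not HAMR's actual API): component logic is written only against the abstract interface, so different backends can instantiate it.

```python
from abc import ABC, abstractmethod

class RTS(ABC):
    # Abstract run-time services: port communication only, no platform details.
    @abstractmethod
    def put(self, port, value): ...
    @abstractmethod
    def get(self, port): ...

class InMemoryRTS(RTS):
    # Toy stand-in for one backend instantiation of the abstract RTS.
    def __init__(self):
        self.ports = {}
    def put(self, port, value):
        self.ports[port] = value
    def get(self, port):
        return self.ports.get(port)

def thermostat_step(rts):
    # Component logic written purely against the abstract RTS interface.
    temp = rts.get("temp_in")
    rts.put("heat_cmd", temp is not None and temp < 18.0)

rts = InMemoryRTS()
rts.put("temp_in", 15.0)
thermostat_step(rts)
assert rts.get("heat_cmd") is True
```

Swapping `InMemoryRTS` for another implementation changes the platform without touching `thermostat_step`, which mirrors how a single component can target the JVM, Linux, or seL4.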

Read More

Verified Cryptographic Code for Everybody

Along with Amazon Web Services, Galois is publishing a new paper titled “Verified Cryptographic Code for Everybody,” and we really do mean everybody. One benefit of the way we’ve done this work is that everyone who was using the code already is now using verified code. The library we’ve verified parts of is called AWS-LibCrypto. Much of that library is code that originated from BoringSSL, and BoringSSL is composed of code originating from OpenSSL. By almost any metric, these are the most-used cryptography libraries in the world, and we have verified some of the most critical code that they contain!

In the proof, we used SAW to verify that hand-written cryptographic code is safe and correct with respect to Cryptol specifications. The optimizations and details of the code we were verifying made the work very challenging, but equally rewarding and important. We're happy that our proofs have helped provide assurance for commonly used cryptography. The paper describes how our proofs work, and it also covers some of the challenges and engineering required to accomplish such a demanding task. I hope you give it a read, and please reach out with your questions.
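As a loose analogy for the spec-versus-implementation relationship (this is plain Python, not SAW or Cryptol, and the function is invented for illustration): the proof shows that an optimized routine agrees with a simple reference definition. SAW establishes such equivalences symbolically for all inputs, where the sketch below only checks a finite domain.

```python
def rotl8_spec(x, n):
    # Reference definition of an 8-bit left rotation, written for clarity.
    return ((x << n) | (x >> (8 - n))) & 0xFF

def rotl8_fast(x, n):
    # A hand-optimized variant: masks only the shifted-left term.
    return ((x << n) & 0xFF) | (x >> (8 - n))

# Exhaustive check over the full 8-bit domain for rotation amounts 1..7.
assert all(rotl8_spec(x, n) == rotl8_fast(x, n)
           for x in range(256) for n in range(1, 8))
```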

Read More

Authoritative Source of Truth Study

This is the final report for the Authoritative Source of Truth (ASoT) study. From September 2019 to December 2020, Adventium Labs conducted a study to elicit, refine, and exercise requirements for an ASoT. This study was part of the Army-funded Joint Multi-Role (JMR) Mission Systems Architecture Demonstration (MSAD) Capstone demonstration.
The first objective of this study was to define requirements for an ASoT at a level of detail sufficient to enable acquisition or use of ASoT capabilities in support of future Army efforts. The second objective was to provide proof-of-concept demonstrations that the identified requirements can be satisfied. During the first six months of the effort, we elicited requirements by conducting interviews with Army stakeholders and surveying the existing body of ASoT research. In the following nine months, we assembled and presented three demonstrations, each highlighting a different aspect of the ASoT requirements as prioritized by Army stakeholders (in order: requirements management, analysis and collaboration, and traceability and digital thread).

Read More

Incremental Causal Reasoning

  • Thomas Dean

Causal reasoning comprises a large portion of the inference performed by automatic planners. In this paper, we consider a class of inference systems that are said to be predictive in that they derive certain causal consequences of a base set of premises corresponding to a set of events and constraints on their occurrence. The inference system is provided with a set of rules, referred to as a causal theory, that specifies, with some limited accuracy, the cause and effect relationships between objects and processes in a given domain. As modifications are made to the base set of premises, the inference system is responsible for accounting for all and only those inferences licensed by the premises and current causal theory. Unfortunately, the general decision problem for nontrivial causal theories involving partially ordered events is NP-complete. As an alternative to a complete but potentially exponential-time inference procedure, we describe a limited-inference polynomial-time algorithm capable of dealing with partially ordered events. This algorithm generates a useful subset of those inferences that will be true in all total orders consistent with some specified partial order. The algorithm is incremental and, while it is not complete, it is provably sound.
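A minimal sketch of the sound-but-incomplete idea (the events and the causal rule are invented for this sketch, not the paper's notation): derive only those consequences whose ordering preconditions hold in every total order consistent with the partial order, which can be checked in polynomial time via transitive closure.

```python
def transitive_closure(events, before):
    # before: set of (earlier, later) pairs from the partial order.
    # An event pair (a, b) is in the closure iff a precedes b in *every*
    # total order consistent with the partial order.
    reach = set(before)
    for k in events:
        for i in events:
            for j in events:
                if (i, k) in reach and (k, j) in reach:
                    reach.add((i, j))
    return reach

events = ["load", "arm", "fire"]
before = {("load", "arm"), ("arm", "fire")}
necessarily = transitive_closure(events, before)

# Toy causal rule: if "load" necessarily precedes "fire", infer its effect.
derived = set()
if ("load", "fire") in necessarily:
    derived.add("projectile_launched")
assert derived == {"projectile_launched"}
```

The check is sound (anything derived holds in every consistent total order) but incomplete: consequences that hold in all total orders for subtler reasons are missed.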

Read More

An Analysis of Time Dependent Planning

  • Thomas Dean

This paper presents a framework for exploring issues in time-dependent planning: planning in which the time available to respond to predicted events varies, and the decision making required to formulate effective responses is complex. Our analysis of time-dependent planning suggests an approach based on a class of algorithms that we call anytime algorithms. Anytime algorithms can be interrupted at any point during computation to return a result whose utility is a function of computation time. We explore methods for solving time-dependent planning problems based on the properties of anytime algorithms.
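A minimal sketch of an anytime algorithm (our own toy example, not from the paper): each iteration leaves a usable answer, and the answer's quality improves the longer the computation runs before its deadline.

```python
import time

def anytime_sqrt(x, deadline):
    # Newton iteration as an anytime algorithm: it can be cut off at any
    # point and still return an estimate, and the estimate improves
    # monotonically with computation time.
    estimate = x
    while time.monotonic() < deadline:
        estimate = 0.5 * (estimate + x / estimate)
    return estimate

# Give the algorithm a 10 ms budget; utility is a function of that budget.
result = anytime_sqrt(2.0, time.monotonic() + 0.01)
assert abs(result * result - 2.0) < 1e-6
```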

Read More

Solving Time-Dependent Planning Problems

A planning problem is time-dependent if the time spent planning affects the utility of the system's performance. In [Dean and Boddy, 1988], we define a framework for constructing solutions to time-dependent planning problems, called expectation-driven iterative refinement. In this paper, we analyze and solve a moderately complex time-dependent planning problem involving path planning for a mobile robot, as a way of exploring a methodology for applying expectation-driven iterative refinement. The fact that we construct a solution to the proposed problem without appealing to luck or extraordinary inspiration provides evidence that expectation-driven iterative refinement is an appropriate framework for solving time-dependent planning problems.
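One way to picture iterative refinement in path planning (a toy sketch under our own assumptions, not the paper's algorithm): begin with a coarse path and add waypoints each round, so interrupting after any round still yields a usable path.

```python
def refine(path, rounds):
    # Each round bisects every segment, doubling waypoint density.
    # Stopping after any round leaves a complete, usable path.
    for _ in range(rounds):
        out = [path[0]]
        for a, b in zip(path, path[1:]):
            mid = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
            out += [mid, b]
        path = out
    return path

coarse = [(0.0, 0.0), (8.0, 4.0)]
assert len(refine(coarse, 1)) == 3
assert len(refine(coarse, 3)) == 9
```

In the expectation-driven setting, the number of refinement rounds to invest would itself be chosen from expectations about how much each round improves the path.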

Read More

Approximation Algorithms for Planning and Control

A control system operating in a complex environment will encounter a variety of different situations, with varying amounts of time available to respond to critical events. Ideally, such a control system will do the best possible with the time available. In other words, its responses should approximate those that would result from having unlimited time for computation, where the degree of the approximation depends on the amount of time it actually has. There exist approximation algorithms for a wide variety of problems. Unfortunately, the solution to any reasonably complex control problem will require solving several computationally intensive problems. Algorithms for successive approximation are a subclass of the class of anytime algorithms: algorithms that return answers for any amount of computation time, where the answers improve as more time is allotted. In this paper, we describe an architecture for allocating computation time to a set of anytime algorithms, based on expectations regarding the value of the answers they return. The architecture we describe is quite general, producing optimal schedules for a set of algorithms under widely varying conditions.
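A greedy sketch of allocating computation time by expected value (the performance profiles below are invented for illustration; greedy allocation is optimal when profiles are concave, i.e., exhibit diminishing returns):

```python
# Each anytime algorithm has a performance profile: expected answer value
# as a function of time steps allocated to it. (Invented numbers.)
profiles = {
    "planner": [0.0, 0.5, 0.7, 0.8, 0.85],
    "monitor": [0.0, 0.3, 0.55, 0.75, 0.9],
}

def schedule(profiles, total_steps):
    # Greedily give each time step to the algorithm with the largest
    # expected marginal gain for its next step.
    steps = {name: 0 for name in profiles}
    for _ in range(total_steps):
        def gain(name):
            used, prof = steps[name], profiles[name]
            return prof[used + 1] - prof[used] if used + 1 < len(prof) else 0.0
        best = max(steps, key=gain)
        steps[best] += 1
    return steps

alloc = schedule(profiles, 4)
assert sum(alloc.values()) == 4
```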

Read More

Scheduling with Partial Orders and a Causal Model

In an ongoing project at HTC, we are implementing a prototype scheduling system for a NASA domain using the “Time Map Manager” (TMM). TMM representations are flexible enough to permit the representation of precedence constraints, metric constraints between activities, and constraints relative to a variety of references (e.g. Mission Elapsed Time vs. Mission Day). There is also support for a simple form of causal reasoning (projection), dynamic database updates, and monitoring certain database properties as changes occur over time. The greatest apparent advantage to using the TMM is the flexibility added to the scheduling process: schedules are constructed by a process of “iterative refinement,” in which scheduling decisions correspond to constraining an activity either with respect to another activity or with respect to some timeline. The schedule gradually “hardens” as constraints are added. Undoing a decision means removing a constraint, not removing an activity from a specified place on the timeline. For example, we can move an activity around on the timeline by deleting constraints and adding new ones, and other activities constrained with respect to the one we move will move as well (assuming they can, given current constraints).
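A minimal sketch of this constraint-based style of scheduling (illustrative activity names and numbers; this is not the TMM API): activities are positioned only by constraints, and "moving" an activity means editing constraints, after which dependent activities move with it.

```python
def earliest_starts(activities, constraints):
    # constraints: (a, b, min_gap) meaning b starts at least min_gap
    # time units after a starts. Earliest start times fall out of
    # repeated relaxation (longest-path propagation).
    start = {a: 0 for a in activities}
    for _ in activities:
        for a, b, gap in constraints:
            start[b] = max(start[b], start[a] + gap)
    return start

acts = ["undock", "burn", "survey"]
cons = [("undock", "burn", 30), ("burn", "survey", 10)]
assert earliest_starts(acts, cons)["survey"] == 40

# Moving "burn" later means editing its constraint, not pinning it to a
# slot; "survey", constrained relative to "burn", moves along with it.
cons2 = [("undock", "burn", 45), ("burn", "survey", 10)]
assert earliest_starts(acts, cons2)["survey"] == 55
```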

Read More