Archives

Authoritative Source of Truth Study

This is the final report for the Authoritative Source of Truth (ASoT) study. From September 2019 to December 2020, Adventium Labs conducted a study to elicit, refine, and exercise requirements for an ASoT. This study was part of the Army-funded Joint Multi-Role (JMR) Mission Systems Architecture Demonstration (MSAD) Capstone demonstration.
The objective of this study was to define requirements for an ASoT at a level of detail sufficient to enable acquisition or use of ASoT capabilities in support of future Army efforts. A second objective was to provide proof-of-concept demonstrations that the identified requirements can be satisfied. During the first six months of the effort we elicited requirements by conducting interviews with Army stakeholders and surveying the existing body of ASoT research. In the following nine months we assembled and presented three demonstrations, each highlighting a different aspect of the ASoT requirements as prioritized by Army stakeholders (in order: requirements management, analysis and collaboration, traceability and digital thread).

Read More

Incremental Causal Reasoning

  • Thomas Dean

Causal reasoning comprises a large portion of the inference performed by automatic planners. In this paper, we consider a class of inference systems that are said to be predictive in that they derive certain causal consequences of a base set of premises corresponding to a set of events and constraints on their occurrence. The inference system is provided with a set of rules, referred to as a causal theory, that specifies, with some limited accuracy, the cause and effect relationships between objects and processes in a given domain. As modifications are made to the base set of premises, the inference system is responsible for accounting for all and only those inferences licensed by the premises and current causal theory. Unfortunately, the general decision problem for nontrivial causal theories involving partially ordered events is NP-complete. As an alternative to a complete but potentially exponential-time inference procedure, we describe a limited-inference polynomial-time algorithm capable of dealing with partially ordered events. This algorithm generates a useful subset of those inferences that will be true in all total orders consistent with some specified partial order. The algorithm is incremental and, while it is not complete, it is provably sound.
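
The paper's own limited-inference procedure is not reproduced here; as a rough Python illustration of the sound-but-incomplete idea, the sketch below fires a causal rule only when the ordering it requires holds in every total order consistent with the partial order, i.e., when it lies in the transitive closure of the known precedences. The rule format and event names are assumptions made for the example, not the paper's causal theory language.

```python
# Illustrative sketch: sound-but-incomplete causal inference over partially
# ordered events. A rule fires only when its antecedent ordering is entailed
# by ALL total orders, i.e., appears in the transitive closure.
from itertools import product

def transitive_closure(precedes):
    """precedes: set of (a, b) pairs meaning event a occurs before event b."""
    closure = set(precedes)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(closure), repeat=2):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure

def derive(precedes, rules):
    """rules: ((a, b), consequence) -- 'if a precedes b, conclude consequence'.
    Sound (every conclusion holds in all total orders) but not complete."""
    closure = transitive_closure(precedes)
    return {c for pair, c in rules if pair in closure}

precedes = {("heat", "mix"), ("mix", "pour")}
rules = [(("heat", "pour"), "served-warm"),   # entailed by transitivity: fires
         (("pour", "heat"), "served-cold")]   # not entailed: never fires
print(derive(precedes, rules))                # {'served-warm'}
```

Adding a premise is just another pair in `precedes` followed by re-closing, which gives the incremental flavor of the update, though the paper's algorithm maintains its conclusions far more efficiently than recomputing the closure.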

Read More

An Analysis of Time Dependent Planning

  • Thomas Dean

This paper presents a framework for exploring issues in time-dependent planning: planning in which the time available to respond to predicted events varies, and the decision making required to formulate effective responses is complex. Our analysis of time-dependent planning suggests an approach based on a class of algorithms that we call anytime algorithms. Anytime algorithms can be interrupted at any point during computation to return a result whose utility is a function of computation time. We explore methods for solving time-dependent planning problems based on the properties of anytime algorithms.
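
The anytime property is easy to see in a small, self-contained Python sketch (not taken from the paper): a random-sampling minimizer that always holds a best-so-far answer, so interrupting it earlier simply returns a lower-quality result.

```python
# Illustrative anytime algorithm: it can be cut off at any moment and still
# return its best answer so far; expected quality improves with time.
import random, time

def anytime_minimize(f, sample, deadline):
    """Improve an estimate of min f until `deadline` passes."""
    best_x = sample()
    best_v = f(best_x)
    while time.monotonic() < deadline:          # interruption point
        x = sample()
        v = f(x)
        if v < best_v:
            best_x, best_v = x, v
    return best_x, best_v

# More computation time yields (stochastically) better answers.
f = lambda x: (x - 3.14) ** 2
for budget in (0.01, 0.1):
    x, v = anytime_minimize(f, lambda: random.uniform(-10, 10),
                            time.monotonic() + budget)
    print(f"budget={budget:.2f}s  f({x:.3f}) = {v:.6f}")
```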

Read More

Solving Time-Dependent Planning Problems

A planning problem is time-dependent if the time spent planning affects the utility of the system’s performance. In [Dean and Boddy, 1988], we define a framework for constructing solutions to time-dependent planning problems, called expectation-driven iterative refinement. In this paper, we analyze and solve a moderately complex time-dependent planning problem involving path planning for a mobile robot, as a way of exploring a methodology for applying expectation-driven iterative refinement. The fact that we construct a solution to the proposed problem without appealing to luck or extraordinary inspiration provides evidence that expectation-driven iterative refinement is an appropriate framework for solving time-dependent planning problems.
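
The robot path-planning solution itself is developed in the paper; the sketch below is only a minimal Python stand-in showing the shape of expectation-driven iterative refinement: keep improving a path while the recent average gain from refinement exceeds an assumed cost of planning time. The `penalty` obstacle model and all parameters are invented for the example.

```python
# Toy stand-in for expectation-driven iterative refinement of a robot path.
import random

def cost(path, penalty):
    """Path length plus a soft penalty sampled at segment midpoints."""
    total = 0.0
    for (x1, y1), (x2, y2) in zip(path, path[1:]):
        total += ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        total += penalty((x1 + x2) / 2, (y1 + y2) / 2)
    return total

def refine_step(path, penalty):
    """One refinement: nudge a random interior waypoint, keep it if cost drops."""
    i = random.randrange(1, len(path) - 1)
    x, y = path[i]
    candidate = list(path)
    candidate[i] = (x + random.uniform(-0.5, 0.5), y + random.uniform(-0.5, 0.5))
    return candidate if cost(candidate, penalty) < cost(path, penalty) else path

def plan(start, goal, penalty, time_cost=1e-3, window=50):
    """Refine while expected gain from more planning exceeds the cost of delay."""
    mid = ((start[0] + goal[0]) / 2, (start[1] + goal[1]) / 2)
    path = [start, mid, goal]
    avg_gain = float("inf")
    while avg_gain >= time_cost:    # expectation-driven stopping rule
        before = cost(path, penalty)
        for _ in range(window):
            path = refine_step(path, penalty)
        avg_gain = (before - cost(path, penalty)) / window
    return path

# Soft obstacle centered at (5, 0): cost falls smoothly as the path detours.
penalty = lambda x, y: max(0.0, 9.0 - ((x - 5) ** 2 + y ** 2))
print(plan((0.0, 0.0), (10.0, 0.0), penalty))
```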

Read More

Approximation Algorithms for Planning and Control

A control system operating in a complex environment will encounter a variety of different situations, with varying amounts of time available to respond to critical events. Ideally, such a control system will do the best possible with the time available. In other words, its responses should approximate those that would result from having unlimited time for computation, where the degree of the approximation depends on the amount of time it actually has. There exist approximation algorithms for a wide variety of problems. Unfortunately, the solution to any reasonably complex control problem will require solving several computationally intensive problems. Algorithms for successive approximation are a subclass of the class of anytime algorithms: algorithms that return answers for any amount of computation time, where the answers improve as more time is allotted. In this paper, we describe an architecture for allocating computation time to a set of anytime algorithms, based on expectations regarding the value of the answers they return. The architecture we describe is quite general, producing optimal schedules for a set of algorithms under widely varying conditions.
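
A simplified reading of such an architecture can be sketched as deliberation scheduling over performance profiles: given each anytime algorithm's expected answer value as a function of computation time, greedily grant small time slices to the algorithm with the largest marginal expected gain (a rule that is optimal when the profiles are concave). The profiles below are invented for illustration and are not from the paper.

```python
# Illustrative deliberation scheduler: repeatedly grant a small slice of time
# to the anytime algorithm whose performance profile promises the largest
# marginal gain in expected value.
import math

def allocate(profiles, budget, slice_=0.1):
    """profiles: {name: f(t) -> expected value of answer after t seconds}."""
    alloc = {name: 0.0 for name in profiles}
    spent = 0.0
    while spent + slice_ <= budget:
        gains = {name: f(alloc[name] + slice_) - f(alloc[name])
                 for name, f in profiles.items()}
        best = max(gains, key=gains.get)      # largest marginal expected gain
        alloc[best] += slice_
        spent += slice_
    return alloc

# Invented concave profiles: diminishing returns on computation time.
profiles = {
    "route_planner": lambda t: 1 - math.exp(-2.0 * t),   # pays off quickly
    "grasp_planner": lambda t: 1 - math.exp(-0.5 * t),   # pays off slowly
}
print(allocate(profiles, budget=3.0))
```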

Read More

Scheduling with Partial Orders and a Causal Model

In an ongoing project at HTC, we are implementing a prototype scheduling system for a NASA domain using the “Time Map Manager” (TMM). TMM representations are flexible enough to permit the representation of precedence constraints, metric constraints between activities, and constraints relative to a variety of references (e.g. Mission Elapsed Time vs. Mission Day). There is also support for a simple form of causal reasoning (projection), dynamic database updates, and monitoring certain database properties as changes occur over time. The greatest apparent advantage to using the TMM is the flexibility added to the scheduling process: schedules are constructed by a process of “iterative refinement,” in which scheduling decisions correspond to constraining an activity either with respect to another activity or with respect to some timeline. The schedule gradually “hardens” as constraints are added. Undoing a decision means removing a constraint, not removing an activity from a specified place on the timeline. For example, we can move an activity around on the timeline by deleting constraints and adding new ones, and other activities constrained with respect to the one we move will move as well (assuming they can, given current constraints).
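
The TMM itself is not sketched here, but the flavor of refinement-by-constraint can be shown with a toy simple-temporal-network stand-in, where scheduling decisions add constraints, undoing them deletes constraints, and propagation moves the affected activities. The activity names and bounds are invented for the example.

```python
# Toy stand-in for constraint-based schedule refinement: a simple temporal
# network where each constraint bounds the delay between two time points.
INF = float("inf")

class TimeMap:
    def __init__(self, points):
        self.points = list(points)
        self.cons = {}                        # (a, b) -> max of time(b)-time(a)

    def constrain(self, a, b, lo, hi):
        """Decision: require lo <= time(b) - time(a) <= hi."""
        self.cons[(a, b)] = min(self.cons.get((a, b), INF), hi)
        self.cons[(b, a)] = min(self.cons.get((b, a), INF), -lo)

    def relax(self, a, b):
        """Undo a decision by removing the constraint, not the activity."""
        self.cons.pop((a, b), None)
        self.cons.pop((b, a), None)

    def latest(self, origin):
        """Shortest paths from origin bound each point's latest feasible time."""
        d = {p: (0.0 if p == origin else INF) for p in self.points}
        for _ in self.points:                 # Bellman-Ford style propagation
            for (a, b), w in self.cons.items():
                if d[a] + w < d[b]:
                    d[b] = d[a] + w
        return d

tm = TimeMap(["MET0", "deploy", "downlink"])
tm.constrain("MET0", "deploy", 10, 20)        # deploy 10-20 min after MET 0
tm.constrain("deploy", "downlink", 5, 15)     # downlink 5-15 min after deploy
print(tm.latest("MET0"))                      # downlink forced within 35 min
tm.relax("deploy", "downlink")                # the schedule "softens" again
```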

Read More

Managing Disjunction for Practical Temporal Reasoning

Ambiguous conclusions are inescapable in temporal reasoning. Lack of precise information about what events happen when results in uncertainty regarding the events’ effects. Incomplete information and nonmonotonic inference result in situations where there is more than one set of possible conclusions, even when there is no temporal uncertainty at all. In an implemented system, this ambiguity is a computational problem as well as a semantic one. We discuss some of the sources of this ambiguity, which we treat as explicit disjunction, in the sense that ambiguous information can be interpreted as defining a set of possible inferences. Three ways of handling this disjunction are to represent it explicitly, to remove it by limiting the expressive power of the system, or to approximate a set of disjuncts using a weaker form of representation. We have employed primarily the latter two of these approaches to implement an expressive and efficient temporal reasoning engine that performs sound inference in accordance with a well-defined formal semantics.
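
The third strategy, approximating a set of disjuncts with a weaker representation, can be shown with a toy example (not the system's actual representation): collapse alternative time windows into their bounding interval, losing precision but keeping inference sound.

```python
# Toy illustration: replace a disjunction of possible time windows with a
# single bounding interval. Precision is lost, but any conclusion drawn from
# the weak interval also holds for every disjunct, so inference stays sound.

def hull(disjuncts):
    """Approximate alternative (lo, hi) windows by one bounding window."""
    return (min(lo for lo, _ in disjuncts), max(hi for _, hi in disjuncts))

def definitely_before(win_a, win_b):
    """Sound under the approximation: a must end before b can begin."""
    return win_a[1] < win_b[0]

event_a = hull([(1, 3), (6, 8)])     # ambiguous occurrence time -> (1, 8)
event_b = hull([(10, 12)])           # -> (10, 12)
print(definitely_before(event_a, event_b))   # True, despite lost precision
```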

Read More

Handbook for the Computer Security Certification of Trusted Systems

The Navy has designated the Naval Research Laboratory (NRL) as its Center for Computer Security Research and Evaluation. NRL is actively developing a Navy capability to certify trusted systems. This paper describes the NRL effort to understand assurance, certification, and trusted system certification criteria through the production of the Handbook for the Computer Security Certification of Trusted Systems. Through this effort, NRL hopes to discover new and more efficient ways of satisfying the assurance requirement for a high assurance system.

Read More

The ECA Critical Requirements Model

The ECA is an embedded computing device that processes message traffic for a network that must enforce end-to-end user message confidentiality. The ECA uses a commercial, off-the-shelf cryptographic device to transform sensitive data from the Red Domain of the network so that it can be transmitted over the untrusted communication links of the Black Domain. For transmission purposes, certain parts of a message, namely the message header, must be bypassed around the cryptographic device. The primary critical requirement for the ECA, Restricted Red-to-Black Flow (RRTBF), requires that the bypassed portion of each message must satisfy certain format restrictions, and that the rate of bypass must be constrained. In this report, we present an informal model of the ECA’s critical requirements together with the assumptions under which the model was constructed. We then formalize this model by using the CSP Trace Model of computation.
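
The report's formalization uses the CSP Trace Model; purely as an illustrative stand-in, RRTBF can be read as a predicate over finite traces of bypass events, as in the Python sketch below. The header format, window size, and rate bound are invented placeholders, not the ECA's actual restrictions.

```python
# Illustrative (not NRL's formal model) reading of RRTBF as a trace predicate:
# every bypassed header must pass a format check, and no `window`-second span
# may contain more than `max_bypasses` bypass events.
import re

HEADER_FORMAT = re.compile(r"^HDR:[0-9A-F]{8}$")   # hypothetical format rule

def satisfies_rrtbf(trace, window=1.0, max_bypasses=5):
    """trace: list of (timestamp, header) bypass events, timestamps ascending."""
    times = []
    for t, header in trace:
        if not HEADER_FORMAT.match(header):        # format restriction
            return False
        times.append(t)
        while times and times[0] <= t - window:    # slide the rate window
            times.pop(0)
        if len(times) > max_bypasses:              # rate constraint
            return False
    return True

ok = [(0.1, "HDR:00AF12C3"), (0.4, "HDR:7B00FF10")]
bad = ok + [(0.5, "free-text leak")]               # malformed bypass rejected
print(satisfies_rrtbf(ok), satisfies_rrtbf(bad))   # True False
```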

Read More