- Technical Report
- May 2024
The central conclusions of this effort are that model-based methods for procurement and systems integration are effective for improving systems engineering outcomes: model-based methods identified many of the risks that later manifested in the lab. Most of the time-loss or capability-loss issues encountered in the lab were related either to software configuration or to network configuration. We demonstrated a workable path to successful application of MBSE and ACVIP for major embedded systems integration activities. We observed successful adoption and application of MBSE and ACVIP by FARA performers and successfully applied performer-generated ACVIP artifacts. Galois’s role in OSVD allowed us to contribute model-based virtual integration risk assessments to help reduce risk for physical integration efforts. By applying ACVIP to OSVD, we were able to correctly predict integration risks that were later realized in lab activities (specifically, network configuration errors). In this case, we found that the best predictor of problems in physical asset integration was ambiguity, rather than incompatibility, in design artifacts. The key remaining challenges for broad realization of the benefits of MBSE and ACVIP are managing culture change and scalably deploying digital engineering environments across organizational boundaries.
- Technical Report
- GALOIS-12-22-23
- Dec 2023
Today’s most powerful machine learning approaches are typically designed to train stateless architectures with predefined layers and differentiable activation functions. While these approaches have led to unprecedented successes in areas such as natural language processing and image recognition, the trained models are also susceptible to making mistakes that a human would not make. In this paper, we take the view that true intelligence may require the ability of a machine learning model to manage internal state, but that we have not yet discovered the most effective algorithms for training such models. We further postulate that such algorithms might not necessarily be based on gradient descent over a deep architecture, but rather might work best with an architecture that has discrete activations and few initial topological constraints (such as multiple predefined layers). We present one attempt in our ongoing efforts to design such a training algorithm, applied to an architecture with binary activations and only a single matrix of weights, and show that it is able to form useful representations of natural language text but is also limited in its ability to leverage large quantities of training data. We then provide ideas for improving the algorithm and for designing other training algorithms for similar architectures. Finally, we discuss potential benefits that could be gained if an effective training algorithm is found, and suggest experiments for evaluating whether these benefits exist in practice.
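The abstract describes the architecture only at a high level, so the following is a minimal sketch of what a text encoder with a single weight matrix and binary activations could look like. It is not the training algorithm from the report: the vocabulary and representation sizes, the `encode` and `hebbian_update` names, and the Hebbian-style update standing in for a non-gradient-descent learner are all illustrative assumptions.

```python
# Minimal sketch (not the report's algorithm): a text encoder with a single
# weight matrix and binary activations, updated with a simple Hebbian-style
# rule rather than gradient descent. Sizes and the update rule are assumptions.
import numpy as np

VOCAB_SIZE = 1000   # assumed toy vocabulary size
STATE_SIZE = 256    # assumed width of the binary representation

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(VOCAB_SIZE, STATE_SIZE))  # the single weight matrix

def encode(token_ids):
    """Map a token-id sequence to a binary representation vector."""
    # Sum the weight rows for the observed tokens, then threshold to get
    # discrete (0/1) activations instead of differentiable ones.
    pre_activation = W[token_ids].sum(axis=0)
    return (pre_activation > 0).astype(np.int8)

def hebbian_update(token_ids, lr=0.01):
    """Nudge the rows for observed tokens toward the currently active pattern."""
    state = encode(token_ids)
    for t in token_ids:
        W[t] += lr * (state - 0.5)  # reinforce active units, weaken inactive ones

# Usage: encode a short "sentence" of token ids and apply one update step.
sentence = [12, 7, 431, 880]
print(encode(sentence)[:16])
hebbian_update(sentence)
```

Thresholding the summed rows yields the discrete activations the abstract mentions, and the update adjusts the single matrix without any backpropagated gradient; it is only meant to make the architectural constraints concrete, not to reproduce the paper's results.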