Digital Engineering: From point solutions to trusted processes

In the world of cyber-physical systems, the aim of Digital Engineering (DE) is to speed up the development process while simultaneously improving security, reliability, safety, and performance. The core mechanism enabling this outcome is a refinement-based design and implementation process whereby high-level requirements and reference architectures are refined into low-level requirements and system designs, which are then refined further into code. What makes this approach so appealing is the early identification and analysis of derived requirements that facilitate:

  • Integration and Interoperability: Formal descriptions of interfaces, data exchange formats, and functional contracts enable capturing and reasoning about inter-component dependencies before the components are designed, built, and integrated (see the contract sketch after this list).
  • Test- and Architecture-Driven Development: Architectural diagrams and functional contracts can be used to automatically create and test implementation code.
  • Accelerated Compliance: Code-level validation artifacts are derived from and are traceable to models and the initial requirements.  
  • Rapid Modernization: Legacy systems often lack the documentation and institutional knowledge critical for streamlined updates. Model-based designs provide a precise and standardized mechanism for capturing system requirements and blueprints.
  • Maintenance and Sustainment: Automated tooling and workflows can be leveraged to evaluate the impact of an architecture or implementation change. 
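
As a concrete illustration, here is a minimal Python sketch of what a machine-readable functional contract for a component interface might look like, and how it could be checked automatically before integration. The component, port names, and predicates are hypothetical, and the encoding is not tied to any particular modeling tool.

```python
# A minimal sketch (not tied to any specific modeling tool) of a
# machine-readable functional contract and how it can drive automated
# checks before components are integrated.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Port:
    name: str
    data_type: str          # e.g. "float64"
    unit: str = ""          # e.g. "meters"


@dataclass
class FunctionalContract:
    component: str
    inputs: List[Port] = field(default_factory=list)
    outputs: List[Port] = field(default_factory=list)
    # Assumptions the component makes about its inputs, and guarantees it
    # offers on its outputs, expressed as named predicates over a sample.
    assumptions: Dict[str, Callable[[dict], bool]] = field(default_factory=dict)
    guarantees: Dict[str, Callable[[dict], bool]] = field(default_factory=dict)

    def check(self, sample: dict) -> List[str]:
        """Return the names of any assumptions or guarantees the sample violates."""
        violations = [f"assume:{n}" for n, p in self.assumptions.items() if not p(sample)]
        violations += [f"guarantee:{n}" for n, p in self.guarantees.items() if not p(sample)]
        return violations


# Hypothetical contract for an altitude filter component.
altitude_filter = FunctionalContract(
    component="AltitudeFilter",
    inputs=[Port("raw_altitude", "float64", "meters")],
    outputs=[Port("smoothed_altitude", "float64", "meters")],
    assumptions={"raw_in_range": lambda s: -500.0 <= s["raw_altitude"] <= 20000.0},
    guarantees={"smoothed_in_range": lambda s: -500.0 <= s["smoothed_altitude"] <= 20000.0},
)

print(altitude_filter.check({"raw_altitude": 1200.0, "smoothed_altitude": 1195.0}))  # -> []
```

Because the assumptions and guarantees are executable, the same contract can later seed integration checks and generated tests.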

In other words, a standardized digital engineering approach should trade a small amount of up-front cost in the modeling and analysis phase for huge savings in implementation, maintenance and sustainment. It’s important to note that digital engineering isn’t a modeling paradigm or analysis framework. It is a workflow and a process comprising numerous design and validation activities. 

There are myriad tools and point solutions that plug into this process, but no single tool is a panacea. Only when tools are used in aggregate, as part of a well-defined and well-understood workflow and methodology, are the full benefits of digital engineering unlocked. The challenge we face today is that early adopters of digital-first approaches are tasked with both building a bridge and crossing it. Different groups use different modeling paradigms and analysis techniques, resulting in siloed toolchains that rarely interoperate.

So how are systems engineers and project leaders to navigate this maze of promise and pitfalls? The answer lies in adopting a coherent and trusted digital engineering process rather than falling for the allure of disparate point solutions that fail to work together. Building a trusted process—much like the model-based work of digital engineering itself—requires one to first take a step back and break the problem down into its component parts.

A successful digital-first development process is necessarily composed of four key ingredients: 

  1. Generating a model
  2. Analyzing and drawing conclusions from the model 
  3. Leveraging the model to implement the system 
  4. Maintaining the model and implementation throughout the system’s entire life cycle

While the specific tools and techniques used in each of these steps can vary, each must function well both standing on its own and as part of the whole process. In short: it’s important to get the right ingredients in order to make the recipe work as a whole. Below, I walk through the state of each of these ingredients, highlighting current challenges and opportunities. Taken together, this outline should serve as a roadmap of best practices for building a robust, trustworthy, digital-first development pipeline.

Modeling 

Models are the foundation on which all digital engineering capabilities are built. Their purpose is to provide a precise and unambiguous description of what a system does and how all of its parts work together. As such, models foster more effective communication between vendors and government agencies, enabling more rapid integration and better traceability to system requirements. Further, models enable automated analysis of system properties before a line of code ever gets written. This supports more informed trade-space analysis, fewer software rewrites, and better change impact analysis.

At least, well-constructed models do all those things. Sadly, many of the models being written today don’t actually deliver on these promises. A poorly constructed model serves no greater purpose than a PowerPoint slide. If models are not underpinned by well-understood or standardized semantics, they fail to achieve their mission. Models that can’t be understood by both humans and machines don’t promote communication and don’t enable analysis.

Part of the reason we don’t have perfect models is that we don’t have perfect modeling paradigms. By far the most popular modeling language used today is SysMLv1, a graphical-only modeling framework that ships with abstract modeling constructs that must be specialized for a particular domain. This specialization happens through profiles: extensible annotations that get embedded within the core constructs of SysMLv1. With profiles, each vendor produces models in a different dialect that often isn’t interpretable by the humans or machines attempting to work with them.

At the other end of the spectrum, AADL is a standardized modeling language for embedded and cyber-physical systems that comes with rich semantics. It defines detailed part types and behaviors as well as an execution semantics for the underlying software constructs. Unfortunately, it has seen limited adoption and has no commercially supported development environment.

Enter SysMLv2, a freshly minted language that offers a best-of-both-worlds approach. It has a deep semantic foundation, both graphical and textual representations, and is already being supported commercially. SysMLv2 presents a perfect opportunity for the community to rally around a common and well-understood modeling paradigm. As adoption grows, we have a fresh opportunity to teach modeling approaches that are amenable to the vision of digital engineering.

Analyzing Models

The reason to generate models is that they enable automated analysis. But analysis doesn’t come out of the box, and it doesn’t come for free. Models exist at varying levels of fidelity. A baseline model should describe the parts of a system, their interfaces, and their connections. Most analysis capabilities require an extension of that baseline with meta-properties. For example, to perform a Size, Weight, and Power (SWaP) analysis, a model needs to annotate components with properties such as dimensions, weight, and wattage requirements. Each analysis tool tends to require a different set of meta-properties. Sometimes these meta-properties are shallow annotations; sometimes they are encoded in a rich domain-specific language (DSL). Creating a full-fidelity model that can be analyzed by a broad set of tools may therefore require learning multiple DSLs. For instance, in AADL there are at least three different languages used to specify behaviors: the AGREE contract language, the GUMBO contract language, and the AADL behavior annex.
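
To make this concrete, here is a minimal Python sketch of a baseline structural model extended with SWaP meta-properties and a simple roll-up analysis over them. The component names, property values, and power budget are hypothetical.

```python
# A minimal sketch of extending a baseline structural model with SWaP
# meta-properties and rolling them up across the component hierarchy.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Component:
    name: str
    weight_kg: float = 0.0     # meta-property needed for SWaP
    power_w: float = 0.0       # meta-property needed for SWaP
    volume_cm3: float = 0.0    # meta-property needed for SWaP
    subcomponents: List["Component"] = field(default_factory=list)

    def rollup(self) -> dict:
        """Sum SWaP properties over this component and all of its children."""
        totals = {"weight_kg": self.weight_kg, "power_w": self.power_w, "volume_cm3": self.volume_cm3}
        for sub in self.subcomponents:
            child = sub.rollup()
            for key in totals:
                totals[key] += child[key]
        return totals


# Hypothetical subsystem with two children.
avionics = Component("AvionicsBay", subcomponents=[
    Component("FlightComputer", weight_kg=2.1, power_w=45.0, volume_cm3=3200.0),
    Component("RadioModule",    weight_kg=0.8, power_w=12.5, volume_cm3=900.0),
])

totals = avionics.rollup()
assert totals["power_w"] <= 100.0, "power budget exceeded"   # hypothetical budget
print(totals)
```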

The key to adoption here is minimizing the effort required to move from a baseline model to an analyzable model. There are two main ingredients needed to get there. First, analysis tool vendors can’t each build their own annotation framework and DSL for specifying properties and behaviors. Modelers should only need to learn one specification framework, and the properties in that framework should be reusable across as many analysis capabilities as possible. Second, moving to higher- and higher-fidelity models is a complex process with many tradeoffs. As a community, we need to develop tooling that guides a user through this process and minimizes the learning curve required. Think of this as Clippy for systems engineering. Modelers should benefit from tool-assisted recommendations, templates, and predefined actions, all tailored to their particular objective.
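
A toy Python sketch of what that guidance could look like: given the analysis a modeler wants to run, report which meta-properties are still missing on each component. The analysis names and required-property sets are hypothetical.

```python
# A toy sketch of "guided" model enrichment: for a chosen analysis, list the
# meta-properties each component still needs before the analysis can run.
REQUIRED_PROPERTIES = {
    "swap":    {"weight_kg", "power_w", "volume_cm3"},
    "latency": {"wcet_ms", "period_ms"},
}


def missing_properties(component: dict, analysis: str) -> set:
    """Return the meta-properties the component still needs for the analysis."""
    return REQUIRED_PROPERTIES[analysis] - set(component.get("properties", {}))


# Hypothetical partially annotated model.
model = [
    {"name": "FlightComputer", "properties": {"weight_kg": 2.1, "power_w": 45.0}},
    {"name": "RadioModule",    "properties": {"wcet_ms": 3.0}},
]

for comp in model:
    todo = missing_properties(comp, "swap")
    if todo:
        print(f"{comp['name']}: add {sorted(todo)} before running SWaP analysis")
```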

Connecting Models to Code

Once a system design has been tuned and vetted, it’s time to implement that design in a set of hardware and software components. Used correctly, models serve as a blueprint for the devices and software that need to be developed. These blueprints enable more rapid software engineering through the parallel development made possible by tightly specified modular components. Unfortunately, in most cases code is written by hand and the connection to the model remains only in spirit. That is, none of the verification and validation activities that take place at the model level get translated down to the software and hardware. If models and implementations continue to live in isolated worlds, we only scratch the surface of the efficiency that is possible to achieve.

Analysis that happens at the model level should come with the ability to generate V&V artifacts that apply to software. For instance, consider a simple model of a thermostat controller that activates a heater when the temperature dips below a certain threshold. At the model level, the specification of this behavior enables validation of system-wide properties that rely on the functionality of the thermostat. But these behavior specifications are also sufficient ingredients for generating software-level test cases that ensure the implementation of the controller meets the assumptions other system components rely on.
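
A minimal Python sketch of that idea, assuming a hypothetical threshold value and implementation under test: the model-level behavior specification is turned into a handful of concrete test vectors that the implementation must satisfy.

```python
# A minimal sketch of deriving software-level tests from a model-level
# behavior specification of the thermostat. The threshold and the
# implementation under test are hypothetical.
HEAT_ON_THRESHOLD_C = 18.0   # value assumed to come from the model-level spec


def heater_command(temperature_c: float) -> bool:
    """Implementation under test: returns True when the heater should run."""
    return temperature_c < HEAT_ON_THRESHOLD_C


def generated_test_cases():
    """Test vectors derived from the spec: heater on below the threshold, off at or above it."""
    return [
        (HEAT_ON_THRESHOLD_C - 5.0, True),
        (HEAT_ON_THRESHOLD_C - 0.1, True),
        (HEAT_ON_THRESHOLD_C,       False),
        (HEAT_ON_THRESHOLD_C + 5.0, False),
    ]


def test_thermostat_contract():
    for temperature, expected in generated_test_cases():
        assert heater_command(temperature) == expected, f"contract violated at {temperature} degC"


test_thermostat_contract()
print("thermostat implementation satisfies the model-level contract")
```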

Another mechanism for building a tight correspondence between model and implementation is code generation. Tools like Ansys Scade enable certified code generation for numerical computations and real-time components, while tools like Tangram Flex generate interface libraries that make it faster to connect two components designed with different abstraction boundaries. There are also tools such as HAMR by Kansas State University that automatically generate code for threading and communication infrastructure that can be deployed to a variety of real-time operating systems and platforms. The challenge with these tools is that each serves a slightly different slice of the problem, and each requires learning a different high-fidelity modeling paradigm and DSL. Again, the key to long-term success is to reduce specification overhead through tighter integration and consolidation of code generation capabilities.
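
To illustrate the general pattern (this is not the workflow of any of the tools named above), here is a toy Python sketch that emits a typed message class from a small, hypothetical interface specification, so that producer and consumer components agree on the interface by construction.

```python
# A toy illustration of interface code generation: from a small message
# specification, emit a typed Python dataclass that two components can share.
# The spec, field names, and types are hypothetical.
MESSAGE_SPEC = {
    "name": "HeaterCommand",
    "fields": [("enable", "bool"), ("setpoint_c", "float")],
}


def generate_dataclass(spec: dict) -> str:
    """Emit Python source for a frozen dataclass matching the message spec."""
    lines = [
        "from dataclasses import dataclass",
        "",
        "@dataclass(frozen=True)",
        f"class {spec['name']}:",
    ]
    lines += [f"    {field_name}: {field_type}" for field_name, field_type in spec["fields"]]
    return "\n".join(lines)


generated_source = generate_dataclass(MESSAGE_SPEC)
print(generated_source)

# In practice the generated source would be written to a shared library; here
# we simply load it and construct an instance as a quick sanity check.
namespace: dict = {}
exec(generated_source, namespace)
cmd = namespace["HeaterCommand"](enable=True, setpoint_c=21.5)
print(cmd)
```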

Model Maintenance

One of the elephants in the room is that models need to be maintained over time. In the case of weapons platforms, the lifespan of a system can stretch across multiple decades. During this time, both software and hardware may experience deep technical paradigm shifts, such as the integration of software components written in modern programming languages. The presence of models and a modular system architecture enables these updates to happen more rapidly. But there is a catch: the models themselves need to be updated and maintained along with the system. Early tooling servicing this problem space, such as Taphos, is just beginning to emerge.

Taphos is a tool for generating, synchronizing, and enriching models by analyzing the underlying code. Given a minimal user description of what defines a component boundary, Taphos can translate the underlying code into a high-fidelity model, automatically grouping functions and data types into modular components with well-defined interfaces. If these high-fidelity models can also be aligned with the formats needed by code generators, we can begin to automate the update process. Legacy monolithic code can be lifted into modular component models through Taphos, and modern code can be generated directly from those artifacts.
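
For intuition, here is a highly simplified, hypothetical Python illustration of the general idea of lifting code into component models: group functions by a user-supplied boundary rule and derive each component’s required interface from cross-boundary calls. It does not reflect Taphos’s actual interface or algorithms.

```python
# A highly simplified, hypothetical illustration of lifting legacy code into a
# component model from a call graph and a user-supplied boundary rule.
from collections import defaultdict

# Call graph extracted from a legacy code base (hypothetical).
CALL_GRAPH = {
    "nav_update":      ["nav_kalman_step", "bus_write"],
    "nav_kalman_step": [],
    "bus_write":       [],
    "ui_render":       ["bus_write"],
}


def component_of(function_name: str) -> str:
    """User-supplied boundary rule: assign a function to a component by prefix."""
    return function_name.split("_")[0]   # "nav_update" -> "nav"


def lift_components(call_graph: dict) -> dict:
    """Group functions by component and record cross-component calls as required interfaces."""
    components = defaultdict(lambda: {"functions": set(), "requires": set()})
    for caller, callees in call_graph.items():
        comp = component_of(caller)
        components[comp]["functions"].add(caller)
        for callee in callees:
            if component_of(callee) != comp:
                components[comp]["requires"].add(callee)
    return dict(components)


for name, info in lift_components(CALL_GRAPH).items():
    print(name, "functions:", sorted(info["functions"]), "requires:", sorted(info["requires"]))
```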

Putting it All Together

With the right tools, techniques, and methodologies in place, these ingredients can be combined to form a digital-first development process that allows for faster design and code iteration, improved security, reliability, safety, and performance, and lower costs over the long run. The gains grow with complexity: the more complex a system is, the more difficult design, maintenance, and sustainment become. A complex cyber-physical system such as a military aircraft is composed of thousands of interlocking puzzle pieces: machine parts and sensors, standards and norms, hardware and software. In an aircraft, the software alone will comprise hundreds, if not thousands, of subcomponents implemented by dozens of distinct vendors and development teams. All of these pieces, but particularly the software, will need to be continuously updated and improved over the system’s life cycle.

Using a digital-first approach, systems engineers can leverage SysMLv2 to create a model of such an aircraft, capturing the entire system design with a high degree of precision. This model would provide a blueprint for the build-out and allow for automated (and continuous) analysis of the design. This analysis can happen before a line of code is written or a piece of hardware is fabricated, enabling more rapid exploration of design-space tradeoffs. With an optimized system blueprint in hand, code generators such as Ansys Scade and Tangram Flex can be used to automatically generate certifiable code for the required subsystems. Finally, in the months and years ahead, Taphos can be deployed to ensure the models and code remain synchronized. Ultimately this leads to greater agility for the platform, enabling faster and lower-cost updates that are digitally designed, integrated, and validated.

Each step yields remarkable results, yet each is tightly coupled with the tools and techniques chosen in the other parts of the process. Rather than relying on a high volume of ad hoc manual effort (as is common with far too many complex systems engineering projects), our goal is to define this methodology and process once, unlocking efficiency, quality, and cost savings for everyone over the long term.