Schrödinger’s CRCs (Fast Abstract)

Dr. Lee Pike’s short abstract revisits the fault tolerance of cyclic redundancy checks (CRCs), expanding on the work of Driscoll et al. It introduces the concepts of Schrödinger-Hamming weight and Schrödinger-Hamming distance, and argues that under a fault model in which stuck-at-one-half or slightly-out-of-spec faults dominate, current methods for computing the fault detection of CRCs may be over-optimistic.
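To make the idea concrete, here is a minimal Haskell sketch of one plausible reading of a Schrödinger-Hamming weight (my illustration, not code from the abstract): if a bit may be stuck at neither logical value, a word’s Hamming weight is only known as an interval over all resolutions of the indeterminate bits.

data SBit = Zero | One | Half   -- Half: a stuck-at-one-half or out-of-spec bit
  deriving Eq

-- The weight is an interval: every Half bit may resolve either way.
sWeight :: [SBit] -> (Int, Int)
sWeight bs = ( length (filter (== One) bs)     -- all Half bits read as 0
             , length (filter (/= Zero) bs) )  -- all Half bits read as 1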

Read More

Concurrent Orchestration in Haskell

John Launchbury presented the Orc language for concurrent scripting at the 2010 Haskell Workshop in Baltimore.

Concurrent Orchestration in Haskell
John Launchbury
Trevor Elliott

The talk slides are available in PDF or online.

We present a concurrent scripting language embedded in Haskell, emulating the functionality of the Orc orchestration language by providing many-valued (real) non-determinism in the context of concurrent effects. We provide many examples of its use, as well as a brief description of how we use the embedded Orc DSL in practice. We describe the abstraction layers of the implementation, and use the fact that we have a layered approach to establish and demonstrate algebraic properties satisfied by the combinators.

Read More

Copilot: a DSL for Monitoring Embedded Systems

Introducing Copilot

Can you write a list in Haskell? Then you can write embedded C code using Copilot. Here’s a Copilot program that computes the Fibonacci sequence (over Word64s) and tests for even numbers:

fib :: Streams
fib = do
  "fib" .= [0,1] ++ var "fib" + (drop 1 $ varW64 "fib")
  "t"   .= even (var "fib")
  where
    even :: Spec Word64 -> Spec Bool
    even w = w `mod` const 2 == const 0

Copilot contains an interpreter and a compiler, and uses a model checker to check the correctness of your program. The compiler generates constant-time and constant-space C code via Tom Hawkins’s Atom language (thanks Tom!). Copilot is specifically developed for writing embedded software monitors for more complex embedded systems, but it can be used to develop a variety of functional-style embedded code. Executing

> compile fib "fib" baseOpts

generates fib.c and fib.h (with a main() for simulation—other options change that). We can then run

> interpret fib 100 baseOpts

to check that the Copilot program does what we expect. Finally, if we have CBMC installed, we can run

> verify "fib.c"

to prove a bunch of memory-safety properties of the generated program. Galois has open-sourced Copilot (BSD3 license). More information is available on the Copilot homepage. Of course, it’s available from Hackage, too.

Flight of the Navigator

[Photos: Aberdeen Farms entrance; view of the James River; pitot tube on the test aircraft; our testbed stack of four STM32 microcontrollers (ARM Cortex M3s), an SD card for logging data, an air pressure sensor, and a voltage regulator; Sebastian installing the stack.]

Copilot took its maiden flight in August 2010 in Smithfield, Virginia. NASA rents a private airfield for test flights like this, but you have to get past the intimidating sign posted upon entering the airfield. Once you arrive, however, there’s a beautiful view of the James River.

We were flying on an RC aircraft that NASA Langley uses to conduct a variety of Integrated Vehicle Health Management (IVHM) experiments. (It coincidentally had Galois colors!)  Our experiments for Copilot were to determine its effectiveness at detecting faults in embedded guidance, navigation, and control software.  The testbed we flew was a partially fault-tolerant pitot tube (air pressure) sensor.  Our pitot tube sat at the edge of the wing.  Pitot tubes are used on commercial aircraft, and they’re a big deal: a number of aircraft accidents and mishaps have been due, in part, to pitot tube failures.

Our experiment consisted of a beautiful hardware stack, crafted by Sebastian Niller of the Technische Universität Ilmenau.  Sebastian also led the programming for the stack.  The stack consisted of four STM32 ARM Cortex M3 microprocessors.  In addition, there was an SD card for writing flight data, and power management. The stack just fit into the hull of the aircraft. Sebastian installed our stack in front of another stack used by NASA on the same flights.

The microprocessors were arranged to provide Byzantine fault tolerance on the sensor values.  One microprocessor acted as the general, receiving inputs from the pitot tube and distributing those values to the other microprocessors.  The other microprocessors would exchange their values and perform a fault-tolerant vote on them.  Granted, the fault tolerance was for demonstration purposes only: all the microprocessors ran off the same clock, and the sensor wasn’t replicated (we’re currently working on a fully fault-tolerant system). During the flight tests, we injected faults (in software) by having intermittently incorrect sensor values distributed to various nodes.

The pitot sensor system (including the fault-tolerance code) is a hard real-time system, meaning events have to happen at predefined deadlines. We wrote it in a combination of Tom Hawkins’s Atom, a Haskell DSL that generates C, and C directly.

Integrated with the pitot sensor system are Copilot-generated monitors. The monitors detected

  • unexpected sensor values (e.g., the delta change is too extreme),
  • the correctness of the voting algorithm (we used Boyer-Moore majority voting, which returns the majority only if one exists; our monitor checked whether a majority indeed exists; see the sketch after this list), and
  • whether the majority votes agreed.
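For readers unfamiliar with it, here is a minimal Haskell sketch of Boyer-Moore majority voting (an illustration of the algorithm, not the flight code): a first pass finds the only possible majority candidate, and a second pass confirms that it really is a majority.

import Data.List (foldl')

-- One pass to find the candidate, one pass to confirm it.
majority :: Eq a => [a] -> Maybe a
majority xs
  | isMajority cand = cand
  | otherwise       = Nothing
  where
    (cand, _) = foldl' step (Nothing, 0 :: Int) xs
    step (_, 0) x = (Just x, 1)
    step (c, n) x
      | c == Just x = (c, n + 1)
      | otherwise   = (c, n - 1)
    isMajority (Just c) = 2 * length (filter (== c) xs) > length xs
    isMajority Nothing  = False

The second pass matters here: a vote among faulty nodes may have no majority at all, which is exactly the side condition our monitor checked online.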

The monitors integrated with the sensor system without disrupting its real-time behavior. We gathered data on six flights; in between flights, we’d get the data from the SD card.

We took some time to pose with the aircraft. The Copilot team, from left to right: Alwyn Goodloe, National Institute of Aerospace; Lee Pike, Galois, Inc.; Robin Morisset, École Normale Supérieure; and Sebastian Niller, Technische Universität Ilmenau. Robin and Sebastian are Visiting Scholars at the NIA for the project. Thanks for all the hard work!

There were a bunch of folks involved in the flight test that day, and we got a group photo with everyone. We are very thankful that the researchers at NASA were gracious enough to give us their time and resources to fly our experiments. Thank you!

Finally, here are two short videos. The first is of our aircraft’s takeoff during one of the flights. Interestingly, it has an electric engine to reduce the effects of engine vibration on the experiments.

[Video: aircraft takeoff, from Galois Video on Vimeo.]

The second is of AirStar, which we weren’t involved in, but that also flew the same day. AirStar is a scaled-down jet (yes, jet) aircraft that was really loud and really fast. I’m posting its takeoff, since it’s just so cool. That thing was a rocket!

[Video: AirStar takeoff, from Galois Video on Vimeo.]

More Details

Copilot and the flight test are part of a NASA-sponsored project (NASA press release) led by Lee Pike at Galois.  It’s a three-year project, and we’re currently in the second year.

Even More Details

Besides the language and flight test, we’ve written a few papers:

  • Lee Pike, Alwyn Goodloe, Robin Morisset, and Sebastian Niller. Copilot: A Hard Real-Time Runtime Monitor. To appear in the proceedings of the 1st Intl. Conference on Runtime Verification (RV’2010), 2010. Springer.

This paper describes the Copilot language.

Byzantine faults are fascinating.  Here’s a 2-page paper that shows one reason why.

At the beginning of our work, we tried to survey prior results in the field and discuss the constraints of the problem.  This report is a bit lengthy (almost 50 pages), but it’s a gentle introduction to our problem space.

Yes, QuickCheck can be used to test low-level protocols.

A short paper motivating the need for runtime monitoring of critical embedded systems.

You’re Still Interested?

We’re always looking for collaborators and users, and we may need one or two visiting scholars interested in embedded systems and Haskell next summer.  If any of these interest you, drop Lee Pike a note (hint: if you read any of the papers or download Copilot, you can find my email address).

Read More

Cabal and Licenses

When writing software that uses open source libraries, the license of a dependency is a real concern. It becomes necessary to watch for license compatibility, as well as to ensure that the terms of the license are satisfied when doing a source distribution.

As a first attempt at license-compatibility checking, we have added some extra checks in the configure step for Cabal, so that warnings will be generated if any direct dependencies have licenses that conflict with that of the configured package [1]. However, there are some limitations to this approach. First, it doesn’t rule out the possibility that a dependency of a package that is deemed compatible will conflict with the license of your executable. Second, it doesn’t allow for a BSD3 library to depend on a GPL library, where the conflict only arises when an executable is produced from the combination.

Some licenses place requirements on how a source or binary distribution can happen. For example, the BSD3 license requires you to include its copyright notice in any distribution you make. In order to speed up this process, we decided to try to extract this information from the GHC package database. According to the GHC manual, the license file should be tracked by the package database, though upon closer inspection, that field doesn’t seem to be tracked. In order to address this shortcoming, we developed a small patch to add this functionality to GHC and Cabal [2]. Using this patched version of GHC and Cabal, we’ve developed a small tool to walk the dependencies of a cabal package and collect the license files that they have registered in the package database.

Both of these patches are available for you to try out, attached to the tickets below, and they have been accepted and are expected to make it into the next release of Cabal.

[1] http://hackage.haskell.org/trac/hackage/ticket/481
[2] http://hackage.haskell.org/trac/hackage/ticket/710
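As a rough illustration of the dependency walk (a minimal sketch of my own, not the patched tool: it assumes an unmodified ghc-pkg, only looks at direct dependencies, and prints license names rather than collecting license files; "orc" is just an example package):

import System.Process (readProcess)

-- Query a field of an installed package via ghc-pkg.
field :: String -> String -> IO [String]
field pkg f = do
  out <- readProcess "ghc-pkg" ["field", pkg, f] ""
  -- Output looks like "depends: base-4.2.0.0 ..."; drop the field label.
  return (drop 1 (words out))

main :: IO ()
main = do
  deps <- field "orc" "depends"
  mapM_ (\d -> field d "license" >>= \l ->
           putStrLn (d ++ ": " ++ unwords l)) deps

Walking the full transitive closure would simply recurse through each dependency’s own depends field; the patched GHC and Cabal additionally record where each license file lives.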

Read More

White Paper: High Assurance Software Development

Galois is pleased to announce a new white paper entitled High Assurance Software Development, written by David Burke, Joe Hurd, and Aaron Tomb.

The purpose of this paper is to describe how to make software assurance part of a science of security. Software assurance as practiced is a grab-bag of techniques, heuristics, and lessons learned from earlier failures. Given the importance of software to critical infrastructure (electricity, banking, medicine), this is an untenable situation; the smooth functioning of our society depends on this software, and we need a more rigorous foundation for assessments of the trustworthiness of these systems.

In this paper we present an evidence-based approach to high assurance, in which diverse development teams can communicate in a common language to tackle the challenges of developing secure systems. Furthermore, this framework supports formal inference techniques (in particular, a trust-relationship analysis), so that we can use automated reasoning to deal with scalability issues. Perhaps most importantly, an evidence-based approach lets us tailor the tools that we bring to bear on each claim: formal methods, testing, configuration management, and so forth all have their place in an assurance argument. In the end, it’s all about deploying systems where the residual risk has been minimized, given finite resources and time. Understanding this and managing it effectively is what the science of security is all about.

Read More

Galois, Inc. Wins Three Department of Energy Small Business Research Awards

Galois, Inc., a Portland, Oregon computer science R&D company, has been awarded two 2010 Phase I SBIR research awards and one 2010 Phase II award from the US Department of Energy Office of Advanced Scientific Computing Research, to conduct research into high-performance computing infrastructure.

A Deployable, Robust File System for Parallel I/O

When considering high-performance parallel computers, it is easy to overlook the importance of disk storage. In this research, Galois will address the topic of disk storage for parallel computers, and create a deployable, robust file system that will reduce downtime due to faults and increase productivity through improved system performance. Galois will take a synthesis approach, combining several strands of existing research on file systems and transitioning that work into a robust, fully featured product. In doing so, we will use modern formal-methods research, in the form of model checking, to validate our design and improve the reliability of our implementation. The benefits of this research will be to improve the efficiency and decrease the cost of large, parallel file systems. This work will be applicable to Department of Energy laboratories, as well as to commercial users of massive parallel or distributed storage, such as online storage and backup providers or grid storage providers.

This project builds on Galois’ experience with industrial model checking, and our prior work on file system design and implementation via formal methods.

Improved Symbol Resolution for Portable Build Systems

Modern high-performance computing uses a variety of different hardware and software platforms. These differences make it difficult to develop reusable components, which leads to a significant decrease in productivity. This project will investigate the design of portable build systems that are simple, yet sufficiently robust with respect to symbol resolution, so that they are able to adapt and build software across many different platforms. This project will result in increased productivity for software developers who design portable software components. In particular, we anticipate significant benefits in the area of high-performance computing, where the multitude of different hardware and software platforms makes the problem of reusing software particularly acute.

This work takes advantage of Galois’ background in domain-specific language design and build systems, in particular Cabal and other system-configuration software.

Collaboration and Sharing on the Grid

The goal of the “Grid 2.0” project is to improve the ability of a distributed team of researchers to collaborate on research using grid middleware computing infrastructure. In Phase I of this project, we developed a prototype integration of a typical collaboration-oriented web application with an open source data grid middleware system, establishing that such integration is feasible. In Phase II, we directly address the weakest point for collaboration applications on grid systems: open, standardized protocols for identity management, authorization, and delegation on the grid, via a federated identity management system providing support for software authorization and delegation on the Open Science Grid. The intent is to provide a secure, open, and flexible identity management system for use on grid infrastructure projects, portable to other grid middleware systems, and interoperable with existing identity management schemes. The open source results of the research will form the basis for applications of identity management systems in commercial cloud and grid systems.

This project builds on Galois’ experience with cross-domain collaboration tools and secure identity management systems (including OpenID, OAuth, SAML, and X.509) developed for several clients over the past decade.

For more information about these projects, contact Don Stewart (dons@galois.com).

Read More

Orc in Haskell, now on Hackage

Orc is a concurrent scripting language, now available as an embedded DSL in Haskell. I like to think of Orc as the combination of three things: many-valued concurrency, external actions (effects), and managed resources, all packaged in a high-level set of abstractions that feel more like scripting than programming. It provides a very flexible way to manage multiple concurrent actions, like querying remote web sites, along with timeouts and default actions.

Source code is available on Hackage; the easiest way to access it is with cabal (i.e., ‘cabal install orc’).

Also available is a draft paper entitled Concurrent Orchestration in Haskell, which explains how to use Orc, as well as describing the implementation in detail.

Feedback welcome. Enjoy!
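As a postscript, here is a tiny sketch of the flavor of Orc scripts (my own example, not from the package; it assumes the package’s top-level Orc module, an Alternative instance where <|> runs both branches concurrently, a MonadIO instance for lifting IO actions, and a printOrc driver; the web queries are faked with putStrLn):

import Orc
import Control.Applicative ((<|>))
import Control.Monad.IO.Class (liftIO)

-- Pretend to query a site; each branch publishes one answer.
search :: String -> Orc String
search site = do
  liftIO (putStrLn ("querying " ++ site))
  return (site ++ ": 42 results")

-- Race the two queries; both answers are published as they arrive.
main :: IO ()
main = printOrc (search "google" <|> search "bing")

Timeouts and defaults follow the same pattern: compose the real action in parallel with a branch that waits and then publishes a fallback value.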

Read More

Galois Tech Talks, now on Vimeo!

For a number of years, Galois Inc. has been organizing technical seminars presented by visiting researchers, Galois engineers, and members of the vibrant Portland technical community. The seminars span a wide variety of topics, ranging from functional programming, formal methods, and compiler and language design, to cryptography and operating system construction. The talks are free and open to the interested public. Announcements of upcoming talks are posted to this blog about a week in advance.

Over the last few months we have received a number of requests to share videos of the talks with the wider community. As a result, we are very pleased to announce the Galois tech talk channel on Vimeo. Recent Galois talks should become available over the next few weeks, followed by future presentations. Enjoy!

Galois tech talk channel: http://vimeo.com/channels/galois

Read More

Solving n-Queens in Cryptol

The eight queens puzzle asks how to place eight queens on a chess board such that none of them attacks any other. The problem easily generalizes to n queens, using an n×n board. In this post, we’ll see how to solve the n-Queens puzzle in Cryptol, without lifting a finger!

Representing the board

It is easy to see that any solution to the n-Queens puzzle will have precisely one queen on each row and column of the board. Therefore, we can represent a solution as a sequence of n row numbers, corresponding to subsequent columns. For instance, the board pictured below (taken from the Wikipedia page) can be represented by the sequence

3  7  0  2  5  1  6  4
recording the row of the queen appearing in each consecutive column, starting from the top-left position. (And yes, we always count from 0!)
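As a quick sanity check of this representation (a Haskell sketch of my own, since the post’s solver is in Cryptol): a candidate sequence is a valid placement exactly when no two queens share a row or a diagonal.

-- A placement is one row index per column; it is valid when all rows
-- are distinct and no pair of queens sits on a common diagonal.
valid :: [Int] -> Bool
valid qs = and
  [ qi /= qj && abs (qi - qj) /= j - i
  | (i, qi) <- zip [0 ..] qs
  , (j, qj) <- zip [0 ..] qs
  , i < j
  ]

-- The board above: valid [3,7,0,2,5,1,6,4] == True

In Cryptol, the same constraint becomes a specification that the solver searches against directly, which is presumably what “without lifting a finger” alludes to.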

Read More

POPL 2010, Day 1

Here are some talks that caught my attention on the opening day of POPL 2010.

Reconfigurable Asynchronous Logic Automata, Neil Gershenfeld

In this invited talk, Neil proposed that we throw out the sequential model of computation from Turing and von Neumann, and re-imagine computing from the ground up, more along the lines of physics as it is manifested in the world. This might be the way forward, given the huge power demands that will arise from traditional architectures as they become more and more powerful. Rather than imposing a top-down structure on the machine, Neil proposes we consider systems built up from cellular automata. In ALA each cell is able to do a basic logic operation or, in RALA, reconfigure other cells. Built around 2D grids, and computing in parallel without any global clock, the automata are able to compute many basic functions in linear time, including multiplication and sorting (of course, they also take a linear number of cells to do so).

What I liked about this talk is that it called on me to think more broadly about the nature of computation. I don’t think I buy the line that how we do computing today is wrong. But I do believe that computing and information theory are going to become more and more important in the development of many other subjects, including physics and biology. I also believe that we will need to harness many more structures to do computing rather than simply relying on gates burned in silicon. Coincidentally, at lunch today Luca Cardelli was telling me about his explorations in computing with DNA. Fascinating stuff!

At the lowest levels, the primitive components of computation reflect the computational substrate, so they are very different from one instantiation to another. As primitive computational elements become combined, however, the different descriptions become more abstract, and start to look more and more alike. The same mathematical operations may arise in each form, for example. That makes me wonder what the space of truly computationally-neutral specifications looks like: how might we program in such a way that we really don’t have a preference for what substrate we map the computation onto? We already consider FPGAs, GPUs, and CPUs as standard targets for languages like Cryptol. Let’s conceptually add DNA and RALA to the list, and see if that would have us change anything.

A Simple, Verified Validator for Software Pipelining, Jean-Baptiste Tristan & Xavier Leroy

Software pipelining is a compiler transformation that reorders loop code to make more efficient use of the CPU and memory accesses. Code from many different iterations of the loop might be drawn together and overlapped to execute out of order, and perhaps in parallel. The question is how to verify that the pipelining has been done correctly.

The paper proposes using symbolic evaluation (a toy illustration appears at the end of this post). It describes an equivalence principle which requires proving two separate commutativity properties. These may both be shown by symbolic evaluation of the instruction code, being careful to handle the small differences (such as different temporary variables in the two paths). Overall, the process is simple enough that it could be a standard component of any compiler, even if the compiler is non-verifying overall.

A Verified Compiler for an Impure Functional Language, Adam Chlipala

Handling object-level variables in proofs is a real pain, one that is hard to eliminate. This paper has an interesting way of dealing with them.
It describes a Coq verification of a compiler for untyped Mini-ML with references and exceptions, compiling to an idealized assembly code.

Now, using explicit object variables leads to lots of lemmas about substitution. On the other hand, full higher-order abstract syntax (HOAS) would lead quickly to non-termination, and hence unsoundness. So instead, the paper introduces a closure semantics in which variables are just numbers pointing into a heap. Many operations that would reorder substitutions (and hence require big proofs) don’t affect the closure heap, so the proofs go through easily.

The paper also uses a high degree of proof automation: it has a generic proof script that looks at the hypotheses and goals, and decides what approaches to try. The proof script is a bit slow, because the evaluation mechanism of Coq wasn’t designed for this particularly (but see my blog post from PADL about accelerating normalization). However, because the proof is highly automatic, it is not hard to evolve the compiler and proof simultaneously. In fact, the compiler and proof were developed incrementally in exactly this way.

Verified Just-in-Time Compiler on x86, Magnus Myreen

JIT compiling has become standard for accelerating web pages, amongst other things. Good JIT compilers can take into account the input to the program as well as the program itself. But it is hard to get them right, as witnessed apparently by a recent Firefox bug.

An aspect that makes verification tricky is that code and data are mixed: a JIT compiler essentially produces self-modifying code. The paper provides a programming logic for self-modifying code by adapting Hoare triples so that there is both “before” and “after” code. Some practical issues also need to be handled: for example, the frame property has subtleties regarding the instruction cache, and code updates can sometimes happen too recently to show up in the current instruction stream. The x86 documentation is a little vague about how soon code modifications can be relied on to be present.

The end result is a verified JIT compiler from a toy input byte-code language, but mapping to a realistic semantics for x86. The compiler produces comparable code quality to C for a GCD example.
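To make the symbolic-evaluation idea behind the Tristan & Leroy validator concrete, here is a toy Haskell sketch of my own (not the paper’s development): evaluate two straight-line instruction sequences over symbolic inputs and compare the resulting expressions for the registers of interest. A real validator must also normalize expressions (for instance, modulo the renamed temporaries mentioned above) rather than relying on syntactic equality.

import qualified Data.Map as Map

data Expr = Var String | Add Expr Expr | Mul Expr Expr
  deriving (Eq, Show)

data Instr = Instr String String String String  -- op, dst, src1, src2

type SymState = Map.Map String Expr

-- A register we haven't written yet stands for its initial (symbolic) value.
lookupSym :: String -> SymState -> Expr
lookupSym r = Map.findWithDefault (Var r) r

step :: SymState -> Instr -> SymState
step st (Instr op dst s1 s2) = Map.insert dst e st
  where
    e = case op of
          "add" -> Add (lookupSym s1 st) (lookupSym s2 st)
          "mul" -> Mul (lookupSym s1 st) (lookupSym s2 st)
          _     -> error ("unknown op: " ++ op)

-- Two sequences agree on regs if symbolic evaluation yields equal results.
equivalentOn :: [String] -> [Instr] -> [Instr] -> Bool
equivalentOn regs p1 p2 =
  all (\r -> lookupSym r (run p1) == lookupSym r (run p2)) regs
  where run = foldl step Map.empty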

Read More