Cabal and Licenses

When writing software that uses open source libraries, the license of a dependency is a real concern. It becomes necessary to watch for license compatibility, as well as to ensure that the terms of the license are satisfied when making a source distribution.

As a first attempt at license compatibility checking, we have added some extra checks to the configure step of Cabal, so that warnings are generated if any direct dependencies have licenses that conflict with that of the configured package [1]. However, there are some limitations to this approach. First, it doesn't rule out the possibility that a dependency of a package deemed compatible will conflict with the license of your executable. Second, it doesn't allow for a BSD3 library to depend on a GPL library, where the conflict arises only when an executable is produced from the combination.

Some licenses place requirements on how a source or binary distribution can happen. For example, the BSD3 license requires you to include its copyright notice in any distribution you make.

In order to speed up this process, we decided to try to extract this information from the GHC package database. According to the GHC manual, the license file should be tracked by the package database, though upon closer inspection, that field doesn't seem to be tracked. To address this shortcoming, we developed a small patch adding the functionality to GHC and Cabal [2]. Using this patched version of GHC and Cabal, we've developed a small tool to walk the dependencies of a cabal package and collect the license files that they have registered in the package database.

Both of these patches are available for you to try out, attached to the tickets below; they have been accepted and are expected to make it into the next release of Cabal.

[1] http://hackage.haskell.org/trac/hackage/ticket/481
[2] http://hackage.haskell.org/trac/hackage/ticket/710
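For a flavor of what the dependency walk looks like, here is a minimal sketch written against the stock ghc-pkg command rather than the patched GHC and Cabal described above. It collects each transitive dependency's license field (stock ghc-pkg does not track the license file), glosses over normalizing the package ids that appear in the depends field, and omits error handling.

    -- Rough sketch: query ghc-pkg for the "depends" and "license" fields and
    -- walk the dependency graph, printing each package's license.  The real
    -- tool described above uses a patched GHC/Cabal that also registers the
    -- license *file*; stock ghc-pkg only records the license name.
    module Main where

    import Data.List (nub)
    import System.Environment (getArgs)
    import System.Process (readProcess)

    -- Ask ghc-pkg for one field of a package.  The output looks like
    -- "license: BSD3", so we drop the leading field name.
    field :: String -> String -> IO [String]
    field pkg f = fmap (drop 1 . words) (readProcess "ghc-pkg" ["field", pkg, f] "")

    -- Transitively collect (package, license) pairs, avoiding repeats.  Note
    -- that the ids listed under "depends" may need normalizing before being
    -- fed back to ghc-pkg; we ignore that here.
    licenses :: [String] -> [String] -> IO [(String, String)]
    licenses _    []     = return []
    licenses seen (p:ps)
      | p `elem` seen = licenses seen ps
      | otherwise     = do
          lic  <- field p "license"
          deps <- field p "depends"
          rest <- licenses (p : seen) (nub (deps ++ ps))
          return ((p, unwords lic) : rest)

    main :: IO ()
    main = do
      [pkg] <- getArgs              -- e.g. pass a single package name
      info  <- licenses [] [pkg]
      mapM_ (\(p, l) -> putStrLn (p ++ ": " ++ l)) info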

Read More

White Paper: High Assurance Software Development

Galois is pleased to announce a new white paper entitled High Assurance Software Development, written by David Burke, Joe Hurd and Aaron Tomb.

The purpose of this paper is to describe how to make software assurance a part of a science of security. Software assurance as practiced is a grab-bag of techniques, heuristics, and lessons learned from earlier failures. Given the importance of software to critical infrastructures (electricity, banking, medicine), this is an untenable situation; the smooth functioning of our society depends on this software, and we need a more rigorous foundation for assessments about the trustworthiness of these systems.

In this paper we present an evidence-based approach to high assurance, in which diverse development teams can communicate in a common language to tackle the challenges of developing secure systems. Furthermore, this framework supports formal inference techniques (in particular, a trust relationship analysis), so that we can use automated reasoning to deal with scalability issues. Perhaps most importantly, an evidence-based approach lets us tailor the tools that we bring to bear on each claim: formal methods, testing, configuration management, and so forth all have their place in an assurance argument. In the end, it's all about deploying systems where the residual risk has been minimized, given finite resources and time. Understanding this and managing it effectively is what the science of security is all about.

Read More

Galois, Inc. Wins Three Department of Energy Small Business Research Awards

Galois, Inc., a Portland, Oregon computer science R&D company, has been awarded two 2010 Phase I SBIR research awards and one 2010 Phase II award from the US Department of Energy Office of Advanced Scientific Computing Research, to conduct research into high-performance computing infrastructure.

A Deployable, Robust File System for Parallel I/O

When considering high-performance parallel computers, it is easy to overlook the importance of disk storage. In this research, Galois will address the topic of disk storage for parallel computers, and create a deployable, robust file system that will reduce downtime due to faults and increase productivity through improved system performance. Galois will take a synthesis approach, combining several strands of existing research on the subject of file systems and transitioning them into a robust, fully-featured product. In doing so, we will use modern formal methods research, in the form of model checking, to validate our design and improve the reliability of our implementation. The benefits of this research will be to improve the efficiency and decrease the cost of large, parallel file systems. This work will be applicable to Department of Energy laboratories, as well as to commercial users of massive parallel or distributed storage, such as online storage and backup providers or grid storage providers.

This project builds on Galois' experience with industrial model checking, and our prior work on file system design and implementation via formal methods.

Improved Symbol Resolution for Portable Build Systems

Modern High Performance Computing utilizes a variety of different hardware and software platforms. These differences make it difficult to develop reusable components, which leads to a significant decrease in productivity. This project will investigate the design of portable build systems that are simple, yet sufficiently robust with respect to symbol resolution that they are able to adapt and build software across many different platforms. This project will result in increased productivity for software developers who design portable software components. In particular, we anticipate significant benefits in the area of High Performance Computing, where the multitude of different hardware and software platforms makes the problem of reusing software particularly acute.

This work takes advantage of Galois' background in domain-specific language design and in build systems, in particular Cabal and other system configuration software.

Collaboration and Sharing on the Grid

The goal of the "Grid 2.0" project is to improve the ability of a distributed team of researchers to collaborate on research using grid middleware computing infrastructure. In Phase I of this project, we developed a prototype integration of a typical collaboration-oriented web application with an open source data grid middleware system, establishing that such integration is feasible. In Phase II, we directly address the weakest point for collaboration applications on grid systems: open, standardized protocols for identity management, authorization, and delegation on the grid, via a federated identity management system providing support for software authorization and delegation on the Open Science Grid. The intent is to provide a secure, open, and flexible identity management system for use on grid infrastructure projects, portable to other grid middleware systems, and interoperable with existing identity management schemes. The open source results of the research will form the basis for applications of identity management systems in commercial cloud and grid systems.

This project builds on Galois' experience with cross-domain collaboration tools and secure identity management systems (including OpenID, OAuth, SAML and X.509) developed for several clients over the past decade.

For more information about these projects, contact Don Stewart (dons@galois.com).

Read More

Orc in Haskell, now on Hackage

Orc is a concurrent scripting language, now available as an embedded DSL in Haskell. I like to think of Orc as the combination of three things: many-valued concurrency, external actions (effects), and managed resources, all packaged in a high-level set of abstractions that feel more like scripting than programming. It provides a very flexible way to manage multiple concurrent actions, like querying remote web sites, along with timeouts and default actions.

Source code is available on Hackage; the easiest way to access it is with cabal (i.e. 'cabal install orc').

Also available is a draft paper entitled Concurrent Orchestration in Haskell, which explains how to use Orc, as well as describing the implementation in detail.

Feedback welcome. Enjoy!
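To give a flavor of the kind of orchestration involved, here is a plain-Haskell sketch using only the base library's System.Timeout, not the Orc API itself: a query raced against a timeout with a default answer. The querySite function is a hypothetical stand-in for a remote request; Orc expresses this sort of pattern far more compositionally, so see the paper for the real combinators.

    -- Plain base-library sketch of "query with a timeout and a default";
    -- Orc itself provides much richer combinators for composing such actions.
    module Main where

    import Control.Concurrent (threadDelay)
    import Data.Maybe (fromMaybe)
    import System.Timeout (timeout)

    -- Hypothetical stand-in for querying a remote web site.
    querySite :: String -> IO String
    querySite url = do
      threadDelay 2000000                      -- pretend the site takes 2s
      return ("response from " ++ url)

    -- Run an action with a timeout (in microseconds) and a default result.
    withDefault :: Int -> a -> IO a -> IO a
    withDefault micros def act = fmap (fromMaybe def) (timeout micros act)

    main :: IO ()
    main = do
      answer <- withDefault 1000000 "no answer in time"
                            (querySite "http://example.org")
      putStrLn answer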

Read More

Galois Tech Talks, now on Vimeo!

For a number of years, Galois Inc. has been organizing technical seminars presented by visiting researchers, Galois engineers, and members of the vibrant Portland technical community. The seminars span a wide variety of topics, ranging from functional programming, formal methods, and compiler and language design, to cryptography and operating system construction. The talks are free and open to the interested public. Announcements of upcoming talks are posted to this blog about a week in advance.

Over the last few months we have received a number of requests to share videos of the talks with the wider community. As a result, we are very pleased to announce the Galois tech talk channel on Vimeo. Recent Galois talks should become available over the next few weeks, followed by future presentations. Enjoy!

Galois tech talk channel: http://vimeo.com/channels/galois

Read More

Solving n-Queens in Cryptol

The eight queens puzzle asks how to place eight queens on a chess board such that none of them attacks any other. The problem generalizes easily to n queens on an n×n board. In this post, we'll see how to solve the n-Queens puzzle in Cryptol, without lifting a finger!

Representing the board

It is easy to see that any solution to the n-Queens puzzle will have precisely one queen on each row and column of the board. Therefore, we can represent the solution as a sequence of n row numbers, corresponding to subsequent columns. For instance, the example board from the Wikipedia page can be represented by the sequence

3  7  0  2  5  1  6  4
recording the row of the queen appearing in each consecutive column, starting from the top-left position. (And yes, we always count from 0!)
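For readers more at home in Haskell than Cryptol, here is a small sketch (not the Cryptol from the post) of this representation and the check that no two queens attack each other. Columns are distinct by construction, so only shared rows and diagonals need to be ruled out.

    -- Haskell sketch of the representation: one row number per column.
    import Data.List (tails)

    queensOk :: [Int] -> Bool
    queensOk rows =
      and [ ri /= rj && abs (ri - rj) /= abs (i - j)  -- distinct rows, diagonals
          | ((i, ri) : rest) <- tails (zip [0 ..] rows)
          , (j, rj) <- rest ]

    -- The example board above: queensOk [3, 7, 0, 2, 5, 1, 6, 4] == True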

Read More

POPL 2010, Day 1

Here are some talks that caught my attention on the opening day of POPL 2010.

Reconfigurable Asynchronous Logic Automata, Neil Gershenfeld

In this invited talk, Neil proposed that we throw out the sequential model of computation from Turing and Von Neumann, and re-imagine computing from the ground up, more along the lines of physics as it is manifested in the world. This might be the way forward given the huge power demands that will arise from traditional architectures as they become more and more powerful. Rather than imposing a top-down structure on the machine, Neil proposes we consider systems built up from cellular automata. In ALA each cell is able to do a basic logic operation or, in RALA, reconfigure other cells. Built around 2D grids, and computing in parallel without any global clock, the automata are able to compute many basic functions in linear time, including multiplication and sorting (of course, they also take a linear number of cells to do so).

What I liked about this talk is that it called on me to think more broadly about the nature of computation. I don't think I buy the line that how we do computing today is wrong. But I do believe that computing and information theory are going to become more and more important in the development of many other subjects, including physics and biology. I also believe that we will need to harness many more structures to do computing rather than simply relying on gates burned in silicon. Coincidentally, at lunch today Luca Cardelli was telling me about his explorations in computing with DNA. Fascinating stuff!

At the lowest levels, the primitive components of computation reflect the computational substrate, so they are very different from one instantiation to another. As primitive computational elements are combined, however, the different descriptions become more abstract, and start to look more and more alike. The same mathematical operations may arise in each form, for example. That makes me wonder what the space of truly computationally-neutral specifications looks like: how might we program in such a way that we really don't have a preference for what substrate we map the computation onto? We already consider FPGAs, GPUs, and CPUs as standard targets for languages like Cryptol. Let's conceptually add DNA and RALA to the list, and see if that would have us change anything.

A Simple, Verified Validator for Software Pipelining, Jean-Baptiste Tristan & Xavier Leroy

Software pipelining is a compiler transformation that reorders loop code to make more efficient use of the CPU and memory accesses. Code from many different iterations of the loop might be drawn together and overlapped to execute out of order, and perhaps in parallel. The question is how to verify that the pipelining has been done correctly.

The paper proposes using symbolic evaluation. It describes an equivalence principle which requires proving two separate commutativity properties. These may both be shown by symbolic evaluation of the instruction code, being careful to handle the small differences (such as different temporary variables in the two paths). Overall the process is simple enough that it could be a standard component of any compiler, even if the compiler is non-verifying overall.

A Verified Compiler for an Impure Functional Language, Adam Chlipala

Handling object-level variables in proofs is a real pain, one that is hard to eliminate. This paper has an interesting way of dealing with them. It describes a Coq verification of a compiler for an untyped Mini-ML with references and exceptions, compiling to an idealized assembly code.

Using explicit object variables leads to lots of lemmas about substitution. On the other hand, full higher-order abstract syntax (HOAS) would lead quickly to non-termination, and hence unsoundness. So instead, the paper introduces a closure semantics in which variables are just numbers pointing into a heap. Many operations that would reorder substitutions (and hence require big proofs) don't affect the closure heap, so the proofs go through easily.

The paper also uses a high degree of proof automation: it has a generic proof script that looks at the hypotheses and goals, and decides what approaches to try. The proof script is a bit slow because the evaluation mechanism of Coq wasn't designed for this particular use (but see my blog post from PADL about accelerating normalization). However, because the proof is highly automatic, it is not hard to evolve the compiler and proof simultaneously. In fact, the compiler and proof were developed incrementally in exactly this way.

Verified Just-in-Time Compiler on x86, Magnus Myreen

JIT compiling has become standard for accelerating web pages, amongst other things. Good JIT compilers can take into account the input to the program as well as the program itself. But it is hard to get them right, as witnessed apparently by a recent Firefox bug.

An aspect that makes verification tricky is that code and data are mixed: a JIT compiler essentially produces self-modifying code. The paper provides a programming logic for self-modifying code by adapting Hoare triples so that there is both "before" and "after" code. Some practical issues also need to be handled: for example, the frame property has subtleties regarding the instruction cache, and code updates can sometimes happen too recently to show up in the current instruction stream. The x86 documentation is a little vague about how soon code modifications can be relied on to be present.

The end result is a verified JIT compiler from a toy input byte-code language, but mapping to a realistic semantics for x86. The compiler produces code of quality comparable to C for a GCD example.

Read More

PADL/PEPM 2010, Day 2

Again, here are a few talks that caught my attention and I thought I would share my impressions. Enjoy!

An Ode to Arrows, Hai Liu & Paul Hudak (PADL 2010)

Representing the infinite tower of derivatives of a function as a lazy list has become popular in the Haskell world, the most recent example being Conal Elliott's paper on Beautiful Differentiation at ICFP 2009. This work explores taking the same ideas and using them to express systems of Ordinary Differential Equations.

A natural translation of these ideas into Haskell requires recasting the ODEs in an integral form, rather than a purely differential form, leading to a shallow embedding of the equations as recursive equations in Haskell. Unfortunately, this simple translation leads to quadratic computational complexity, in both time and space. Memoization can address this moderately well, except that one is left with the problems of managing (and freeing) the memo-tables.

Moving to arrows leads to a better solution. The recursion is expressed using Paterson's 'loop' construct, which puts it in the control of the programmer. When combined with Causal Commutative Arrows (http://lambda-the-ultimate.org/node/3659), the circuits can be optimized into tight iterative loops, with dramatic improvements in performance.

A DSL Approach to Protocol Stack Implementation, Yan Wang & Veronica Gaspes (PADL 2010)

Implementations of network code are very hard to connect with their specifications, leading to major challenges for maintenance. This is significant for embedded devices, which need implementations that appropriately trade off power, speed, and memory usage. To that end, IPS is a DSL for generating network code. It allows packet formats to be defined using dependencies between fields, handles the interaction of protocol and packet, and provides a framework for constructing protocol stacks.

In a comparison exercise, IPS was used to generate code for the Rime protocol stack. Compared with the published hand-written version in C, the IPS code was a quarter of the size, but produced an executable 25% larger. The performance on sending packets was 10% worse in the IPS version, but both versions were equivalent for receiving packets. No measurements were given for ease of maintenance or design exploration between the two versions, but I know which one I would prefer to have to work with…

I/O Guided Detection of List Catamorphisms, Martin Hofmann & Emanuel Kitzelmann (PEPM 2010)

Having the computer write its own programs has always been appealing. You provide some input/output pairs and the machine comes up with a generalization, expressed as a program that has the same behavior. Broadly, there are two ways to do this: analytical (where the structure of the data is examined to build the program), and generate & test (where many programs are produced, and their behavior improved, perhaps by evolutionary combinations).

This work uses the idea of catamorphisms (also known as fold or reduce functions) as a template to explore, when the i/o pairs come from a recursive data type (just lists at the moment, I think). Using the fact that if a function g satisfies 'g [] = v' and 'g (x:xs) = f x (g xs)' for some 'f' and 'v', then 'g = fold f v', the system figures out what the i/o pairs for the sub-functions would be, and then tries to deduce their definitions. It includes specific smarts that notice when the form of the function corresponds to a 'map' or 'filter'. (A small illustration of the fold property appears at the end of this post.)

Bridging the Gap Between Symbolic and Efficient AES Implementation, Andrew Moss & Dan Page (PEPM 2010)

Different implementations of AES, the standard for block encryption, run at very different speeds. The original reference implementation takes about 840 cycles per round, the OpenSSL implementation takes 30, and a top-performing assembly code version takes 21.6 cycles per round. The CAO DSL and compiler described in this paper take a high-level spec and compile it, producing code that takes 23 cycles per round. Impressive.

CAO is a sequential language, designed to be easily related to the pseudo-code that crypto specifications often use. A significant amount of effort in the compiler then goes into deducing what parallelism is possible and how best to balance the work. CAO has matrices as first-class objects, to allow the compiler the opportunity to deduce good representations. There are also optimizations to convert references from matrices in their entirety to references to the appropriate sub-portions, so as to eliminate unnecessary additional sequential dependencies. The compiler also has some optimizations that are specific to AES.

Clone Detection and Elimination for Haskell, Christopher Brown & Simon Thompson (PEPM 2010)

I liked this talk partly because it reminded me to go and play with HaRe (https://www.cs.kent.ac.uk/projects/refactor-fp/hare.html). The Haskell Refactorer is an Emacs and Vim tool for manipulating Haskell 98 programs, providing structural transformations (like fold/unfold), data type changes (like adding or eliminating arguments), slicing, and now clone elimination (cue Star Wars music).

The clone eliminator finds repeated code forms above a threshold size, and deduces the least-general generalization of the code, inventing new functions if necessary. Clone detection is entirely automatic, but it is up to the programmer to decide which clones to act upon — sometimes adding abstraction helps the structure of a program, and sometimes it obscures it. It works within monadic code too. HaRe works mostly at the AST level, though it does use the raw token stream for some of its discovery. It also works across modules.

Making "Stricterness" More Relevant, Stefan Holdermans & Jurriaan Hage (PEPM 2010)

The old approaches to strictness were all denotational, but to actually use strictness within the compiler it is better to have a syntax-driven formulation, i.e. in logic. There is a nice approach which uses the concept of 'relevance', where 'x' is relevant to 'e' if every evaluation of 'e' demands 'x'. Clearly, relevance is equivalent to strictness.

But actually it doesn't work so well in the presence of 'seq'. The original formulation of relevance assumed that the only time a function would be evaluated was at its point of application. Then there is no way to distinguish 'undefined' from '\x -> undefined'. 'Seq' changes all that. The paper shows how to fix relevance by tracking which lambdas are used applicatively (i.e. applied somewhere in the program in a place that will be evaluated). The definition is circular, but the least solution is the one intended.
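As promised above, here is a small Haskell illustration (mine, not the paper's) of the fold property the catamorphism detection exploits: the base-case i/o pair fixes v, and the recursive equation turns i/o pairs for g into i/o pairs for the unknown f, which can then be synthesized separately.

    -- Any g satisfying  g [] = v  and  g (x:xs) = f x (g xs)  is foldr f v.
    -- From i/o pairs such as  g [] = 0,  g [2] = 2,  g [1,2] = 3  we get
    -- v = 0 and derived pairs  f 2 0 = 2,  f 1 2 = 3,  suggesting f = (+),
    -- i.e. g = foldr (+) 0 = sum.
    g :: [Int] -> Int
    g = foldr (+) 0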

Read More

PADL/PEPM 2010, Day 1

Here are some papers which caught my imagination at this year's PADL and PEPM conferences in Madrid.

O Partial Evaluator, Where Art Thou? Lennart Augustsson (PEPM 2010)

In this invited talk, Lennart walked us through his personal engagement with PE-like code transformations that he has confronted directly in his career, which has consisted of Haskell programming in a wide variety of commercial and industrial settings. The examples included optimizing airline scheduling code for Lufthansa, eliminating unnecessary hardware elements in Bluespec compilation, and generating efficient Monte Carlo computations in investment banking.

In Bluespec, Lennart needed to introduce a 'tabulate' function which produces a table-driven implementation of its function argument. It is a powerful idea, but it requires a sufficiently powerful set of transformations for its boilerplate to be simplified away. (A toy sketch of the 'tabulate' idea appears at the end of this post.) In this case, Lennart needed to introduce a term '_' which means "don't care", i.e. any convenient result can be returned. During unfolding and partial evaluation, '_' could be replaced with any term that would help the partial evaluator make good progress on simplification. Semantically, this is a form of refinement of non-determinism performed during code transformation (in fact, I'm still not sure what the correct denotational approach to its semantics would be). Lennart then explored using 'tabulate' directly in Haskell, but found it next to impossible to get the GHC simplifier to do enough transformation.

The Paradise EDSL is like Bluespec in that it uses the Haskell framework to bolt together primitive elements from a lower-level language. In this case, the efficiency opportunities arise from, say, 18-argument functions being called repeatedly with 15 known arguments. Lennart introduced a 'compile' function which is semantically the identity function, but which invokes a run-time compilation. Yet again, Lennart found himself writing essentially the same kind of code he had previously written in a different setting. Each time it is fairly simple partial evaluation.

Where, oh where, is the partial evaluator for Haskell? And don't worry about self-application or anything fancy like that. Just something that is solid and reliable.

Explicitly Typed Exceptions for Haskell, Jose Iborra (PADL 2010)

Java gives you safe exceptions, in that if you don't handle an exception that could be raised, the compiler will complain. Currently Haskell does not. This paper builds on Simon Marlow's work providing an exception library with existentially-quantified exceptions, which give access to the exception through class methods. Marlow's exceptions work very nicely with the type system, allowing exception and non-exception code to be intermingled. If only there was a way still to know which particular exceptions could be raised…

Iborra to the rescue! He adapts Marlow's exceptions with additional class constraints that track exactly which exceptions could be raised, fulfilling those constraints when a handler for the exception is provided. Unhandled exceptions show up as missing instances (of handlers), pinpointing the code that is at fault. It all works very beautifully. The downside? He needs to use overlapping instances to be able to 'subtract' those constraints. Hmmm. Normally I'm very down on overlapping instances: it seems really fragile to rely on details of the type-inference engine to select what code should run. But this is a particularly benign case: here the overlapping instances are over phantom type variables, so the same code will run in either case. Maybe Haskell should allow that in general; overlapping instances are fine if the same run-time code is produced however the instances are resolved.

Conversion by Evaluation, Mathieu Boespflug (PADL 2010)

Proof tactics in theorem provers often require large lambda terms to be normalized. For example, some Coq tactics perform reflective computation over other Coq terms. This reduction can be quite inefficient when done explicitly, because of the interpretive overhead, yet the result of evaluation needs to be in a form where the final lambda term can be examined.

This paper shows how to interpret explicit lambda terms efficiently by mapping them onto the underlying evaluation mechanism of the host language (e.g. Haskell). The approach is deceptively simple and consists of a set of straightforward optimizations (higher-order abstract syntax, currying, de Bruijn indices, etc.). Astonishingly, the result is that the evaluation incurs only a 10-30% overhead compared with using the underlying evaluator directly! A must-read for anyone interested in reflective evaluation.

Optimizing Generics Is Easy! Jose Pedro Magalhaes et al. (PEPM 2010)

Generic programming is a way to provide a programmable version of the deriving concept that arises in Haskell. The problem is that the current libraries for implementing generics introduce performance overheads from 50% to 16x compared with writing the original by hand. This paper presents a framework for doing generics using type families to represent the components of algebraic datatypes, together with overloaded operators for mapping between the generic datatype and any user datatype.

GHC does a good job of rewriting much of the overhead away, but needs to be encouraged to do the best it can. There are GHC flags to control decision thresholds for inlining etc. that turn out to enable GHC to completely remove the overhead of the generic code, producing in some cases exactly the code that would get written by hand. The open problem is that increasing the thresholds over the whole program might have negative effects elsewhere in the compilation, so a mechanism for localizing the effect of increased thresholds would be valuable.
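As mentioned above, here is a toy version of the 'tabulate' idea (mine, not the actual Bluespec or Paradise code): replace a function over a small domain with a lookup into a precomputed table. The point of the talk was that, without a real partial evaluator, the boilerplate around such tricks does not reliably simplify away.

    import Data.Array (listArray, (!))

    -- Table-driven implementation of a function on the domain [0 .. n-1].
    -- With the function argument statically known, a partial evaluator could
    -- reduce this to a literal table at compile time.
    tabulate :: Int -> (Int -> a) -> (Int -> a)
    tabulate n f = \i -> table ! i
      where
        table = listArray (0, n - 1) [ f i | i <- [0 .. n - 1] ]

    -- Example (hypothetical): a tabulated 8-bit population count.
    popCount8 :: Int -> Int
    popCount8 = tabulate 256 (\x -> length [ b | b <- [0 .. 7], odd (x `div` 2 ^ b) ])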

Read More

GHC Nominated for Programming Language Award

ACM SIGPLAN recently announced a new award:

The SIGPLAN Programming Languages Software Award is awarded to an institution or individual(s) to recognize the development of a software system that has had a significant impact on programming language research, implementations, and tools. The impact may be reflected in the wide-spread adoption of the system or its underlying concepts by the wider programming language community either in research projects, in the open-source community, or commercially. The award includes a prize of $2,500.

I think that GHC (Glasgow Haskell Compiler) and the two Simons (Peyton Jones and Marlow) are prime candidates for this. So, being careful to stay within the 500 word limit, I submitted a nomination statement for GHC as follows:

For the last decade, Haskell has been at the center of the most exciting innovations within programming languages, with work on software transactional memory, generalized algebraic data types, rank-N polymorphism, monads, multi-parameter type classes, embedded domain-specific languages, property-based testing, data parallelism, thread-profiling, and so on, generating hundreds of research papers from many diverse research groups.

GHC, the Glasgow Haskell Compiler, is the vehicle that made this research possible. It is hard to explore radical ideas on real systems, yet the GHC team created a flexible platform that allows other researchers to explore the implications of their ideas and to test whether they really work in the large. From the first beta release in 1991, GHC emphasized collaboration and open "bazaar" style development, as opposed to the "cathedral" development of most of its contemporaries. GHC was open source even before Linux made open source cool. GHC has continued in the same vein, now listing over 60 contributors to the codebase.

In those early days, efficient compilation of a higher-order, allocation-rich, lazy functional language seemed to be a pipe dream. Yet GHC has risen to be a top-flight performer in the online language performance shootout (shootout.alioth.debian.org), comparable with Java Server-6, and approaching native C in performance overall. This is a tribute to the incredible amount of profound optimization built into the compiler, with techniques like cross-module code migration, unboxed data types, and automated removal of intermediate data structures, all done through correctness-preserving transformations that exploit the algebraic simplicity of Haskell terms, even in the presence of monadic effects.

The impact GHC has had on programming language research would be sufficient to merit an award by itself, but GHC is having a corresponding influence in industry. By showing the feasibility of purely functional, statically-typed programming in the large, GHC Haskell has also had clear influence on many of the newest generation of languages, such as C#, F#, Java Generics, LINQ, Perl 6, Python, and Visual Basic 9.0. As Soma Somasegar, Microsoft Developer Division Chief, said in 2007, "One of the important themes in programming languages over recent years has been a move to embrace ideas from functional programming, [which] are helping us address some of the biggest challenges facing the industry today, from the impedance mismatch between data and objects to the challenges of the multi-core and parallel computing space."

GHC now supports a burgeoning professional Haskell world. The O'Reilly book Real World Haskell, targeted to professional programmers and oriented to GHC, was published in 2008. It went on to win the Jolt Award for best technical book of the year. In 2009 there were 3500+ Haskell package updates, with more than 100,000 package downloads in November alone. GHC is now used across the financial sector in institutions like Credit Suisse and Standard Chartered Bank, and for high assurance software in companies like Amgen, Eaton, and Galois. Some of these companies came together in 2009 to create the Industrial Haskell Group, whose purpose is to ensure the health and longevity of GHC.

499 words. Whew! There is so much that could be said, but let's hope this is enough. I think the case is very strong, and both Simons deserve honor and accolades for their work. Thank you both so much!

Read More