Here are some talks that caught my attention on the opening day of POPL 2010.

Reconfigurable Asynchronous Logic Automata, Neil Gershenfeld

In this invited talk, Neil proposed that we throw out the sequential model of computation from Turing and von Neumann, and re-imagine computing from the ground up, more along the lines of physics as it is manifested in the world. This might be the way forward given the huge power demands that will arise from traditional architectures as they become more and more powerful. Rather than imposing a top-down structure on the machine, Neil proposes we consider systems built up from cellular automata. In ALA each cell is able to do a basic logic operation or, in RALA, reconfigure other cells. Built around 2D grids, and computing in parallel without any global clock, the automata are able to compute many basic functions in linear time, including multiplication and sorting (of course, they also take a linear number of cells to do so).

What I liked about this talk is that it called on me to think more broadly about the nature of computation. I don’t think I buy the line that how we do computing today is wrong. But I do believe that computing and information theory are going to become more and more important in the development of many other subjects, including physics and biology. I also believe that we will need to harness many more structures to do computing, rather than simply relying on gates burned into silicon. Coincidentally, at lunch today Luca Cardelli was telling me about his explorations in computing with DNA. Fascinating stuff!

At the lowest levels, the primitive components of computation reflect the computational substrate, so they are very different from one instantiation to another. As primitive computational elements are combined, however, their descriptions become more abstract and start to look more and more alike; the same mathematical operations may arise in each form, for example. That makes me wonder what the space of truly computationally-neutral specifications looks like: how might we program in such a way that we really don’t have a preference for which substrate the computation is mapped onto? We already consider FPGAs, GPUs and CPUs as standard targets for languages like Cryptol. Let’s conceptually add DNA and RALA to the list, and see if that would have us change anything.

A Simple, Verified Validator for Software Pipelining, Jean-Baptiste Tristan & Xavier Leroy

Software pipelining is a compiler transformation that reorders loop code to make more efficient use of the CPU and of memory accesses. Code from many different iterations of the loop might be drawn together and overlapped so that it executes out of order, and perhaps in parallel. The question is how to verify that the pipelining has been done correctly.

The paper proposes using symbolic evaluation. It describes an equivalence principle which requires proving two separate commutativity properties. These may both be shown by symbolic evaluation of the instruction code, taking care to handle the small differences between the two paths (such as the use of different temporary variables). Overall the process is simple enough that it could be a standard component of any compiler, even one that is otherwise non-verifying.
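To make the flavour of that check concrete, here is a minimal sketch of my own devising, far simpler than the validator in the paper: a toy three-address instruction set is evaluated symbolically into a map from registers to symbolic expressions, and two straight-line sequences are accepted as equivalent when they yield the same expression for every observable register, ignoring temporaries. The instruction set, the register names and `equivalentOn` are illustrative assumptions, not anything taken from the paper.

```haskell
-- Minimal sketch: translation validation of straight-line code by symbolic
-- evaluation. All names and the instruction set are invented for illustration.
import           Data.Map (Map)
import qualified Data.Map as Map

type Reg = String

data Operand = R Reg | Lit Int                 -- a register or a constant
data Instr   = Mov Reg Operand                 -- dst := src
             | Add Reg Operand Operand         -- dst := a + b
             | Mul Reg Operand Operand         -- dst := a * b

-- Symbolic values: the initial contents of a register, or a term built from them.
data Sym = Init Reg | Const Int | SAdd Sym Sym | SMul Sym Sym
  deriving (Eq, Show)

type SymState = Map Reg Sym

operand :: SymState -> Operand -> Sym
operand _ (Lit n) = Const n
operand s (R r)   = Map.findWithDefault (Init r) r s

-- Execute one instruction over symbolic values.
step :: SymState -> Instr -> SymState
step s (Mov d a)   = Map.insert d (operand s a) s
step s (Add d a b) = Map.insert d (SAdd (operand s a) (operand s b)) s
step s (Mul d a b) = Map.insert d (SMul (operand s a) (operand s b)) s

symEval :: [Instr] -> SymState
symEval = foldl step Map.empty

-- Accept two sequences if they compute the same symbolic value for every
-- observable register; temporaries are free to differ.
equivalentOn :: [Reg] -> [Instr] -> [Instr] -> Bool
equivalentOn observable p q = all same observable
  where
    sp = symEval p
    sq = symEval q
    value s r = Map.findWithDefault (Init r) r s
    same r    = value sp r == value sq r

-- Example: different temporaries and a different schedule, same result in "x".
demo :: Bool
demo = equivalentOn ["x"]
  [Mov "t1" (Lit 2), Mul "t1" (R "t1") (R "y"), Add "x" (R "t1") (Lit 1)]
  [Mul "t2" (Lit 2) (R "y"),                    Add "x" (R "t2") (Lit 1)]
```

A real validator must also handle memory accesses, control flow, and the two commutativity obligations mentioned above; the sketch only shows the basic move of comparing symbolic final states instead of concrete runs.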
A Verified Compiler for an Impure Functional Language, Adam Chlipala

Handling object-level variables in proofs is a real pain, and one that is hard to eliminate. This paper has an interesting way of dealing with them. It describes a Coq verification of a compiler for untyped Mini-ML with references and exceptions, compiling to an idealized assembly code.

Now, using explicit object variables leads to lots of lemmas about substitution. On the other hand, full higher-order abstract syntax (HOAS) would quickly lead to non-termination, and hence unsoundness. So instead the paper introduces a closure semantics in which variables are just numbers pointing into a heap. Many operations that would reorder substitutions (and hence require big proofs) don’t affect the closure heap, so the proofs go through easily.

The paper also uses a high degree of proof automation: a generic proof script looks at the hypotheses and goals and decides what approaches to try. The script is a bit slow, because Coq’s evaluation mechanism wasn’t designed with this kind of workload in mind (but see my blog post from PADL about accelerating normalization). However, because the proof is highly automatic, it is not hard to evolve the compiler and the proof simultaneously. In fact, the compiler and its proof were developed incrementally in exactly this way.

Verified Just-in-Time Compiler on x86, Magnus Myreen

JIT compilation has become standard for accelerating web pages, amongst other things. Good JIT compilers can take into account the input to the program as well as the program itself. But they are hard to get right, as witnessed apparently by a recent Firefox bug.

An aspect that makes verification tricky is that code and data are mixed: a JIT compiler essentially produces self-modifying code. The paper provides a programming logic for self-modifying code by adapting Hoare triples so that a specification mentions both the “before” and the “after” code. Some practical issues also need to be handled: for example, the frame property has subtleties regarding the instruction cache, and code updates can sometimes happen too recently to show up in the current instruction stream. The x86 documentation is a little vague about how soon modifications to code can be relied on to be visible.

The end result is a verified JIT compiler from a toy input byte-code language, but targeting a realistic semantics for x86. On a GCD example, the compiler produces code of quality comparable to C.
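To picture what a judgement about self-modifying code might look like, here is a small sketch of my own, not the logic from the paper: a single store holds both code and data, and a specification records the code that must be laid out before execution as well as the code that must be present afterwards, alongside ordinary pre- and postconditions. The types, the `CodeSpec` record and the `satisfies` check are all illustrative assumptions.

```haskell
-- Minimal sketch of a specification for self-modifying code: the judgement
-- carries "before" code and "after" code in addition to the usual pre- and
-- postconditions. This is my own toy formulation, not the paper's logic.
import           Data.Map (Map)
import qualified Data.Map as Map

type Addr  = Int
type MWord = Int                 -- stand-in for a machine word
type Store = Map Addr MWord      -- one store holds both code and data

data CodeSpec = CodeSpec
  { preCond    :: Store -> Bool       -- assumption about the initial store
  , codeBefore :: [(Addr, MWord)]     -- code laid out before execution
  , codeAfter  :: [(Addr, MWord)]     -- code that must be present afterwards
  , postCond   :: Store -> Bool       -- guarantee about the final store
  }

-- A crude dynamic check of one concrete run against such a specification:
-- install the "before" code, run the machine, then test the postcondition
-- and confirm that the promised "after" code really is in the store.
-- A programming logic proves this once for all runs; here we merely test one.
satisfies :: (Store -> Store) -> Store -> CodeSpec -> Bool
satisfies run initialData spec =
     preCond spec start
  && postCond spec finish
  && all inPlace (codeAfter spec)
  where
    start  = Map.union (Map.fromList (codeBefore spec)) initialData
    finish = run start
    inPlace (a, w) = Map.lookup a finish == Just w
```

In the paper the corresponding judgement is proved against a realistic x86 model rather than tested run by run, and much of the work is in the side conditions, such as the interaction between the frame property and the instruction cache mentioned above.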