
Verifying ECC Implementations

Last Thursday, the University of Bristol posted a press release and paper describing a way to exploit a bug in version 0.9.8g of OpenSSL and extract the value of a private key. The bug was known, and has been fixed in recent versions of OpenSSL (0.9.8g was released in 2007, and 0.9.8h fixed the bug […]


Galois to Help DARPA PROCEED to Change the Game

Portland, OR (April 11, 2011) – The Defense Advanced Research Projects Agency (DARPA) has awarded up to $4.7M to Galois, Inc., as research integrator for the PROCEED program (Programming Computation on Encrypted Data), whose goal is to make it feasible to execute programs on encrypted data without having to decrypt the data first. DARPA Program […]


Merging SMT solvers and programming languages

Galois is in the business of building trustworthy software. Such software will have well-defined behavior, and that behavior is assured in some way, whether via model checking, testing, or formal verification. SMT solvers — extensions to SAT solvers with support for variables of non-boolean type — offer powerful automation for solving a variety of assurance […]
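
The excerpt above shows no code, but as a rough illustration of what such a merger can look like from the programmer's side, here is how the SBV Haskell package (one existing bridge between a programming language and SMT solvers; the example is ours, not necessarily the post's) turns ordinary-looking Haskell expressions into solver queries:

    import Data.SBV

    -- Satisfiability: ask the solver for an 8-bit word with x * 3 == 9.
    findX :: IO SatResult
    findX = sat $ \x -> x * 3 .== (9 :: SWord8)

    -- Proof: check a property of all 2^8 words in a single solver call.
    addTwice :: IO ThmResult
    addTwice = prove $ \x -> x + x .== 2 * (x :: SWord8)

The solver does the case analysis: sat returns a satisfying model, while prove returns either Q.E.D. or a concrete counterexample.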


POPL 2010, Day 1

Here are some talks that caught my attention on the opening day of POPL 2010.

Reconfigurable Asynchronous Logic Automata, Neil Gershenfeld

In this invited talk, Neil proposed that we throw out the sequential model of computation from Turing and von Neumann, and re-imagine computing from the ground up, more along the lines of physics as it is manifested in the world. This might be the way forward, given the huge power demands that will arise from traditional architectures as they become more and more powerful. Rather than imposing a top-down structure on the machine, Neil proposed we consider systems built up from cellular automata. In ALA, each cell is able to do a basic logic operation or, in RALA, reconfigure other cells. Built around 2D grids, and computing in parallel without any global clock, the automata are able to compute many basic functions in linear time, including multiplication and sorting (of course, they also take a linear number of cells to do so).

What I liked about this talk is that it called on me to think more broadly about the nature of computation. I don't think I buy the line that how we do computing today is wrong, but I do believe that computing and information theory are going to become more and more important in the development of many other subjects, including physics and biology. I also believe that we will need to harness many more structures to do computing rather than simply relying on gates burned in silicon. Coincidentally, at lunch today Luca Cardelli was telling me about his explorations in computing with DNA. Fascinating stuff!

At the lowest levels, the primitive components of computation reflect the computational substrate, so they are very different from one instantiation to another. As primitive computational elements are combined, however, the different descriptions become more abstract and start to look more and more alike. The same mathematical operations may arise in each form, for example. That makes me wonder what the space of truly computationally-neutral specifications looks like: how might we program in such a way that we really don't have a preference for what substrate the computation is mapped onto? We already consider FPGAs, GPUs, and CPUs as standard targets for languages like Cryptol. Let's conceptually add DNA and RALA to the list, and see if that would have us change anything.

A Simple, Verified Validator for Software Pipelining, Jean-Baptiste Tristan & Xavier Leroy

Software pipelining is a compiler transformation that reorders loop code to make more efficient use of the CPU and memory accesses. Code from many different iterations of the loop might be drawn together and overlapped to execute out of order, and perhaps in parallel. The question is how to verify that the pipelining has been done correctly.

The paper proposes using symbolic evaluation. It describes an equivalence principle that requires proving two separate commutativity properties, both of which may be shown by symbolic evaluation of the instruction code, taking care to handle small differences such as different temporary variables in the two paths. Overall, the process is simple enough that it could be a standard component of any compiler, even if the compiler is non-verifying overall.
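
The paper's development is in Coq and far more detailed, but a toy Haskell sketch (the instruction set and all names here are illustrative, not from the paper) shows the shape of the check: symbolically evaluate two straight-line blocks, then compare the expressions computed for the registers that matter, never looking at temporaries.

    import           Data.Map (Map)
    import qualified Data.Map as Map

    -- Symbolic expressions over the initial register values.
    data Sym = Init String | Lit Int | Add Sym Sym | Mul Sym Sym
      deriving (Eq, Show)

    -- A tiny straight-line instruction set.
    data Instr = Const String Int           -- dst := k
               | AddR String String String  -- dst := a + b
               | MulR String String String  -- dst := a * b

    type SymState = Map String Sym

    -- A register we have not written yet holds its symbolic initial value.
    val :: SymState -> String -> Sym
    val s r = Map.findWithDefault (Init r) r s

    step :: SymState -> Instr -> SymState
    step s (Const d k)  = Map.insert d (Lit k) s
    step s (AddR d a b) = Map.insert d (Add (val s a) (val s b)) s
    step s (MulR d a b) = Map.insert d (Mul (val s a) (val s b)) s

    symEval :: [Instr] -> SymState
    symEval = foldl step Map.empty

    -- Accept two blocks as equivalent if they compute identical symbolic
    -- expressions for every observable register; temporaries that differ
    -- between the blocks are simply never compared.
    equivOn :: [String] -> [Instr] -> [Instr] -> Bool
    equivOn regs p q = all same regs
      where same r = val (symEval p) r == val (symEval q) r

    -- ghci> equivOn ["x"] [Const "t" 2, MulR "x" "x" "t"]
    --                     [Const "u" 2, MulR "x" "x" "u"]
    -- True

A real validator would normalize the expressions, or discharge the comparison with a solver, rather than rely on structural equality, but the division of labor is the same: the compiler transforms freely, and a small checker validates each result.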

A Verified Compiler for an Impure Functional Language, Adam Chlipala

Handling object-level variables in proofs is a real pain, one that is hard to eliminate, and this paper has an interesting way of dealing with them. It describes a Coq verification of a compiler for Untyped Mini-ML with references and exceptions, compiling to an idealized assembly code.

Using explicit object variables leads to lots of lemmas about substitution. On the other hand, full higher-order abstract syntax (HOAS) would quickly lead to non-termination, and hence unsoundness. So instead, the paper introduces a closure semantics in which variables are just numbers pointing into a heap. Many operations that would reorder substitutions (and hence require big proofs) don't affect the closure heap, so the proofs go through easily.

The paper also uses a high degree of proof automation: it has a generic proof script that looks at the hypotheses and goals and decides what approaches to try. The proof script is a bit slow, because the evaluation mechanism of Coq wasn't designed particularly for this (but see my blog post from PADL about accelerating normalization). However, because the proof is highly automatic, it is not hard to evolve the compiler and proof simultaneously; in fact, the compiler and proof were developed incrementally in exactly this way.
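
As a rough illustration of why this avoids substitution lemmas, here is a minimal environment-based evaluator in Haskell (an illustration of the idea only, not the paper's Coq development; a list stands in for its numeric heap). Variables are plain indices, and evaluation never rewrites a term:

    -- Untyped lambda terms with numeric (de Bruijn style) variables.
    data Term = Var Int | Lit Int | Lam Term | App Term Term

    -- A value is a number or a closure: code paired with the environment
    -- it was built in. Variable n just indexes into that environment.
    data Value = Num Int | Closure Term Env
    type Env = [Value]

    eval :: Env -> Term -> Value
    eval env (Var n)   = env !! n          -- lookup, not substitution
    eval _   (Lit k)   = Num k
    eval env (Lam b)   = Closure b env     -- capture the environment
    eval env (App f a) = case eval env f of
      Closure b fenv -> eval (eval env a : fenv) b
      Num _          -> error "applied a number"

    -- ghci> eval [] (App (Lam (Var 0)) (Lit 42))  evaluates to  Num 42

Since no rule ever substitutes into a term, there are no substitution reorderings to reason about; correctness proofs track the environment instead, which is the effect the paper exploits with its closure heap.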

Verified Just-in-Time Compiler on x86, Magnus Myreen

JIT compiling has become standard for accelerating web pages, amongst other things. Good JIT compilers can take into account the input to the program as well as the program itself, but they are hard to get right, as apparently witnessed by a recent Firefox bug.

An aspect that makes verification tricky is that code and data are mixed: a JIT compiler essentially produces self-modifying code. The paper provides a programming logic for self-modifying code by adapting Hoare triples so that there is both "before" and "after" code. Some practical issues also need to be handled: for example, the frame property has subtleties regarding the instruction cache, and code updates can sometimes happen too recently to show up in the current instruction stream. The x86 documentation is a little vague about how soon code modifications can be relied on to be present.

The end result is a verified JIT compiler from a toy input byte-code language, mapping to a realistic semantics for x86. For a GCD example, the compiler produces code quality comparable to C.

Trustworthy Voting Systems

Accurate and reliable elections are a critical component of an effective democracy. However, completely secure and trustworthy voting procedures are difficult to design, and no perfect solutions are known. Ideally, a trustworthy voting system should guarantee both verifiability (the ability to prove that the counted vote matches the submitted ballots) and privacy (the inability to link the contents of a vote with the voter who cast it).

These guarantees may now be achievable. Many researchers have proposed voting protocols that achieve verifiability and privacy in theory, and a few do so under assumptions that are satisfied by current election practices. Most of the protocols involve posting an encrypted version of the contents of every ballot in some public place (likely a web site), and depend on the properties of cryptographic operations to achieve privacy while allowing anyone to verify the final tally. Now that practical, secure voting protocols exist, the time has come to bring them into use.
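
To make "verify the tally without decrypting any ballot" concrete, here is a deliberately toy Haskell sketch in the style of Paillier's additively homomorphic encryption. It uses tiny fixed primes and fixed blinding values, so it is insecure by construction, and it is not the code of Scantegrity II or any real system; it only illustrates the kind of cryptographic property these protocols rely on. Anyone holding just the public ciphertexts can multiply them to obtain an encryption of the vote total:

    -- Toy public parameters: real systems use large random primes.
    p, q, n, n2, g, lam, mu :: Integer
    p   = 17
    q   = 19
    n   = p * q          -- public modulus
    n2  = n * n
    g   = n + 1          -- standard simple choice of generator
    lam = lcm (p - 1) (q - 1)                   -- private key part
    mu  = modInverse (lOf (powMod g lam n2)) n  -- private key part

    -- L(x) = (x - 1) / n, the function used in Paillier decryption.
    lOf :: Integer -> Integer
    lOf x = (x - 1) `div` n

    -- Modular exponentiation by repeated squaring.
    powMod :: Integer -> Integer -> Integer -> Integer
    powMod _ 0 _ = 1
    powMod b e m
      | even e    = let h = powMod b (e `div` 2) m in (h * h) `mod` m
      | otherwise = (b * powMod b (e - 1) m) `mod` m

    -- Modular inverse via the extended Euclidean algorithm.
    modInverse :: Integer -> Integer -> Integer
    modInverse a m = let (_, x, _) = go a m in x `mod` m
      where go x 0 = (x, 1, 0)
            go x y = let (d, s, t) = go y (x `mod` y)
                     in (d, t, s - (x `div` y) * t)

    -- Enc(m, r) = g^m * r^n mod n^2, where r is coprime to n.
    encrypt :: Integer -> Integer -> Integer
    encrypt m r = (powMod g m n2 * powMod r n n2) `mod` n2

    -- Dec(c) = L(c^lam mod n^2) * mu mod n.
    decrypt :: Integer -> Integer
    decrypt c = (lOf (powMod c lam n2) * mu) `mod` n

    main :: IO ()
    main = do
      let votes = [1, 0, 1, 1, 0]          -- yes = 1, no = 0
          rs    = [2, 3, 5, 7, 11]         -- fixed "randomness": insecure!
          board = zipWith encrypt votes rs -- the public bulletin board
          tally = foldr (\c acc -> (c * acc) `mod` n2) 1 board
      print (decrypt tally)                -- 3: the total, nothing else

Here the list of ciphertexts plays the role of the public bulletin board: each voter can check that their ballot appears, and anyone can recompute the encrypted tally. Only holders of the private key can decrypt at all; real protocols add threshold keys, mixing, and zero-knowledge proofs so that not even the trustees can decrypt an individual ballot.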

One existing solution that comes close to achieving these goals while retaining compatibility with current voting practices is the Scantegrity II system. It has the advantage that it can operate under current US election conditions, without requiring any modification to existing optical ballot scanners, and with very little change to the individual voting process. However, the software used in this system is only a prototype, with a number of shortcomings. Voter privacy depends on the ability of a computer system to keep a key database completely secret, and accurate vote counting depends on the correct implementation of complex cryptographic algorithms. The software is tens of thousands of lines of code, and as with any other software of that size, it almost certainly contains many bugs.

We believe that the importance of trustworthy election results, and the past lack of success in creating reliable solutions, warrant a new approach to the design of voting systems. In particular, we advocate a class of techniques known as formal methods, which allow us to make precise mathematical assertions about how software should behave and to determine whether it satisfies those assertions. Government agencies within the Department of Defense make use of formal methods to ensure the reliability of important computer systems, and the draft update to the development standards used by the Federal Aviation Administration, DO-178C, includes provisions for the use of formal methods. Voting systems deserve similar care.