Cryptography continues to rapidly transform our world. It seems like every day there’s a new story about fully homomorphic encryption, blockchains, and how these technologies secure billions and even trillions of dollars in assets.
We’ve written before about cryptographic algorithms, and twice about cryptographic assurance. The primary audience for those posts has been people who work with these concepts every day.
But today we want to talk about why cryptographic implementations can have exploitable flaws built in from the start. If there is even the slightest imprecision inside these critical technologies, they might be vulnerable to cybersecurity threats.
Got your attention? Cool.
(Don’t worry, there’s good news, too.)
Here’s an example of how a system might have built-in flaws.
A mission-critical system demands a cryptography implementation. The development team most likely plans to write the program in a commonly-used programming language, such as C or Java. At some point in this process, the team creates (or finds) a specification for the program, which is usually written in English or in pseudo-code.
Neither type of specification is executable – that is, it can’t be run by a computer. (You’ll often find pseudo-code only in a PDF. Not ideal.)
Because these specifications can’t be run, there is no mechanical way to check them – the specification might already be imprecise, and no one would know.
Once the specification is written (or obtained), it’s time to implement the cryptography. For a hardware implementation, that means translating the algorithm into a hardware design language. The resulting design is then characterized – checked against requirements such as throughput and energy consumption. If it doesn’t meet those requirements, the implementer revises the hardware design, but that raises a new problem – the implementer can’t be sure the changes they’ve made in the hardware still match the original specification.
In fact, they can’t be sure if it’s working as originally intended.
So, yes, now they could have an incorrect cryptographic implementation without realizing it.
The design could be fabricated into a circuit at this point or placed into a field-programmable gate array (FPGA). The best-case scenario here is that no obvious errors turn up. The worst case is that errors are found, and the implementer has to hunt down the cause.
Even in the best-case scenario, the implementer could still have a system that has a built-in security hole and is vulnerable to exploitation.
Okay, enough bad news. Here’s some good news: Galois makes tools that can prevent these flaws in the first place.
Technology That Works As Intended
The first way to combat the scenario described above is to build systems that work exactly as they were designed.
Next, these systems need to provide iron-clad evidence that they do, in fact, work as intended. Finally, as evolving technology forces software revisions, critical systems need to maintain that iron-clad level of assurance.
But how do you prove that your system works as intended without testing every single component?
Galois’s flagship tool for this type of work is Cryptol, which simplifies the design, implementation, and verification process for cryptographic implementations.
Cryptol is a domain-specific language for cryptography; in other words, it’s a programming language created specifically to describe cryptographic algorithms. Cryptol has a solid 20-year track record and is currently used in globally distributed and automated systems.
Programmers and engineers can design or specify an algorithm in Cryptol, and that specification can be executed directly. This allows programmers to revise their design so it more closely matches how it would be implemented in a working system.
This means that the specification is an actual program that can be run. (Much better than a pseudo-code specification in a PDF!) Implementers of the specification can actually experiment with and run the code that appears in the specification directly. Test cases can also be generated from Cryptol, which allows continuous assessment of the implementation’s correctness.
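To make this concrete, here is a minimal sketch of what a Cryptol specification looks like. The algorithm is a toy one invented purely for illustration – XOR a byte with a key, then rotate by one bit; it is not a real cipher – but real specifications are written in the same style:

```cryptol
// Toy "cipher" for illustration only: XOR the plaintext byte with
// the key, then rotate the result left by one bit.
encrypt : [8] -> [8] -> [8]
encrypt key pt = (pt ^ key) <<< 1

// The inverse: rotate right by one bit, then XOR with the key again.
decrypt : [8] -> [8] -> [8]
decrypt key ct = (ct >>> 1) ^ key
```

Unlike pseudo-code in a PDF, this can be loaded into the Cryptol interpreter and run immediately – for example, evaluating `encrypt 0x2b 0x41` produces a concrete byte – and test vectors for an implementation can be generated straight from it.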
You could use these capabilities to write a program one time and then perpetually assure its correctness.
Galois uses other tools (such as SAW) to verify that a Cryptol specification and its implementation behave identically. Cryptol can also automatically check and prove properties about Cryptol programs on its own.
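As a sketch of what such a property looks like – again using a self-contained toy example rather than a real algorithm:

```cryptol
// A Cryptol property is just a boolean-valued function. This one
// states that rotating an 8-bit word left and then right by the
// same amount gives back the original word.
property rotateRoundTrip (x : [8]) (n : [3]) = (x <<< n) >>> n == x
```

At the Cryptol REPL, `:check rotateRoundTrip` tests the property on random inputs, while `:prove rotateRoundTrip` hands it to an automated solver and establishes it for every one of the 2,048 possible input combinations – no hand-written test suite required.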
Familiarity is also important; Cryptol has an “FPGA generator” that translates cryptographic algorithms into a hardware language using terminology the hardware implementer already knows. The cryptographic programmer won’t need to learn a new hardware language to make sure the implementation of the design works as intended.
Through its FPGA generator, Cryptol has the added benefit of reducing time-to-market for hardware systems, since implementations invariably need revision to tune characteristics such as energy consumption, operating temperature, and throughput.
The end result: a system that works as intended from design through implementation.
Galois has designed some systems that directly run Cryptol implementations. This is one reason we’ve live-blogged while using Cryptol. It is incredibly intuitive to use.
Future Technologies Depend on Correct-By-Construction
Technology is only going to grow more complex. For example, quantum computing could break current encryption standards. Digital signatures, in particular, could be compromised by entities using quantum computing methods.
We’re always evolving Cryptol, and based on current use cases, we can confidently say that Cryptol will accelerate the development of correct, high-performance, post-quantum algorithms.
It’s possible to start planning now to secure and harden systems against new advances in technology. Companies will need push-button automation to provide assurance that these new systems will always work as intended. Cryptol is the perfect tool for the technology of the present – and of the future.