The “FREAK” TLS/SSL flaw, and related thoughts

“Formal verification methods…should be considered the prime choice for verification of complex and mission-critical software ecosystems.”

New vulnerabilities in the software infrastructure we all depend on for privacy are discovered frequently. Thus it was not surprising when an INRIA, MSR, and IMDEA team announced discovery of a significant TLS/SSL vulnerability. The surprises in this announcement were the intentionally designed nature of some aspects of the vulnerability, and the length of time that the vulnerability has been in play. The team reported that OpenSSL and Apple TLS/SSL clients share a defect that allows a man-in-the-middle (MITM) attack to force them to handle secure sessions with weak “export-grade” (512b RSA key) security, making factoring attacks feasible. A combination of this defect and certain intentional weaknesses in popular TLS/SSL server implementations creates a vulnerability that makes it possible to compromise the privacy and integrity of secure communications between clients and servers.

Three behaviors in TLS/SSL components contribute to the “FREAK” vulnerability:

  1. The OpenSSL and Apple TLS/SSL clients have a defect whereby a client will accept weaker, “export-grade” (512b) RSA keys (rather than today’s more secure 1024b or larger keys) from a server even when the client did not request such weaker security from the server
  2. A surprising number of TLS/SSL servers (something like 36% of the roughly 14 million servers that serve browser-recognized certificates) still support “export-grade” security if requested, long after the US export restrictions that motivated it were lifted
  3. Because RSA key generation is relatively costly, typical TLS/SSL servers generate a single export-grade key each time the server boots, and re-use that same key for all export-grade sessions until the server goes down (instead of generating a new key for each session, as good crypto practice suggests)
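A minimal sketch of the state-machine oversight behind behavior (1). This is toy Python, not real TLS library code; the function and suite names are hypothetical, but the essence matches the reported defect: the client never checks that an export-grade ephemeral key is only allowed when an export suite was actually negotiated.

```python
REQUESTED_SUITE = "TLS_RSA_WITH_AES_128_CBC_SHA"   # a standard, non-export suite

def buggy_handle_server_key_exchange(rsa_key_bits):
    """Mimics the defect: a ServerKeyExchange carrying a 512b export key is
    accepted even though the client asked for a standard RSA suite (which
    should not involve an ephemeral export key at all)."""
    # BUG: no check that the key matches the suite the client requested.
    return {"accepted": True, "key_bits": rsa_key_bits}

def fixed_handle_server_key_exchange(rsa_key_bits, requested_suite):
    """What a correct client should do: reject an export-grade key unless
    an export suite was explicitly requested."""
    if "EXPORT" not in requested_suite and rsa_key_bits < 1024:
        return {"accepted": False, "key_bits": rsa_key_bits}
    return {"accepted": True, "key_bits": rsa_key_bits}

print(buggy_handle_server_key_exchange(512))                    # silently accepted
print(fixed_handle_server_key_exchange(512, REQUESTED_SUITE))   # rejected
```

The fix is a one-line state check, which is part of why this bug survived unnoticed: nothing in normal (non-adversarial) traffic ever exercises the missing branch.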

Together, these three behaviors can be exploited to compromise privacy and integrity of a secure connection. Here’s how the exploit works in simple terms:

  • a client such as an Android device contacts a web server and requests a standard RSA suite
  • an attacker, perhaps on the local wireless LAN used by the client, intercepts this request and changes it to ask for the export-grade RSA suite instead and then sends it on to the server
  • the server accepts this request, and due to behavior (2) responds to the original client with a 512b (export-grade) RSA key instead of the standard security key
  • due to behavior (1), the client silently accepts this key and adapts to use export-grade crypto
  • meanwhile, the attacker conducts a factoring attack against the RSA modulus in use and is able to recover the RSA decryption key. In early 2015, this attack takes under 8 hours and costs about $100 in cloud computing
  • when the client sends its candidate pre-master secret to the server, the attacker intercepts it and decrypts it, allowing recovery of the session master secret
  • the attacker can now see (and modify) the entire conversation between client and server as if it were “in the clear”, because it has the master key to the conversation
  • due to behavior (3), the attacker can see and modify all export-grade sessions for that server as long as the server stays up
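The factoring step in the walkthrough above can be illustrated with a toy version of the attacker’s math. This sketch uses a tiny modulus in place of a real 512b one, and trial division in place of the Number Field Sieve that a real attacker would run in the cloud; the structure of the recovery — factor the reused export modulus, derive the private exponent, decrypt the intercepted pre-master secret — is the same.

```python
from math import isqrt

p, q, e = 61, 53, 65537            # server's secret primes and public exponent
n = p * q                          # public export-grade modulus, reused per behavior (3)
premaster = 42                     # client's candidate pre-master secret (toy value)
ciphertext = pow(premaster, e, n)  # what the attacker intercepts on the wire

# --- attacker's side: only n, e, and ciphertext are known ---
def factor(n):
    """Trial division stands in for the Number Field Sieve on this toy modulus."""
    for cand in range(2, isqrt(n) + 1):
        if n % cand == 0:
            return cand, n // cand
    raise ValueError("no factor found")

p2, q2 = factor(n)
phi = (p2 - 1) * (q2 - 1)
d = pow(e, -1, phi)                # recovered private exponent (Python 3.8+)
recovered = pow(ciphertext, d, n)
print(recovered == premaster)      # True: session secret recovered
```

Because the server reuses one export key per boot, this one-time factoring cost is amortized across every export-grade session the server handles until it restarts.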

Why TLS/SSL servers support export-grade crypto

The Secure Sockets Layer (SSL) protocol and its successor, TLS, are the de facto solution for most secure communication on untrusted networks such as the Internet. These protocols use RSA public-key (aka asymmetric-key) cryptography to securely exchange symmetric crypto keys that are then used for the bulk of communication. RSA supports various levels of security, but its 512b level is no longer considered secure because it can be broken in a matter of hours. When SSL was designed in the early 1990s, 512b security was already considered “weak”, while larger key sizes were considered “strong”. At that time, the US government restricted export of strong crypto (possibly for national security reasons), but allowed export of the weaker version. This policy meant that in order to communicate with both foreign and domestic partners, US-based servers needed to support both the stronger and weaker versions, along with a protocol for negotiating which to use at session initiation. Even though the policy that created this weakness is long gone, the functionality was never removed from many SSL implementations.

Formal methods and systematic testing

Discovery of the FREAK vulnerability is yet another demonstration that long-standing crypto protocols may function in unintended ways that compromise security. On one hand, validating such protocols by typical testing approaches may (even after many years) not exercise vulnerable code paths. However, formal verification methods and highly systematic testing can detect such weaknesses, as shown by the INRIA, MSR, and IMDEA team. The team first developed and formally verified correctness of a reference TLS implementation, known as miTLS. Using this formally proven reference, they systematically generated incorrect transactions and used this “fuzzing” technique to test the response of existing TLS implementations. Unexpected responses were taken as indicators of potential vulnerabilities, and where possible were developed into exploits used to communicate the nature of the vulnerabilities to the implementors of the libraries being tested. Thus a combination of formal verification and systematic test coverage of the input space led to discovery of important vulnerabilities in highly complex crypto software.
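The team’s technique can be sketched as differential testing against a reference state machine. This toy Python version (hypothetical message names, a deliberately buggy implementation, and a vastly simplified handshake — real TLS state machines are far larger) shows the core idea: the reference defines which message sequences are legal, deviant sequences are generated systematically, and any sequence the implementation under test wrongly accepts flags a potential vulnerability.

```python
import itertools

# The formally specified reference: exactly one legal handshake trace.
LEGAL = [("ClientHello", "ServerHello", "ServerKeyExchange", "Finished")]

def reference_accepts(trace):
    return list(trace) in [list(t) for t in LEGAL]

def buggy_impl_accepts(trace):
    """A hypothetical implementation that forgets to check message ORDER:
    it accepts any trace containing the right set of messages."""
    return sorted(trace) == sorted(LEGAL[0])

# Systematically generate candidate traces and compare the two verdicts.
MESSAGES = ["ClientHello", "ServerHello", "ServerKeyExchange", "Finished"]
deviants = [t for t in itertools.permutations(MESSAGES)
            if not reference_accepts(t) and buggy_impl_accepts(t)]
print(len(deviants), "unexpected acceptances")
```

Each “unexpected acceptance” is a trace the reference rejects but the implementation tolerates — exactly the kind of discrepancy that, in the real study, was developed into exploits and reported to library implementors.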

Key take-away

Crypto algorithms and protocols verified in this way are termed “high-assurance” crypto. Formal verification methods offer a substantial advantage in achieving the goal of high assurance, and should be considered the prime choice for verification of complex and mission-critical software ecosystems.