Architectural Security, the Ardennes, and Alfred the Great

This article originally appeared in the Spring 2018 edition of the U.S. Cybersecurity Magazine

Much of cyber defense today relies on the same approach used in kinetic defense over the last few thousand years. We use hard perimeters (firewalls) to repel attacks, sentries (IDSs) to trigger incident response, and carefully guarded entry points (VPNs, websites) to meet functional requirements (wait…security is still a non-functional requirement?). It’s both a poor defense and an indication that we have a poor model of our cyber adversaries.

Admittedly, the standard defense model is easier and less (immediately) costly than the alternative of hardened applications and databases. Nobody seems to notice, though, how that defensive strategy has often worked out for the defenders. “I have a firewall” worked out poorly, for example, on May 12, 1940, when a poorly supported and organized Wehrmacht Army Group A punched through the “impenetrable perimeter” of the Ardennes at Sedan, 20 kilometers west of where Fort No. 505 terminated the Maginot Line, and sprinted through France’s “soft center” to the English Channel — sealing the fate of France and Belgium in WWII.

[Image: Statue of Alfred the Great at Winchester. By Odejea, CC BY-SA 3.0.]

Fast rewind to 895 AD. Wessex. Alfred the Great. Over 20 years, Alfred studied his enemy and carefully organized Wessex into a checkerboard defense, combining an array of fortified, garrisoned towns with a standing mobile field force. How did that work out? Wessex was the sole kingdom and royal house in England to survive the micel hæðen here — the “great heathen army” — a loose confederation of Viking war bands that destroyed nearby Northumbria, East Anglia, and Mercia. 1,107 years later, Alfred ranked 14th in a poll of the “100 greatest Britons”.

Alfred’s defense-in-depth, what Miksche called “islands of resistance”, assumed that attackers would penetrate every defensive perimeter, and that localized defensive strength had to stand alone. This defense might have failed against an invading force intent on destroying the checkerboard. But against a “limited aims” offensive strategy (one that seeks to plunder specific targets), the checkerboard paid off. As it turns out, it also paid off against an adversary intent on destroying the defenders in 1942, when Auchinleck stopped Rommel at the First Battle of El Alamein. Today, we call the checkerboard a “zero-trust model”. It assumes that every network and host is already compromised and may be malicious.

Interesting, isn’t it, how the “limited aims” attack strategy models modern cyber attacks better than the “destroy the defender” strategy? So why do we still choose defenses designed for the latter when we are most often faced with the former?

But…can application developers ever be motivated, like Alfred, to design security into every application from the ground up? So far, it’s been an easy escape to cite expensive development and distraction from functional requirements – a “dancing pigs” argument. Perhaps, given sufficient commitment and exemplars, we can turn that tide. I’ve recently been lucky to have a view into a “security, built-in” approach to new application development at Callisto, a non-profit working to combat sexual assault and harassment in a variety of industries. A quick tour of their raison d’être and technical approach to information security may offer a good example of a better way.

Callisto’s validated premise is that reporting an incident of sexual assault is both significantly faster and more likely when survivors know they are not the only victim of an assailant. In Callisto’s new invitation-only system, users from partner organizations can submit detailed information about incidents and perpetrators, along with relevant personal data about themselves. Invited users receive encrypted email invitations to activate accounts on the system and verify their identity. Once verified, users are free to submit incident reports that include one or more identities of the accused perpetrator: a cell phone number, a social media URL, an e-mail address. When a perpetrator is identified by more than one victim, a lawyer will reach out to each victim individually (and, if appropriate, may connect the group of matched victims together) to help them find their desired pathway to justice.
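
As a hypothetical illustration, the shape of such a report before any client-side encryption might look like the sketch below; every name here is invented for this example, not Callisto’s actual schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PerpetratorIdentifier:
    """One way a user can identify the accused (illustrative only)."""
    kind: str    # e.g. "phone", "social_url", "email"
    value: str   # the raw identifier, later protected client-side

@dataclass
class IncidentReport:
    """Hypothetical shape of a report before client-side encryption."""
    reporter_contact: str   # survivor's own contact information
    narrative: str          # free-text account of the incident
    perpetrator_ids: List[PerpetratorIdentifier] = field(default_factory=list)

report = IncidentReport(
    reporter_contact="survivor@example.org",
    narrative="(detailed account collected by the interview flow)",
    perpetrator_ids=[PerpetratorIdentifier("phone", "+1-555-0100")],
)
```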

Callisto’s data corpus is thus highly sensitive personally identifiable information, and a rich target for exfiltration. In response, Callisto’s design is a model for a new generation of applications in which zero trust is the new black. Callisto’s security stance is driven by the NIST Risk Management Framework. Comprehensive use of privacy-preserving encryption technologies, cryptographic proofs, multi-factor authentication, and best practices in system security design protect user data and activities. Personal information of users, their accounts of incidents, and the identities of perpetrators are encrypted before they leave the user’s browser and remain encrypted until they are decrypted on the personal workstation of a lawyer. That is, Callisto servers never store or compute on data about users, incidents, or perpetrators in plaintext form.
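
A minimal sketch of that end-to-end property, using PyNaCl’s SealedBox as a stand-in for Callisto’s actual cipher suite (see the white paper for what the system really uses):

```python
# Sketch only: SealedBox stands in for Callisto's real cryptography.
# The point is that the server relays ciphertext it has no key to open.
from nacl.public import PrivateKey, SealedBox  # pip install pynacl

# The lawyer generates a keypair; only the public half is published.
lawyer_key = PrivateKey.generate()

# --- In the user's browser (conceptually) ---
report = b"incident narrative + reporter contact info"
ciphertext = SealedBox(lawyer_key.public_key).encrypt(report)

# --- On Callisto's servers ---
# The servers store and forward `ciphertext`; they never hold a key
# that can open it, so a server compromise yields only ciphertext.

# --- On the lawyer's personal workstation ---
assert SealedBox(lawyer_key).decrypt(ciphertext) == report
```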

In addition, everyone — even Callisto’s trusted lawyers — is cryptographically prevented from accessing incident or perpetrator identity information unless more than one user has identified the same perpetrator. This part of Callisto’s security story begins when the user’s browser interviews the user and collects detailed information about the incident and perpetrator. Because matching perpetrator identities among disparate users and incidents is a key part of Callisto functionality, those identities must be stored in a way that allows their comparison but prevents them from being learned. First, a pair of servers aids the client browser by providing an oblivious pseudo-random function (OPRF) service that boosts the entropy of the perpetrator identity without the servers learning that identity. That boosted value is always the same for a given identity, giving a common reference point among users. Next, the client creates a secret share of the boosted value (by mapping it to a point in the space of possible perpetrator identities), and also derives a pseudo-random value from the boosted value that allows discovery of perpetrators in common among distinct users.
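
Under the hood, an OPRF can be as simple as blinded exponentiation in a prime-order group. The toy sketch below shows the single-server version of the idea (Callisto’s design uses a pair of servers and an elliptic-curve group; the prime, key, and identifier here are purely illustrative):

```python
# Toy OPRF: the client learns H(id)^k without the server learning `id`,
# and the server's key k never leaves the server. Real deployments use
# elliptic-curve groups and far larger parameters.
import hashlib, secrets

P = 2039   # toy safe prime, P = 2*Q + 1
Q = 1019   # order of the quadratic-residue subgroup

def hash_to_group(identity: str) -> int:
    """Map an identifier into the order-Q subgroup by hashing and squaring."""
    h = int.from_bytes(hashlib.sha256(identity.encode()).digest(), "big")
    return pow(h % P, 2, P)

# --- Client: blind the identity so the server never sees it ---
identity = "+1-555-0100"           # a perpetrator identifier (illustrative)
r = secrets.randbelow(Q - 1) + 1   # random one-time blinding factor
blinded = pow(hash_to_group(identity), r, P)

# --- Server: apply its long-term secret PRF key to the blinded value ---
k = 777   # fixed across all users, so equal identities give equal outputs
evaluated = pow(blinded, k, P)

# --- Client: unblind to obtain the deterministic "boosted" value ---
r_inv = pow(r, -1, Q)                    # inverse of r modulo the group order
boosted = pow(evaluated, r_inv, P)       # equals hash_to_group(identity)^k

assert boosted == pow(hash_to_group(identity), k, P)
```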

The system uses the pseudo-random value to periodically perform an offline search for multiple occurrences of a perpetrator. Matching is done without access to perpetrator identities or incident records in unencrypted form, so an adversary who penetrates the servers can learn nothing about perpetrator identities or incidents from the data stored there, or from the matching process.
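
Conceptually, that matching pass is just a group-by on the derived index. In the sketch below, the index strings are made-up stand-ins for the real derived values; equal indexes imply the same perpetrator, yet reveal nothing about who that perpetrator is:

```python
from collections import defaultdict

# (user_id, match_index) pairs as they might sit in the database.
submissions = [
    ("user-a", "9f2c…"),
    ("user-b", "41d7…"),
    ("user-c", "9f2c…"),   # same index as user-a: same perpetrator
]

buckets = defaultdict(list)
for user, index in submissions:
    buckets[index].append(user)

# Only indexes reported by more than one user trigger the next step.
matches = {idx: users for idx, users in buckets.items() if len(users) > 1}
print(matches)   # {'9f2c…': ['user-a', 'user-c']}
```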

The secret share is encrypted with the public key of a Callisto lawyer. Possession of a single decrypted share offers no information. However, possession of two or more shares from distinct users for a matching perpetrator allows the lawyer to plot a line in the perpetrator identity space, find its intercept, and thus recover the perpetrator’s identity. Because the perpetrator identity is the source of a key used to encrypt the relevant incident report, the lawyer can then decrypt those incident reports, identify the reporting users, and then decrypt their personal contact information and begin the resolution process.
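
In toy form, that recovery step is two-point interpolation of a line over a finite field. A sketch with illustrative parameters (a real deployment derives the line from the OPRF output and works over a large prime field):

```python
# Each client derives the same line y = a*x + b (mod Q) for a given
# perpetrator, where b encodes the recovery secret; its "share" is one
# random point on that line. One point reveals nothing about b, but any
# two points with distinct x pin the line down.
import secrets

Q = 1019        # toy modulus; real systems use a large prime
a, b = 345, 678 # line parameters derived from the boosted value (illustrative)

def make_share():
    x = secrets.randbelow(Q)
    return (x, (a * x + b) % Q)

# Two users who reported the same perpetrator each produced a share:
x1, y1 = make_share()
x2, y2 = make_share()
while x2 == x1:              # distinct x-coordinates needed to interpolate
    x2, y2 = make_share()

# --- Lawyer, after decrypting both shares with their private key ---
slope = (y2 - y1) * pow(x2 - x1, -1, Q) % Q   # recover the slope
intercept = (y1 - slope * x1) % Q             # recover the secret intercept
assert intercept == b
```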

These techniques, along with others we haven’t covered, make Callisto a hardened application that treats information security as a first-class requirement — part of a checkerboard defense appropriate for the limited-aims adversary model, and cryptographically hard to defeat. It sets dancing pigs aside in favor of a design paradigm that more organizations should follow in stewarding sensitive information. The Callisto crypto demonstration and white paper can be found at https://cryptography.projectcallisto.org. Take a minute and look at an ancient new way to make your organization secure: one app at a time.