Mechanism Design and the CAMDEN Program

At Galois, our work isn’t limited to theoretical computer science; we also engage with research challenges that straddle many disciplines. One of our ongoing efforts for DARPA, the Collaborative APIs through Mechanism Design and Engineering (CAMDEN) program, focuses on the field of Mechanism Design, which lives at the intersection of Game Theory, Behavioral Economics, and Computer Science.

Game theory has been called the “science of strategic interaction”: it is the study of how self-interested parties interact when pursuing their goals, and of how the payoffs for their actions depend on the choices made by other parties. In game theory, we analyze a set of strategic interactions in an effort to understand the incentives that led the players to make the moves they did. Mechanism design approaches decision-making from the inverse perspective, asking: how do we design a scenario so that players are incentivized to make moves that achieve a set of desired outcomes? The central idea in the field of mechanism design is that people respond to incentives, and thus, desired outcomes can be designed for.
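
To make the inversion concrete, here is a minimal Python sketch, with all payoff numbers and the flat penalty invented purely for illustration. The first half does game-theoretic analysis (given payoffs, find the stable outcomes); the second half does mechanism design (adjust the payoffs so the outcome we want becomes stable).

```python
from itertools import product

def pure_nash_equilibria(payoffs):
    """payoffs maps (row_move, col_move) -> (row_payoff, col_payoff)."""
    moves = sorted({m for pair in payoffs for m in pair})
    equilibria = []
    for r, c in product(moves, repeat=2):
        # An outcome is stable if neither player gains by deviating alone.
        row_best = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in moves)
        col_best = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in moves)
        if row_best and col_best:
            equilibria.append((r, c))
    return equilibria

# A prisoner's-dilemma-style game: mutual cooperation pays best overall,
# but each player is individually tempted to defect.
game = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
print(pure_nash_equilibria(game))      # [('defect', 'defect')]

# The mechanism designer's move: impose a penalty of 3 on defection
# (a fine, a contract clause, a reputational cost), and the desired
# outcome becomes the stable one.
designed = {moves: tuple(p - 3 * (m == "defect") for p, m in zip(pays, moves))
            for moves, pays in game.items()}
print(pure_nash_equilibria(designed))  # [('cooperate', 'cooperate')]
```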

We’re surrounded by the fruits of mechanism design, even though we typically don’t notice. Examples of mechanism design-influenced entities abound: Federal Communications Commission (FCC) frequency spectrum auctions (and online auctions in general), organ donor matching, medical residency assignments, shopping malls, video game platforms, and computer operating systems. Anytime you come across the word ‘platform’ in a business context, the specific design of that platform benefits from insights cultivated in the field.

Most of these successful examples of mechanism design focus on economic incentive structures (market share, profit and loss, etc.). Yet what works for the commercial world is not necessarily appropriate for the Department of Defense, whose goals center on maintaining national security. With the CAMDEN project, Galois is exploring the use of mechanism design to effectively incentivize and accelerate collaboration in critical DoD domains, including export control strategies for supply chain security, natural disaster and pandemic response, cyber deterrence, and especially the defense acquisition process.

Each of these areas is fraught with unique and thorny complexities. We’re looking to study them closely, model and simulate the human decision-making behavior at the heart of each, and then explore how different variables can be changed to push that behavior in one direction or another. Ultimately, our goal is to build one or more deployable mechanisms to serve as a generalizable solution: a dynamic software program to model incentive structures for optimal design of complex systems. 

Let’s dig in. 

Expanding the Scope of Mechanism Design

Traditional economics often portrays humans as ‘fully rational’ decision-makers, where being rational means not only acting in your own self-interest but also assuming that everyone else is doing the same. However, numerous experiments over the past few decades have demonstrated that characterizing human beings as “utility maximizers” (i.e., agents trying to optimize their financial gain) is not a good approximation of actual human behavior in the real world. This has given rise to the subfield of ‘behavioral economics,’ which treats people as having motives more complex than those of so-called ‘rational actors’ solving mathematical optimization problems.

Here’s a well-known example. Suppose we are given the following two scenarios, and asked what choice we’d make in each scenario: 

  1. Scenario 1: a choice between:
    a.  a 100% chance to gain $500, versus
    b.  a 50% chance to gain $1200
  2. Scenario 2: a choice between:
    a.  a 100% chance to lose $500, versus
    b.  a 50% chance to lose $1200

In Scenario 1, most subjects prefer option a, locking in a sure $500 rather than gambling on winning $1200 with option b (even though the expected value of option b is $600). But in Scenario 2, most subjects would rather not lock in the sure $500 loss of option a, and instead prefer option b’s 50% chance of walking away with no loss at all (even though the expected value of option b is a $600 loss). Experiments like these show that people don’t treat gains and losses symmetrically, nor do they necessarily make mathematically optimal choices.
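
To see how far observed behavior strays from the ‘utility maximizer’ model, it helps to put numbers on both. Below is a minimal Python sketch comparing a pure expected-value decision rule with a toy value function in the spirit of Kahneman and Tversky’s prospect theory; the curvature and loss-aversion parameters are illustrative rather than fitted, and real prospect-theory models also reweight probabilities.

```python
def expected_value(lottery):
    """lottery: list of (probability, outcome) pairs."""
    return sum(p * x for p, x in lottery)

def prospect_value(lottery, alpha=0.5, lam=2.25):
    """Toy prospect-theory value: diminishing sensitivity via a power
    function, with losses weighted lam times more heavily than gains.
    The parameters are illustrative, not fitted to data."""
    def v(x):
        return x ** alpha if x >= 0 else -lam * (-x) ** alpha
    return sum(p * v(x) for p, x in lottery)

scenarios = {
    "Scenario 1": {"option a": [(1.0, 500)],  "option b": [(0.5, 1200), (0.5, 0)]},
    "Scenario 2": {"option a": [(1.0, -500)], "option b": [(0.5, -1200), (0.5, 0)]},
}

for name, options in scenarios.items():
    ev_pick = max(options, key=lambda o: expected_value(options[o]))
    pt_pick = max(options, key=lambda o: prospect_value(options[o]))
    print(f"{name}: EV maximizer picks {ev_pick}; loss-averse subject picks {pt_pick}")
# Scenario 1: EV maximizer picks option b; loss-averse subject picks option a
# Scenario 2: EV maximizer picks option a; loss-averse subject picks option b
```

The expected-value chooser picks the opposite option from real subjects in both scenarios; the loss-averse value function reproduces the observed pattern.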

Results like this point to one of the central research challenges for the field of mechanism design in general: how can we expand the scope of domains amenable to mechanism design techniques to ones that can’t easily be reduced to a purely economic cost/benefit tradeoff? As mentioned, this question is at the heart of the CAMDEN program: how do we conceptualize, design, and build incentive frameworks that can incorporate motives and incentives that are ‘psychological’, not purely economic?

Aggregating Psychological and Economic Incentives

One of the challenges for CAMDEN is that economic and psychological incentives can be at odds with each other, and there’s an understandable concern that social norms and virtues can be “crowded out” by purely economic, self-regarding interests. The canonical example of this is the inadvertent experiment at an Israeli daycare center. Initially, a social norm kept late pickups rare: parents wanted to be perceived as “good citizens.” But when the daycare attempted to further reduce the incidence of late pickups by instituting fines, there was a seemingly counterintuitive but significant increase in late pickups. The explanation for this phenomenon is that by establishing fines, the daycare had turned pickups into transactional events: instead of wanting to avoid the guilt of making the teachers stay late, parents now faced a simple economic decision of whether to buy the extra time.
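
A toy model makes the crowding-out effect easy to see. In the sketch below, every number is hypothetical: a parent is late whenever the benefit of the extra time exceeds the cost, and instituting a fine replaces a large social cost (guilt) with a smaller monetary one.

```python
# Hypothetical dollar-equivalent of feeling like a bad citizen, and a
# fine that is smaller than the social norm it displaces.
GUILT_COST = 20.0
FINE = 10.0

def is_late(benefit_of_extra_time, fine_in_place):
    # The fine crowds out the guilt: the cost of lateness becomes
    # whichever incentive is currently in force, not their sum.
    cost = FINE if fine_in_place else GUILT_COST
    return benefit_of_extra_time > cost

# Parents vary in how much the extra time is worth to them.
benefits = [5.0, 12.0, 15.0, 18.0, 25.0]
print(sum(is_late(b, fine_in_place=False) for b in benefits), "late before the fine")
print(sum(is_late(b, fine_in_place=True)  for b in benefits), "late after the fine")
# 1 late before the fine, 4 after: the fine made lateness more common.
```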

It’s tempting to conclude from the previous example that psychological and economic incentives are destined to be at odds with each other, but it’s not that simple: sometimes the two incentive classes can be reinforcing. An example of this comes from microfinance experiments, in which participants seek a sequence of very small loans, with each new loan contingent upon repayment of the existing one. Borrowers were divided into two groups:

  1. Individual Liability (IL) – individuals borrow in the first period, and are granted a loan in the second period only if they repay the first one.
  2. Joint Liability (JL) – groups of borrowers take out loans in the first period (often for unrelated projects); second-period loans are granted only if everybody in the group repays during the first period.

The repayment rate in the Joint Liability group was significantly higher than in the Individual Liability group. Researchers found that emotional factors such as guilt and shame were decisive in borrower behavior: rather than acting as simple ‘rational actors,’ borrowers had an additional incentive not to lose face in the community by ruining future opportunities for the others in their JL group.
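
A similarly toy payoff model shows how the social term can tip the decision. The numbers below are hypothetical, chosen only to illustrate the mechanism: repaying is not worth it on purely financial grounds, but under Joint Liability the added social cost of defaulting changes the calculus.

```python
REPAYMENT_COST    = 100.0  # what repaying now costs the borrower
FUTURE_LOAN_VALUE = 80.0   # value the borrower places on the next loan
SOCIAL_COST       = 50.0   # guilt/shame of sinking the group's next loan (JL only)

def repays(joint_liability):
    # The borrower repays when staying in the program, plus any social
    # cost avoided, outweighs the cost of repayment.
    value_of_repaying = FUTURE_LOAN_VALUE + (SOCIAL_COST if joint_liability else 0.0)
    return value_of_repaying > REPAYMENT_COST

print("IL borrower repays:", repays(joint_liability=False))  # False
print("JL borrower repays:", repays(joint_liability=True))   # True
```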

Another factor we need to incorporate into the design of usable incentive mechanisms is cheating and collusion. Leonid Hurwicz won the 2007 Nobel Prize in Economics for his foundational work in incentive-compatible mechanism design. In his Nobel lecture, entitled “But Who Will Guard the Guardians?”, he discussed the temptation for players to ‘jump outside the game’ to reap bigger gains, and the need for mechanism designers to include the means for deterring and/or punishing cheaters.

As an example of this temptation, there have been instances of online auctions in which players used their bids not to signal their interest in an item, but to threaten other bidders. As a simplified illustration of this real-world scenario, imagine an auction in which bids are typically made in round multiples of one hundred dollars (e.g., $400, $700, etc.), in which bidder A is interested in both item #5 and item #6, and bidder B is interested only in item #6. Halfway through the auction, A has the current high bid of $800 on both item #5 and item #6. Suddenly, B responds by bidding $906 on item #5, an item that they supposedly have no interest in, with a bid that is not a multiple of $100. What’s going on?

The answer is that B is using the $906 overbid on an item they don’t want (but A does) as a clear signal to A to stay away from item #6: the off-round bid gets A’s attention, and its trailing digits point at item #6. It worked – A stopped bidding on item #6, which B went on to win.
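
For intuition, here is a purely illustrative heuristic, not a description of any real auction platform, for flagging this kind of signaling bid: look for bids that break the round-number convention and whose trailing digits name another item in the auction.

```python
def flag_signaling_bids(bids, item_ids):
    """bids: list of (bidder, item_id, amount) tuples. Flags a bid when
    it breaks the $100 round-number convention and its trailing two
    digits match the id of a *different* item in the auction."""
    flags = []
    for bidder, item, amount in bids:
        trailing = int(amount) % 100          # e.g., 906 -> 6
        if trailing != 0 and trailing in item_ids and trailing != item:
            flags.append((bidder, item, amount, f"points at item #{trailing}"))
    return flags

bids = [("A", 5, 800), ("A", 6, 800), ("B", 5, 906)]
print(flag_signaling_bids(bids, item_ids={5, 6}))
# [('B', 5, 906, 'points at item #6')]
```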

The moral of the story here is that incentive design goes far beyond simple economic calculations – psychological motivations, both cooperative and selfish, need to be considered in the design process, especially the rich interactions between them.

Bringing Mechanism Design to the DoD – “Skin in the Game”

As noted, the findings of mechanism design have been successfully applied in commercial settings; the CAMDEN program is about bringing those benefits to the DoD. An important observation from the discipline is that the precise shape of the incentive structures put in place matters – what works for the commercial world will not necessarily be appropriate for the DoD.

The CAMDEN solution for applying mechanism design to the DoD is to unify economic and psychological motives through the principle of “Skin in the Game.”  To have skin in the game in a scenario means that you have a stake in the outcome – when things go well, you’re rewarded, and if things go poorly, you take the consequences. And our research thus far has identified the phenomenon of risk aversion as the primary obstacle to experiencing skin in the game.

Risk aversion shows up in the DoD as both social and institutional norms inside an organization. Consider, for instance, the norm of “if it’s not explicitly allowed, it’s forbidden,” which creates situations in which an institution’s “stop energy” dominates its “go energy.” In these situations, organizations actively eliminate skin in the game by creating “accountability sinks,” in which there are no real decisions to be made, and so no consequences (pro or con) attach to the outcomes. If we’re going to create skin in the game in the DoD, we’re going to need to mitigate risk aversion.

Our work in CAMDEN is focused on understanding these phenomena from both a theoretical perspective (analysis and simulation) and an empirical one (experiments such as tabletop exercises). The outcome of this work will be a ‘playbook’ that educates DoD program managers and innovators on how incentive structures work in DoD-like settings, and that can guide concrete, effective actions to organically incentivize good outcomes.

Looking ahead, our hope is that the CAMDEN playbook and platform will serve as a foundational framework for designing systems that not only account for but optimize both economic and psychological incentive structures. Fully realized, this could enable a more efficient defense acquisition process in which competing vendors and platform owners communicate more efficiently and effectively, resulting in higher-quality deliverables and faster schedules; improved analysis and understanding of semiconductor supply chain challenges, helping shape optimal export control strategies; and even better modeling and strategies for an optimal global pandemic response.

Stay tuned!


Distribution Statement “A” (Approved for Public Release, Distribution Unlimited)

The views, opinions and/or findings expressed are those of the author and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.