I’m excited to announce that our paper “ANOSY: Approximated Knowledge Synthesis with Refinement Types for Declassification” has been accepted to PLDI 2022.
Programs often need to reveal information about private or sensitive data. For example, consider an advertiser who wants to send an advertisement to users near a restaurant. To accomplish this, the advertiser could query a social media network for the location of every user. This query would be an unsafe declassification, since it reveals private user information: the exact location of each user. A safer query would ask only whether each user is within 50 miles of the restaurant.
The paper presents a method to ensure that programs only answer safe queries that do not reveal too much about private information. This is accomplished by quantifying how much information a query reveals and enforcing user-defined quantitative declassification policies that limit how much information an adversary can learn from declassification queries.
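To make this concrete, here is a minimal, hypothetical sketch of the idea in Python. It is not the paper's implementation (ANOSY synthesizes posteriors and verifies them mechanically with refinement types in LiquidHaskell); it only illustrates, on a 1-D interval model of the adversary's belief, how answering a proximity query splits the prior into posteriors, and how a quantitative policy can reject queries whose answers would shrink some posterior too far. All names (`posteriors`, `safe_to_answer`, the threshold `k`) are illustrative.

```python
# Illustrative 1-D sketch of quantitative declassification.
# The adversary's prior belief about the secret (a user's position)
# is an integer interval (lo, hi). A query asks: "is the secret
# within `radius` of `center`?"

def posteriors(prior, center, radius):
    """Split the prior into the beliefs resulting from each answer.

    Returns the interval consistent with a "yes" answer, plus the
    number of candidate positions left after "yes" and after "no".
    """
    lo, hi = prior
    # Positions consistent with answering "yes".
    yes = (max(lo, center - radius), min(hi, center + radius))
    yes_size = max(0, yes[1] - yes[0] + 1)
    # A "no" answer may leave two disjoint pieces, so we track
    # only the total number of remaining candidates.
    no_size = (hi - lo + 1) - yes_size
    return yes, yes_size, no_size

def safe_to_answer(prior, center, radius, k):
    """Quantitative policy: every possible answer must leave the
    adversary with at least k candidate positions, so no single
    response pins the user down too precisely."""
    _, yes_size, no_size = posteriors(prior, center, radius)
    return min(yes_size, no_size) >= k

# Prior: the user is somewhere along a 1000-mile stretch.
prior = (0, 1000)
print(safe_to_answer(prior, 500, 50, k=10))  # coarse query  -> True
print(safe_to_answer(prior, 500, 0, k=10))   # exact query   -> False
```

The coarse 50-mile query is accepted because even a "yes" answer still leaves 101 candidate positions, while the exact-location query is rejected because a "yes" answer would narrow the belief to a single position, violating the policy.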
If you are interested in learning more about how we use program synthesis to automatically generate posterior belief distributions and mechanically verify the posteriors, check out the full paper here. Thanks to my co-authors Sankha Guria, Niki Vazou, and Marco Guarnieri.