Advances in deep neural networks (DNNs) have increased their deployment in safety-critical systems, such as vision perception modules for autonomous vehicles and airborne collision avoidance controllers for unmanned aircraft. Modern DNNs, like linear classifiers before them, are susceptible to adversarial input perturbations: small changes to an input that cause unexpected changes in the output. It is therefore critical to verify their adversarial robustness properties to avoid undesirable and even catastrophic behaviors. Proving adversarial robustness is extremely challenging due to the high dimensionality of the input space and the size and complexity of modern neural networks. In this talk, I will present a highly versatile refutation-based adversarial robustness verification framework that performs abstraction and partitioning of discretized hyperrectangular regions in the input space of classification networks. A novel technique is presented for reliable adversarial example generation with improved variance. Coupled with a gradient-based dimension ranking heuristic, it enables the framework to prioritize refuting the adversarial robustness property, efficiently eliminating regions shown unsafe by discovered adversarial examples and providing incremental feedback for retraining and analysis. This work evaluates the efficiency of several dimension ranking heuristics for adversarial example generation, as well as the partitioning strategies of the proposed framework. I will also briefly describe my other research activities, including an efficient probabilistic verification tool for the analysis of infinite-state synthetic biological models and the formal verification of Network-on-Chip systems.
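The ideas of gradient-based dimension ranking and refutation by adversarial example search can be sketched in a few lines. The snippet below is a minimal illustration only, not the VERAPAK implementation: it uses a hypothetical toy linear classifier, ranks input dimensions by the magnitude of the gradient of the logit margin, and perturbs the top-ranked dimensions within an epsilon-hyperrectangle around the input until the predicted label flips. All names (`predict`, `rank_dimensions`, `search_adversarial`) are illustrative.

```python
import numpy as np

# Hypothetical toy linear classifier standing in for a DNN: logits = W x + b.
W = np.array([[ 2.0, -1.0,  0.1],
              [-1.5,  2.5, -0.2]])
b = np.array([0.0, 0.1])

def predict(x):
    """Return the predicted class label (argmax of the logits)."""
    return int(np.argmax(W @ x + b))

def rank_dimensions(label):
    """Rank input dimensions by the gradient of the logit margin.

    For a linear model, the gradient of (logit[other] - logit[label])
    w.r.t. the input is simply the row difference of W; a larger
    magnitude means perturbing that dimension moves the decision faster.
    """
    other = 1 - label  # binary case, for simplicity
    grad = W[other] - W[label]
    order = np.argsort(-np.abs(grad))  # highest-impact dimensions first
    return order, grad

def search_adversarial(x, epsilon, top_k=2):
    """FGSM-style search restricted to the top-k ranked dimensions,
    staying inside the hyperrectangle [x - epsilon, x + epsilon]."""
    label = predict(x)
    order, grad = rank_dimensions(label)
    x_adv = x.copy()
    for i in order[:top_k]:
        x_adv[i] += epsilon * np.sign(grad[i])
        if predict(x_adv) != label:
            return x_adv  # region refuted: adversarial example found
    return None  # no counterexample found; region may still be safe

x0 = np.array([1.0, 0.2, 0.0])
adv = search_adversarial(x0, epsilon=0.6)
```

In the full framework, a region where this search fails would be further partitioned or handed to an abstraction-based prover, while a discovered counterexample immediately refutes robustness for that region.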
Dr. Zhen Zhang is an assistant professor in the Electrical and Computer Engineering Department at Utah State University. His primary research focus is the formal verification of concurrent and probabilistic systems in the following application areas: deep neural networks, synthetic biological network models, and distributed on-chip routing protocols used in cyber-physical systems. He is interested in the formal specification of case studies from these areas and in developing algorithms and tools that advance formal and semi-formal verification techniques in both conventional and stochastic settings. His research group has developed several open-source tools, including VERAPAK, a refutation-based adversarial robustness verification tool for deep neural networks, and STAMINA, an efficient state-truncation-based probabilistic model checking tool. Zhen's research has been supported by the National Science Foundation and Adobe Systems Incorporated.
This talk was live-streamed and a recording is available by clicking the YouTube image at the top of this page.