Adversary Resistance

AI/ML systems face a growing threat from adversaries aiming to bias or otherwise degrade their inferential capability and the decision-making of the operators who rely on them. Attacks range from the dramatic, such as destroying a radar station that supplies critical data to a threat analysis tool, to subtler subversions, such as placing stickers on stop signs to mislead a smart car's visual perception system. The most insidious tampering tactics introduce input perturbations that are imperceptible to humans yet can significantly derail a model's performance.
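As a point of reference, the sketch below shows one well-known way such an imperceptible perturbation can be crafted, the Fast Gradient Sign Method (FGSM). It is offered only as illustrative background: the model, loss, and perturbation budget are assumptions for the example, not a description of any system or attack discussed above.

```python
# Minimal FGSM sketch: nudge each pixel by at most `epsilon` in the
# direction that most increases the model's loss. The change is visually
# negligible but often flips the model's prediction.
# Assumes a PyTorch classifier that maps a batch of images to logits.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=2 / 255):
    """Return adversarially perturbed copies of `images` (values in [0, 1])."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step along the sign of the input gradient, then clip back to valid range.
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0, 1).detach()
```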

At Galois, we research and develop advanced adversary resistance strategies to ensure that AI/ML models continue to function as intended and remain trustworthy even under the most challenging conditions and attacks.

Our approach leverages our team's considerable skills in formal methods and the cognitive sciences, encompassing both neuro-symbolic and bio-inspired techniques that train AI/ML models to better account for complex contextual factors and to fill gaps in reasoning that could otherwise lead to misguided actions or analysis.
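For comparison, the sketch below shows adversarial training, a widely used hardening baseline in which perturbed inputs are folded back into the training loop. It is illustrative background only, not a description of Galois's neuro-symbolic or bio-inspired methods, and it reuses the hypothetical fgsm_perturb helper from the earlier sketch.

```python
# Adversarial training sketch: penalize mistakes on clean and perturbed
# inputs alike, so the model learns to resist small input perturbations.
# Assumes the `fgsm_perturb` helper defined in the previous sketch.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=2 / 255):
    """Run one training step on a batch plus its FGSM-perturbed copy."""
    model.train()
    # Craft perturbed inputs with the current model parameters.
    adv_images = fgsm_perturb(model, images, labels, epsilon)
    optimizer.zero_grad()
    loss = (F.cross_entropy(model(images), labels)
            + F.cross_entropy(model(adv_images), labels))
    loss.backward()
    optimizer.step()
    return loss.item()
```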