Formal methods + AI: Where does Galois fit in?

Thus far in our ongoing series on artificial intelligence, we've spoken in depth about questions of trust, human perception, and the limitations of generative models. We have focused specifically on large language models (LLMs), due in part to their recent successes and media attention. We've explored questions of data, testing, and broad model implications. However, LLMs […]


Generative AI, Mission Critical Systems, and the Tightrope of Trust

The public release of ChatGPT and DALL-E 2 radically changed our expectations for the near future of AI technologies. Given the demonstrated capability of large generative models (LGMs), the ways in which they immediately captured public imagination, and the level of publicized planned capital investment, we can anticipate rapid integration of these models into current […]


Harnessing Deep Learning to Model Complex Systems

Researchers at Galois have developed DLKoopman – an open-source software tool that uses machine learning to model and predict the behavior of complex, difficult-to-analyze systems. DLKoopman models a system from limited data, and then predicts how it will behave under unknown, often unmeasurable conditions, such as the pressure on a submarine at unknown […]
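The core idea behind Koopman-based tools like DLKoopman is to fit a *linear* operator to snapshots of a system's state and then roll that operator forward to predict states that were never measured. The sketch below illustrates that idea with a plain dynamic-mode-decomposition-style least-squares fit in NumPy; it is an assumption-laden toy, not DLKoopman's actual API, and the toy system (`A_true`) is invented purely for the demo.

```python
import numpy as np

def fit_linear_operator(X, Y):
    """Least-squares fit of A such that Y ≈ A @ X.

    X, Y are matrices whose columns are consecutive state snapshots:
    Y[:, k] is the state one step after X[:, k].
    """
    return Y @ np.linalg.pinv(X)

def predict(A, x0, steps):
    """Roll the fitted operator forward from initial state x0."""
    states = [x0]
    for _ in range(steps):
        states.append(A @ states[-1])
    return np.stack(states)

# Toy demo: generate snapshot data from a known 2-D linear system,
# then recover its dynamics from the data alone.
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1],
                   [-0.2, 0.95]])
snapshots = [rng.standard_normal(2)]
for _ in range(50):
    snapshots.append(A_true @ snapshots[-1])
data = np.stack(snapshots).T          # shape (2, 51), columns = states

A_fit = fit_linear_operator(data[:, :-1], data[:, 1:])
traj = predict(A_fit, data[:, 0], 50)
print(np.allclose(traj.T, data, atol=1e-6))
```

For genuinely nonlinear systems, the Koopman approach first lifts states into a learned feature space (in DLKoopman's case, via an autoencoder) where the dynamics become approximately linear; the fit-and-roll-forward step above then applies in that lifted space.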


LAGOON: An Analysis Tool for Open Source Communities

At the Mining Software Repositories (MSR2022) conference in May, we presented our LAGOON tool, which resulted from the DARPA SocialCyber AIE, and led a discussion session on reducing the complexity of machine learning. LAGOON provides a comprehensive platform for analyzing and investigating open-source software (OSS) communities for potentially malicious contributors. This is accomplished by ingesting multiple types […]


Should It Be Easier to Trust Machines or Harder to Trust Humans?

This blog post is derived from a presentation given on 2021-11-12 at a workshop for the University of Southern California's Center for Autonomy and Artificial Intelligence. Black-box machine learning (ML) methods, often criticized as difficult to explain, can derive results with an accuracy that matches or exceeds human ability on real-world tasks. This has been demonstrated in […]


Trustworthy Data Integration: Machine Learning to Expose Financial Corruption

The world was taken by storm when the International Consortium of Investigative Journalists (ICIJ), along with other media bodies, released millions of documents exposing financial chicanery and political corruption. The leaks detailed how prominent people, such as Icelandic Prime Minister Sigmundur Davíð Gunnlaugsson, used offshore entities for illegal activities. Perhaps the most famous of these […]


Providing Safety and Verification for Learning-Enabled Cyber-Physical Systems

Machine learning has revolutionized cyber-physical systems (CPS) in multiple industries – in the air, on land, and in the deep sea. And yet, verifying and assuring the safety of advanced machine learning is difficult for the following reasons: State-Space Explosion: Autonomous systems are characteristically adaptive, intelligent, and/or may incorporate learning capabilities. Unpredictable Environments: The […]
