LAGOON: An Analysis Tool for Open Source Communities

At the Mining Software Repositories (MSR 2022) conference in May, we presented our LAGOON tool, which resulted from the DARPA SocialCyber AIE, and led a discussion session on reducing the complexity of machine learning. LAGOON provides a comprehensive platform for analyzing and investigating open-source software (OSS) communities for potentially malicious contributors. This is accomplished by ingesting multiple types […]


Should It Be Easier to Trust Machines or Harder to Trust Humans?

This blog post is derived from a presentation given on 2021-11-12 at a workshop for the University of Southern California’s Center for Autonomy and Artificial Intelligence. Black-box machine learning (ML) methods, often criticized as difficult to explain, can achieve accuracy that matches or exceeds human ability on real-world tasks. This has been demonstrated in […]


Trustworthy Data Integration: Machine Learning to Expose Financial Corruption

The world was taken by storm when the International Consortium of Investigative Journalists (ICIJ), along with other media bodies, released millions of documents exposing financial chicanery and political corruption. The leaks detailed how prominent people, such as Icelandic Prime Minister Sigmundur Davíð Gunnlaugsson, used offshore entities for illegal activities. Perhaps the most famous of these […]


Providing Safety and Verification for Learning-Enabled Cyber-Physical Systems

  • Matthew Clark

Machine learning has revolutionized cyber-physical systems (CPS) in multiple industries – in the air, on land, and in the deep sea. And yet, verifying and assuring the safety of advanced machine learning is difficult for several reasons. State-space explosion: autonomous systems are characteristically adaptive, intelligent, and/or may incorporate learning capabilities. Unpredictable environments: The […]
