There has been a lot of chatter recently about large language models, including GPT-4 and LLaMA. At Galois, we have been experimenting with GPT-4, the most capable large language model currently available. One group of intriguing results that I am excited to present is in the creation of SAWScript memory safety proofs. Using a very simple […]
While most engineers and scientists join Galois to be part of a company that conducts groundbreaking research, for our unique culture of collaboration, or for the great benefits and work-life balance, there’s a lesser-known but equally exciting perk of working at Galois: participating in the creation of spinouts. Throughout my time as a research engineer […]
We’ve all seen it—a couple on a date, politicians, friends, or colleagues talking right past each other, trapped in a moment of profound misunderstanding over the meaning of a single word. For me, that moment came when my partner, a New Yorker through and through, told me, a Midwesterner, to take “the next left” while […]
Thus far in our ongoing series on artificial intelligence we’ve spoken in depth on questions of trust, human perception, and limitations of generative models. We have focused specifically on large language models (LLMs), due in part to their recent successes and media attention. We’ve explored questions of data, testing, and broad model implications. However, LLMs […]
“We assume that the neuron is the basic functional unit, but that might be wrong. It might be that thinking of the neuron as the basic functional unit of the brain is similar to thinking of the molecule as the basic functional unit of the car, and that is a horrendous mistake.” – John Searle […]
The public release of ChatGPT and DALL-E 2 radically changed our expectations for the near future of AI technologies. Given the demonstrated capability of large generative models (LGMs), the ways in which they immediately captured public imagination, and the level of publicized planned capital investment, we can anticipate rapid integration of these models into current […]
Researchers at Galois have developed DLKoopman – an open-source software tool that uses machine learning to model and predict the behavior of complex, difficult-to-analyze systems. DLKoopman models a system from limited data, and then predicts how it is going to behave under unknown, often unmeasurable conditions, such as the pressure on a submarine at unknown […]
At the Mining Software Repositories (MSR 2022) conference in May, we presented our LAGOON tool, which resulted from the DARPA SocialCyber AIE program, and led a discussion session on reducing the complexity of machine learning. LAGOON provides a comprehensive platform for analyzing and investigating open-source software (OSS) communities for potentially malicious contributors. This is accomplished by ingesting multiple types […]
This blog post is derived from a presentation given on November 12, 2021, at a workshop for the University of Southern California’s Center for Autonomy and Artificial Intelligence. Black-box machine learning (ML) methods, often criticized as difficult to explain, can derive results with an accuracy that matches or exceeds human ability on real-world tasks. This has been demonstrated in […]
The world was taken by storm when the International Consortium of Investigative Journalists (ICIJ), along with other media bodies, released millions of documents exposing financial chicanery and political corruption. The leaks detailed how prominent people, such as Icelandic Prime Minister Sigmundur Davíð Gunnlaugsson, used offshore entities for illegal activities. Perhaps the most famous of these […]